diff --git "a/data/python/train.jsonl" "b/data/python/train.jsonl"
new file mode 100644
--- /dev/null
+++ "b/data/python/train.jsonl"
@@ -0,0 +1,241 @@
+{"url": "https://docs.python.org/3/c-api/frame.html", "title": "Frame Objects", "content": "Frame Objects\u00b6\n-\ntype PyFrameObject\u00b6\n- Part of the Limited API (as an opaque struct).\nThe C structure of the objects used to describe frame objects.\nThere are no public members in this structure.\nChanged in version 3.11: The members of this structure were removed from the public C API. Refer to the What\u2019s New entry for details.\nThe PyEval_GetFrame()\nand PyThreadState_GetFrame()\nfunctions\ncan be used to get a frame object.\nSee also Reflection.\n-\nPyTypeObject PyFrame_Type\u00b6\nThe type of frame objects. It is the same object as\ntypes.FrameType\nin the Python layer.\nChanged in version 3.11: Previously, this type was only available after including <frameobject.h>.\n-\nPyFrameObject *PyFrame_New(PyThreadState *tstate, PyCodeObject *code, PyObject *globals, PyObject *locals)\u00b6\nCreate a new frame object. This function returns a strong reference to the new frame object on success, and returns\nNULL\nwith an exception set on failure.\n-\nint PyFrame_Check(PyObject *obj)\u00b6\nReturn non-zero if obj is a frame object.\nChanged in version 3.11: Previously, this function was only available after including <frameobject.h>.\n-\nPyFrameObject *PyFrame_GetBack(PyFrameObject *frame)\u00b6\n- Return value: New reference.\nGet the frame\u2019s next outer frame.\nReturn a strong reference, or\nNULL\nif frame has no outer frame.\nAdded in version 3.9.\n-\nPyObject *PyFrame_GetBuiltins(PyFrameObject *frame)\u00b6\n- Return value: New reference.\nGet the frame\u2019s\nf_builtins\nattribute.\nReturn a strong reference. The result cannot be\nNULL\n.\nAdded in version 3.11.\n-\nPyCodeObject *PyFrame_GetCode(PyFrameObject *frame)\u00b6\n- Return value: New reference. 
Part of the Stable ABI since version 3.10.\nGet the frame\u2019s code object.\nReturn a strong reference.\nThe result (frame code) cannot be\nNULL\n.\nAdded in version 3.9.\n-\nPyObject *PyFrame_GetGenerator(PyFrameObject *frame)\u00b6\n- Return value: New reference.\nGet the generator, coroutine, or async generator that owns this frame, or\nNULL\nif this frame is not owned by a generator. Does not raise an exception, even if the return value is\nNULL\n.\nReturn a strong reference, or\nNULL\n.\nAdded in version 3.11.\n-\nPyObject *PyFrame_GetGlobals(PyFrameObject *frame)\u00b6\n- Return value: New reference.\nGet the frame\u2019s\nf_globals\nattribute.\nReturn a strong reference. The result cannot be\nNULL\n.\nAdded in version 3.11.\n-\nint PyFrame_GetLasti(PyFrameObject *frame)\u00b6\nGet the frame\u2019s\nf_lasti\nattribute.\nReturns -1 if\nframe.f_lasti\nis\nNone\n.\nAdded in version 3.11.\n-\nPyObject *PyFrame_GetVar(PyFrameObject *frame, PyObject *name)\u00b6\n- Return value: New reference.\nGet the value of the variable name in frame.\nReturn a strong reference to the variable value on success.\nRaise\nNameError\nand return\nNULL\nif the variable does not exist.\nRaise an exception and return\nNULL\non error.\nname must be an instance of\nstr\n.\nAdded in version 3.12.\n-\nPyObject *PyFrame_GetVarString(PyFrameObject *frame, const char *name)\u00b6\n- Return value: New reference.\nSimilar to\nPyFrame_GetVar()\n, but the variable name is a C string encoded in UTF-8.\nAdded in version 3.12.\n-\nPyObject *PyFrame_GetLocals(PyFrameObject *frame)\u00b6\n- Return value: New reference.\nGet the frame\u2019s\nf_locals\nattribute. If the frame refers to an optimized scope, this returns a write-through proxy object that allows modifying the locals. 
In all other cases (classes, modules,\nexec()\n,\neval()\n) it returns the mapping representing the frame locals directly (as described for\nlocals()\n).\nReturn a strong reference.\nAdded in version 3.11.\nChanged in version 3.13: As part of PEP 667, return an instance of\nPyFrameLocalsProxy_Type\n.\n-\nint PyFrame_GetLineNumber(PyFrameObject *frame)\u00b6\n- Part of the Stable ABI since version 3.10.\nReturn the line number that frame is currently executing.\nFrame Locals Proxies\u00b6\nAdded in version 3.13.\nThe f_locals\nattribute on a frame object\nis an instance of a \u201cframe-locals proxy\u201d. The proxy object exposes a\nwrite-through view of the underlying locals dictionary for the frame. This\nensures that the variables exposed by f_locals\nare always up to date with\nthe live local variables in the frame itself.\nSee PEP 667 for more information.\n-\nPyTypeObject PyFrameLocalsProxy_Type\u00b6\nThe type of frame\nlocals()\nproxy objects.\nLegacy Local Variable APIs\u00b6\nThese APIs are soft deprecated. As of Python 3.13, they do nothing. They exist solely for backwards compatibility.\n-\nvoid PyFrame_LocalsToFast(PyFrameObject *f, int clear)\u00b6\nThis function is soft deprecated and does nothing.\nPrior to Python 3.13, this function would copy the\nf_locals\nattribute of f to the internal \u201cfast\u201d array of local variables, allowing changes in frame objects to be visible to the interpreter. 
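At the Python level, the same frame and f_locals machinery is reachable through sys._getframe(). The sketch below reads a local variable out of the calling frame; the helper names caller_local and demo are illustrative, not part of any API:

```python
import sys

def caller_local(name):
    """Fetch a local variable from the calling frame's f_locals mapping."""
    frame = sys._getframe(1)        # the caller's frame object
    try:
        return frame.f_locals[name]
    finally:
        del frame                   # drop the frame reference promptly

def demo():
    secret = 42
    return caller_local("secret")

print(demo())                       # prints 42
```

On Python 3.13+, f_locals on a running frame is the write-through proxy described above (PEP 667); on earlier versions it is a snapshot, so reads like this work everywhere but writes only propagate on 3.13+.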
If clear was true, this function would process variables that were unset in the locals dictionary.\nChanged in version 3.13: This function now does nothing.\n-\nvoid PyFrame_FastToLocals(PyFrameObject *f)\u00b6\nThis function is soft deprecated and does nothing.\nPrior to Python 3.13, this function would copy the internal \u201cfast\u201d array of local variables (which is used by the interpreter) to the\nf_locals\nattribute of f, allowing changes in local variables to be visible to frame objects.\nChanged in version 3.13: This function now does nothing.\n-\nint PyFrame_FastToLocalsWithError(PyFrameObject *f)\u00b6\nThis function is soft deprecated and does nothing.\nPrior to Python 3.13, this function was similar to\nPyFrame_FastToLocals()\n, but would return\n0\non success, and\n-1\nwith an exception set on failure.\nChanged in version 3.13: This function now does nothing.\nInternal Frames\u00b6\nUnless using PEP 523, you will not need this.\n-\nstruct _PyInterpreterFrame\u00b6\nThe interpreter\u2019s internal frame representation.\nAdded in version 3.11.\n-\nPyObject *PyUnstable_InterpreterFrame_GetCode(struct _PyInterpreterFrame *frame)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nReturn a strong reference to the code object for the frame.\nAdded in version 3.12.\n-\nint PyUnstable_InterpreterFrame_GetLasti(struct _PyInterpreterFrame *frame)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nReturn the byte offset into the last executed instruction.\nAdded in version 3.12.\n-\nint PyUnstable_InterpreterFrame_GetLine(struct _PyInterpreterFrame *frame)\u00b6\n- This is Unstable API. 
It may change without warning in minor releases.\nReturn the currently executing line number, or -1 if there is no line number.\nAdded in version 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1578} +{"url": "https://docs.python.org/3/tutorial/whatnow.html", "title": "What Now?", "content": "13. What Now?\u00b6\nReading this tutorial has probably reinforced your interest in using Python \u2014 you should be eager to apply Python to solving your real-world problems. Where should you go to learn more?\nThis tutorial is part of Python\u2019s documentation set. Some other documents in the set are:\n-\nYou should browse through this manual, which gives complete (though terse) reference material about types, functions, and the modules in the standard library. The standard Python distribution includes a lot of additional code. There are modules to read Unix mailboxes, retrieve documents via HTTP, generate random numbers, parse command-line options, compress data, and many other tasks. Skimming through the Library Reference will give you an idea of what\u2019s available.\nInstalling Python Modules explains how to install additional modules written by other Python users.\nThe Python Language Reference: A detailed explanation of Python\u2019s syntax and semantics. It\u2019s heavy reading, but is useful as a complete guide to the language itself.\nMore Python resources:\nhttps://www.python.org: The major Python website. It contains code, documentation, and pointers to Python-related pages around the web.\nhttps://docs.python.org: Fast access to Python\u2019s documentation.\nhttps://pypi.org: The Python Package Index, previously also nicknamed the Cheese Shop [1], is an index of user-created Python modules that are available for download. 
Once you begin releasing code, you can register it here so that others can find it.\nhttps://code.activestate.com/recipes/langs/python/: The Python Cookbook is a sizable collection of code examples, larger modules, and useful scripts. Particularly notable contributions are collected in a book also titled Python Cookbook (O\u2019Reilly & Associates, ISBN 0-596-00797-3.)\nhttps://pyvideo.org collects links to Python-related videos from conferences and user-group meetings.\nhttps://scipy.org: The Scientific Python project includes modules for fast array computations and manipulations plus a host of packages for such things as linear algebra, Fourier transforms, non-linear solvers, random number distributions, statistical analysis and the like.\nFor Python-related questions and problem reports, you can post to the newsgroup comp.lang.python, or send them to the mailing list at python-list@python.org. The newsgroup and mailing list are gatewayed, so messages posted to one will automatically be forwarded to the other. There are hundreds of postings a day, asking (and answering) questions, suggesting new features, and announcing new modules. Mailing list archives are available at https://mail.python.org/pipermail/.\nBefore posting, be sure to check the list of Frequently Asked Questions (also called the FAQ). The FAQ answers many of the questions that come up again and again, and may already contain the solution for your problem.\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 716} +{"url": "https://docs.python.org/3/tutorial/stdlib2.html", "title": "Brief Tour of the Standard Library \u2014 Part II", "content": "11. Brief Tour of the Standard Library \u2014 Part II\u00b6\nThis second tour covers more advanced modules that support professional programming needs. These modules rarely occur in small scripts.\n11.1. 
Output Formatting\u00b6\nThe reprlib\nmodule provides a version of repr()\ncustomized for\nabbreviated displays of large or deeply nested containers:\n>>> import reprlib\n>>> reprlib.repr(set('supercalifragilisticexpialidocious'))\n\"{'a', 'c', 'd', 'e', 'f', 'g', ...}\"\nThe pprint\nmodule offers more sophisticated control over printing both\nbuilt-in and user defined objects in a way that is readable by the interpreter.\nWhen the result is longer than one line, the \u201cpretty printer\u201d adds line breaks\nand indentation to more clearly reveal data structure:\n>>> import pprint\n>>> t = [[[['black', 'cyan'], 'white', ['green', 'red']], [['magenta',\n... 'yellow'], 'blue']]]\n...\n>>> pprint.pprint(t, width=30)\n[[[['black', 'cyan'],\n'white',\n['green', 'red']],\n[['magenta', 'yellow'],\n'blue']]]\nThe textwrap\nmodule formats paragraphs of text to fit a given screen\nwidth:\n>>> import textwrap\n>>> doc = \"\"\"The wrap() method is just like fill() except that it returns\n... a list of strings instead of one big string with newlines to separate\n... the wrapped lines.\"\"\"\n...\n>>> print(textwrap.fill(doc, width=40))\nThe wrap() method is just like fill()\nexcept that it returns a list of strings\ninstead of one big string with newlines\nto separate the wrapped lines.\nThe locale\nmodule accesses a database of culture specific data formats.\nThe grouping attribute of locale\u2019s format function provides a direct way of\nformatting numbers with group separators:\n>>> import locale\n>>> locale.setlocale(locale.LC_ALL, 'English_United States.1252')\n'English_United States.1252'\n>>> conv = locale.localeconv() # get a mapping of conventions\n>>> x = 1234567.8\n>>> locale.format_string(\"%d\", x, grouping=True)\n'1,234,567'\n>>> locale.format_string(\"%s%.*f\", (conv['currency_symbol'],\n... conv['frac_digits'], x), grouping=True)\n'$1,234,567.80'\n11.2. 
Templating\u00b6\nThe string\nmodule includes a versatile Template\nclass\nwith a simplified syntax suitable for editing by end-users. This allows users\nto customize their applications without having to alter the application.\nThe format uses placeholder names formed by $\nwith valid Python identifiers\n(alphanumeric characters and underscores). Surrounding the placeholder with\nbraces allows it to be followed by more alphanumeric letters with no intervening\nspaces. Writing $$\ncreates a single escaped $\n:\n>>> from string import Template\n>>> t = Template('${village}folk send $$10 to $cause.')\n>>> t.substitute(village='Nottingham', cause='the ditch fund')\n'Nottinghamfolk send $10 to the ditch fund.'\nThe substitute()\nmethod raises a KeyError\nwhen a\nplaceholder is not supplied in a dictionary or a keyword argument. For\nmail-merge style applications, user supplied data may be incomplete and the\nsafe_substitute()\nmethod may be more appropriate \u2014\nit will leave placeholders unchanged if data is missing:\n>>> t = Template('Return the $item to $owner.')\n>>> d = dict(item='unladen swallow')\n>>> t.substitute(d)\nTraceback (most recent call last):\n...\nKeyError: 'owner'\n>>> t.safe_substitute(d)\n'Return the unladen swallow to $owner.'\nTemplate subclasses can specify a custom delimiter. For example, a batch renaming utility for a photo browser may elect to use percent signs for placeholders such as the current date, image sequence number, or file format:\n>>> import time, os.path\n>>> photofiles = ['img_1074.jpg', 'img_1076.jpg', 'img_1077.jpg']\n>>> class BatchRename(Template):\n... delimiter = '%'\n...\n>>> fmt = input('Enter rename style (%d-date %n-seqnum %f-format): ')\nEnter rename style (%d-date %n-seqnum %f-format): Ashley_%n%f\n>>> t = BatchRename(fmt)\n>>> date = time.strftime('%d%b%y')\n>>> for i, filename in enumerate(photofiles):\n... base, ext = os.path.splitext(filename)\n... newname = t.substitute(d=date, n=i, f=ext)\n... 
print('{0} --> {1}'.format(filename, newname))\nimg_1074.jpg --> Ashley_0.jpg\nimg_1076.jpg --> Ashley_1.jpg\nimg_1077.jpg --> Ashley_2.jpg\nAnother application for templating is separating program logic from the details of multiple output formats. This makes it possible to substitute custom templates for XML files, plain text reports, and HTML web reports.\n11.3. Working with Binary Data Record Layouts\u00b6\nThe struct\nmodule provides pack()\nand\nunpack()\nfunctions for working with variable length binary\nrecord formats. The following example shows\nhow to loop through header information in a ZIP file without using the\nzipfile\nmodule. Pack codes \"H\"\nand \"I\"\nrepresent two and four\nbyte unsigned numbers respectively. The \"<\"\nindicates that they are\nstandard size and in little-endian byte order:\nimport struct\nwith open('myfile.zip', 'rb') as f:\n    data = f.read()\nstart = 0\nfor i in range(3):                      # show the first 3 file headers\n    start += 14\n    fields = struct.unpack('<IIIHH', data[start:start+16])\n    crc32, comp_size, uncomp_size, filename_size, extra_size = fields\n    start += 16\n    filename = data[start:start+filename_size]\n    start += filename_size\n    extra = data[start:start+extra_size]\n    print(filename, hex(crc32), comp_size, uncomp_size)\n    start += extra_size + comp_size     # skip to the next header\n11.6. Weak References\u00b6\nPython does automatic memory management (reference counting for most objects and garbage collection to eliminate cycles). The memory is freed shortly after the last reference to it has been eliminated.\nThis approach works fine for most applications but occasionally there is a need to track objects only as long as they are being used by something else. Unfortunately, just tracking them creates a reference that makes them permanent. The weakref\nmodule provides tools for tracking objects without creating a reference. When the object is no longer needed, it is automatically removed from a weakref table and a callback is triggered for weakref objects. Typical applications include caching objects that are expensive to create:\n>>> import weakref, gc\n>>> class A:\n...     def __init__(self, value):\n...         self.value = value\n...     def __repr__(self):\n...         return str(self.value)\n...\n>>> a = A(10)                   # create a reference\n>>> d = weakref.WeakValueDictionary()\n>>> d['primary'] = a            # does not create a reference\n>>> d['primary']                # fetch the object if it is still alive\n10\n>>> del a                       # remove the one reference\n>>> gc.collect()                # run garbage collection right away\n0\n>>> d['primary']                # entry was automatically removed\nTraceback (most recent call last):\n  File \"<stdin>\", line 1, in <module>\n    d['primary']                # entry was automatically removed\n  File \"C:/python314/lib/weakref.py\", line 46, in __getitem__\n    o = self.data[key]()\nKeyError: 'primary'\n11.7. Tools for Working with Lists\u00b6\nMany data structure needs can be met with the built-in list type. 
However, sometimes there is a need for alternative implementations with different performance trade-offs.\nThe array\nmodule provides an array\nobject that is like\na list that stores only homogeneous data and stores it more compactly. The\nfollowing example shows an array of numbers stored as two byte unsigned binary\nnumbers (typecode \"H\"\n) rather than the usual 16 bytes per entry for regular\nlists of Python int objects:\n>>> from array import array\n>>> a = array('H', [4000, 10, 700, 22222])\n>>> sum(a)\n26932\n>>> a[1:3]\narray('H', [10, 700])\nThe collections\nmodule provides a deque\nobject\nthat is like a list with faster appends and pops from the left side but slower\nlookups in the middle. These objects are well suited for implementing queues\nand breadth first tree searches:\n>>> from collections import deque\n>>> d = deque([\"task1\", \"task2\", \"task3\"])\n>>> d.append(\"task4\")\n>>> print(\"Handling\", d.popleft())\nHandling task1\nunsearched = deque([starting_node])\ndef breadth_first_search(unsearched):\n    node = unsearched.popleft()\n    for m in gen_moves(node):\n        if is_goal(m):\n            return m\n        unsearched.append(m)\nIn addition to alternative list implementations, the library also offers other\ntools such as the bisect\nmodule with functions for manipulating sorted\nlists:\n>>> import bisect\n>>> scores = [(100, 'perl'), (200, 'tcl'), (400, 'lua'), (500, 'python')]\n>>> bisect.insort(scores, (300, 'ruby'))\n>>> scores\n[(100, 'perl'), (200, 'tcl'), (300, 'ruby'), (400, 'lua'), (500, 'python')]\nThe heapq\nmodule provides functions for implementing heaps based on\nregular lists. The lowest valued entry is always kept at position zero. 
This\nis useful for applications which repeatedly access the smallest element but do\nnot want to run a full list sort:\n>>> from heapq import heapify, heappop, heappush\n>>> data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0]\n>>> heapify(data) # rearrange the list into heap order\n>>> heappush(data, -5) # add a new entry\n>>> [heappop(data) for i in range(3)] # fetch the three smallest entries\n[-5, 0, 1]\n11.8. Decimal Floating-Point Arithmetic\u00b6\nThe decimal\nmodule offers a Decimal\ndatatype for\ndecimal floating-point arithmetic. Compared to the built-in float\nimplementation of binary floating point, the class is especially helpful for\nfinancial applications and other uses which require exact decimal representation,\ncontrol over precision,\ncontrol over rounding to meet legal or regulatory requirements,\ntracking of significant decimal places, or\napplications where the user expects the results to match calculations done by hand.\nFor example, calculating a 5% tax on a 70 cent phone charge gives different results in decimal floating point and binary floating point. The difference becomes significant if the results are rounded to the nearest cent:\n>>> from decimal import *\n>>> round(Decimal('0.70') * Decimal('1.05'), 2)\nDecimal('0.74')\n>>> round(.70 * 1.05, 2)\n0.73\nThe Decimal\nresult keeps a trailing zero, automatically\ninferring four place significance from multiplicands with two place\nsignificance. 
Decimal reproduces mathematics as done by hand and avoids\nissues that can arise when binary floating point cannot exactly represent\ndecimal quantities.\nExact representation enables the Decimal\nclass to perform\nmodulo calculations and equality tests that are unsuitable for binary floating\npoint:\n>>> Decimal('1.00') % Decimal('.10')\nDecimal('0.00')\n>>> 1.00 % 0.10\n0.09999999999999995\n>>> sum([Decimal('0.1')]*10) == Decimal('1.0')\nTrue\n>>> 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 == 1.0\nFalse\nThe decimal\nmodule provides arithmetic with as much precision as needed:\n>>> getcontext().prec = 36\n>>> Decimal(1) / Decimal(7)\nDecimal('0.142857142857142857142857142857142857')", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3436} +{"url": "https://docs.python.org/3/c-api/long.html", "title": "Integer Objects", "content": "Integer Objects\u00b6\nAll integers are implemented as \u201clong\u201d integer objects of arbitrary size.\nOn error, most PyLong_As*\nAPIs return (return type)-1\nwhich cannot be\ndistinguished from a number. Use PyErr_Occurred()\nto disambiguate.\n-\ntype PyLongObject\u00b6\n- Part of the Limited API (as an opaque struct).\nThis subtype of\nPyObject\nrepresents a Python integer object.\n-\nPyTypeObject PyLong_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python integer type. This is the same object as\nint\nin the Python layer.\n-\nint PyLong_Check(PyObject *p)\u00b6\nReturn true if its argument is a\nPyLongObject\nor a subtype of\nPyLongObject\n. 
This function always succeeds.\n-\nint PyLong_CheckExact(PyObject *p)\u00b6\nReturn true if its argument is a\nPyLongObject\n, but not a subtype of\nPyLongObject\n. This function always succeeds.\n-\nPyObject *PyLong_FromLong(long v)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new\nPyLongObject\nobject from v, or\nNULL\non failure.\nCPython implementation detail: CPython keeps an array of integer objects for all integers between\n-5\nand\n256\n. When you create an int in that range you actually just get back a reference to the existing object.\n-\nPyObject *PyLong_FromUnsignedLong(unsigned long v)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new\nPyLongObject\nobject from a C unsigned long, or\nNULL\non failure.\n-\nPyObject *PyLong_FromSsize_t(Py_ssize_t v)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new\nPyLongObject\nobject from a C\nPy_ssize_t\n, or\nNULL\non failure.\n-\nPyObject *PyLong_FromSize_t(size_t v)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new\nPyLongObject\nobject from a C\nsize_t\n, or\nNULL\non failure.\n-\nPyObject *PyLong_FromLongLong(long long v)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new\nPyLongObject\nobject from a C long long, or\nNULL\non failure.\n-\nPyObject *PyLong_FromInt32(int32_t value)\u00b6\n-\nPyObject *PyLong_FromInt64(int64_t value)\u00b6\n- Part of the Stable ABI since version 3.14.\nReturn a new\nPyLongObject\nobject from a signed C int32_t or int64_t, or\nNULL\nwith an exception set on failure.\nAdded in version 3.14.\n-\nPyObject *PyLong_FromUnsignedLongLong(unsigned long long v)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nReturn a new\nPyLongObject\nobject from a C unsigned long long, orNULL\non failure.\n-\nPyObject *PyLong_FromUInt32(uint32_t value)\u00b6\n-\nPyObject *PyLong_FromUInt64(uint64_t value)\u00b6\n- Part of the Stable ABI since version 3.14.\nReturn a new\nPyLongObject\nobject from an unsigned C uint32_t or uint64_t, orNULL\nwith an exception set on failure.Added in version 3.14.\n-\nPyObject *PyLong_FromDouble(double v)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new\nPyLongObject\nobject from the integer part of v, orNULL\non failure.\n-\nPyObject *PyLong_FromString(const char *str, char **pend, int base)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new\nPyLongObject\nbased on the string value in str, which is interpreted according to the radix in base, orNULL\non failure. If pend is non-NULL\n, *pend will point to the end of str on success or to the first character that could not be processed on error. If base is0\n, str is interpreted using the Integer literals definition; in this case, leading zeros in a non-zero decimal number raises aValueError\n. If base is not0\n, it must be between2\nand36\n, inclusive. Leading and trailing whitespace and single underscores after a base specifier and between digits are ignored. If there are no digits or str is not NULL-terminated following the digits and trailing whitespace,ValueError\nwill be raised.See also\nPyLong_AsNativeBytes()\nandPyLong_FromNativeBytes()\nfunctions can be used to convert aPyLongObject\nto/from an array of bytes in base256\n.\n-\nPyObject *PyLong_FromUnicodeObject(PyObject *u, int base)\u00b6\n- Return value: New reference.\nConvert a sequence of Unicode digits in the string u to a Python integer value.\nAdded in version 3.3.\n-\nPyObject *PyLong_FromVoidPtr(void *p)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a Python integer from the pointer p. 
The pointer value can be retrieved from the resulting value using\nPyLong_AsVoidPtr()\n.\n-\nPyObject *PyLong_FromNativeBytes(const void *buffer, size_t n_bytes, int flags)\u00b6\n- Part of the Stable ABI since version 3.14.\nCreate a Python integer from the value contained in the first n_bytes of buffer, interpreted as a two\u2019s-complement signed number.\nflags are as for\nPyLong_AsNativeBytes()\n. Passing-1\nwill select the native endian that CPython was compiled with and assume that the most-significant bit is a sign bit. PassingPy_ASNATIVEBYTES_UNSIGNED_BUFFER\nwill produce the same result as callingPyLong_FromUnsignedNativeBytes()\n. Other flags are ignored.Added in version 3.13.\n-\nPyObject *PyLong_FromUnsignedNativeBytes(const void *buffer, size_t n_bytes, int flags)\u00b6\n- Part of the Stable ABI since version 3.14.\nCreate a Python integer from the value contained in the first n_bytes of buffer, interpreted as an unsigned number.\nflags are as for\nPyLong_AsNativeBytes()\n. Passing-1\nwill select the native endian that CPython was compiled with and assume that the most-significant bit is not a sign bit. Flags other than endian are ignored.Added in version 3.13.\n-\nPyLong_FromPid(pid)\u00b6\nMacro for creating a Python integer from a process identifier.\nThis can be defined as an alias to\nPyLong_FromLong()\norPyLong_FromLongLong()\n, depending on the size of the system\u2019s PID type.Added in version 3.2.\n-\nlong PyLong_AsLong(PyObject *obj)\u00b6\n- Part of the Stable ABI.\nReturn a C long representation of obj. If obj is not an instance of\nPyLongObject\n, first call its__index__()\nmethod (if present) to convert it to aPyLongObject\n.Raise\nOverflowError\nif the value of obj is out of range for a long.Returns\n-1\non error. 
Use\nPyErr_Occurred()\nto disambiguate.\nChanged in version 3.8: Use\n__index__()\nif available.\nChanged in version 3.10: This function will no longer use\n__int__()\n.\n-\nlong PyLong_AS_LONG(PyObject *obj)\u00b6\nA soft deprecated alias. Exactly equivalent to the preferred\nPyLong_AsLong()\n. In particular, it can fail with\nOverflowError\nor another exception.\nDeprecated since version 3.14: The function is soft deprecated.\n-\nint PyLong_AsInt(PyObject *obj)\u00b6\n- Part of the Stable ABI since version 3.13.\nSimilar to\nPyLong_AsLong()\n, but store the result in a C int instead of a C long.\nAdded in version 3.13.\n-\nlong PyLong_AsLongAndOverflow(PyObject *obj, int *overflow)\u00b6\n- Part of the Stable ABI.\nReturn a C long representation of obj. If obj is not an instance of\nPyLongObject\n, first call its\n__index__()\nmethod (if present) to convert it to a\nPyLongObject\n.\nIf the value of obj is greater than\nLONG_MAX\nor less than\nLONG_MIN\n, set *overflow to\n1\nor\n-1\n, respectively, and return\n-1\n; otherwise, set *overflow to\n0\n. If any other exception occurs set *overflow to\n0\nand return\n-1\nas usual.\nReturns\n-1\non error. Use\nPyErr_Occurred()\nto disambiguate.\nChanged in version 3.8: Use\n__index__()\nif available.\nChanged in version 3.10: This function will no longer use\n__int__()\n.\n-\nlong long PyLong_AsLongLong(PyObject *obj)\u00b6\n- Part of the Stable ABI.\nReturn a C long long representation of obj. If obj is not an instance of\nPyLongObject\n, first call its\n__index__()\nmethod (if present) to convert it to a\nPyLongObject\n.\nRaise\nOverflowError\nif the value of obj is out of range for a long long.\nReturns\n-1\non error. 
UsePyErr_Occurred()\nto disambiguate.Changed in version 3.8: Use\n__index__()\nif available.Changed in version 3.10: This function will no longer use\n__int__()\n.\n-\nlong long PyLong_AsLongLongAndOverflow(PyObject *obj, int *overflow)\u00b6\n- Part of the Stable ABI.\nReturn a C long long representation of obj. If obj is not an instance of\nPyLongObject\n, first call its__index__()\nmethod (if present) to convert it to aPyLongObject\n.If the value of obj is greater than\nLLONG_MAX\nor less thanLLONG_MIN\n, set *overflow to1\nor-1\n, respectively, and return-1\n; otherwise, set *overflow to0\n. If any other exception occurs set *overflow to0\nand return-1\nas usual.Returns\n-1\non error. UsePyErr_Occurred()\nto disambiguate.Added in version 3.2.\nChanged in version 3.8: Use\n__index__()\nif available.Changed in version 3.10: This function will no longer use\n__int__()\n.\n-\nPy_ssize_t PyLong_AsSsize_t(PyObject *pylong)\u00b6\n- Part of the Stable ABI.\nReturn a C\nPy_ssize_t\nrepresentation of pylong. pylong must be an instance ofPyLongObject\n.Raise\nOverflowError\nif the value of pylong is out of range for aPy_ssize_t\n.Returns\n-1\non error. UsePyErr_Occurred()\nto disambiguate.\n-\nunsigned long PyLong_AsUnsignedLong(PyObject *pylong)\u00b6\n- Part of the Stable ABI.\nReturn a C unsigned long representation of pylong. pylong must be an instance of\nPyLongObject\n.Raise\nOverflowError\nif the value of pylong is out of range for a unsigned long.Returns\n(unsigned long)-1\non error. UsePyErr_Occurred()\nto disambiguate.\n-\nsize_t PyLong_AsSize_t(PyObject *pylong)\u00b6\n- Part of the Stable ABI.\nReturn a C\nsize_t\nrepresentation of pylong. pylong must be an instance ofPyLongObject\n.Raise\nOverflowError\nif the value of pylong is out of range for asize_t\n.Returns\n(size_t)-1\non error. 
UsePyErr_Occurred()\nto disambiguate.\n-\nunsigned long long PyLong_AsUnsignedLongLong(PyObject *pylong)\u00b6\n- Part of the Stable ABI.\nReturn a C unsigned long long representation of pylong. pylong must be an instance of\nPyLongObject\n.Raise\nOverflowError\nif the value of pylong is out of range for an unsigned long long.Returns\n(unsigned long long)-1\non error. UsePyErr_Occurred()\nto disambiguate.Changed in version 3.1: A negative pylong now raises\nOverflowError\n, notTypeError\n.\n-\nunsigned long PyLong_AsUnsignedLongMask(PyObject *obj)\u00b6\n- Part of the Stable ABI.\nReturn a C unsigned long representation of obj. If obj is not an instance of\nPyLongObject\n, first call its__index__()\nmethod (if present) to convert it to aPyLongObject\n.If the value of obj is out of range for an unsigned long, return the reduction of that value modulo\nULONG_MAX + 1\n.Returns\n(unsigned long)-1\non error. UsePyErr_Occurred()\nto disambiguate.Changed in version 3.8: Use\n__index__()\nif available.Changed in version 3.10: This function will no longer use\n__int__()\n.\n-\nunsigned long long PyLong_AsUnsignedLongLongMask(PyObject *obj)\u00b6\n- Part of the Stable ABI.\nReturn a C unsigned long long representation of obj. If obj is not an instance of\nPyLongObject\n, first call its__index__()\nmethod (if present) to convert it to aPyLongObject\n.If the value of obj is out of range for an unsigned long long, return the reduction of that value modulo\nULLONG_MAX + 1\n.Returns\n(unsigned long long)-1\non error. 
UsePyErr_Occurred()\nto disambiguate.Changed in version 3.8: Use\n__index__()\nif available.Changed in version 3.10: This function will no longer use\n__int__()\n.\n-\nint PyLong_AsInt32(PyObject *obj, int32_t *value)\u00b6\n-\nint PyLong_AsInt64(PyObject *obj, int64_t *value)\u00b6\n- Part of the Stable ABI since version 3.14.\nSet *value to a signed C int32_t or int64_t representation of obj.\nIf obj is not an instance of\nPyLongObject\n, first call its__index__()\nmethod (if present) to convert it to aPyLongObject\n.If the obj value is out of range, raise an\nOverflowError\n.Set *value and return\n0\non success. Set an exception and return-1\non error.value must not be\nNULL\n.Added in version 3.14.\n-\nint PyLong_AsUInt32(PyObject *obj, uint32_t *value)\u00b6\n-\nint PyLong_AsUInt64(PyObject *obj, uint64_t *value)\u00b6\n- Part of the Stable ABI since version 3.14.\nSet *value to an unsigned C uint32_t or uint64_t representation of obj.\nIf obj is not an instance of\nPyLongObject\n, first call its__index__()\nmethod (if present) to convert it to aPyLongObject\n.If obj is negative, raise a\nValueError\n.If the obj value is out of range, raise an\nOverflowError\n.\nSet *value and return\n0\non success. Set an exception and return-1\non error.value must not be\nNULL\n.Added in version 3.14.\n-\ndouble PyLong_AsDouble(PyObject *pylong)\u00b6\n- Part of the Stable ABI.\nReturn a C double representation of pylong. pylong must be an instance of\nPyLongObject\n.Raise\nOverflowError\nif the value of pylong is out of range for a double.Returns\n-1.0\non error. UsePyErr_Occurred()\nto disambiguate.\n-\nvoid *PyLong_AsVoidPtr(PyObject *pylong)\u00b6\n- Part of the Stable ABI.\nConvert a Python integer pylong to a C void pointer. If pylong cannot be converted, an\nOverflowError\nwill be raised. This is only assured to produce a usable void pointer for values created withPyLong_FromVoidPtr()\n.Returns\nNULL\non error. 
UsePyErr_Occurred()\nto disambiguate.\n-\nPy_ssize_t PyLong_AsNativeBytes(PyObject *pylong, void *buffer, Py_ssize_t n_bytes, int flags)\u00b6\n- Part of the Stable ABI since version 3.14.\nCopy the Python integer value pylong to a native buffer of size n_bytes. The flags can be set to\n-1\nto behave similarly to a C cast, or to values documented below to control the behavior.Returns\n-1\nwith an exception raised on error. This may happen if pylong cannot be interpreted as an integer, or if pylong was negative and thePy_ASNATIVEBYTES_REJECT_NEGATIVE\nflag was set.Otherwise, returns the number of bytes required to store the value. If this is equal to or less than n_bytes, the entire value was copied. All n_bytes of the buffer are written: remaining bytes filled by copies of the sign bit.\nIf the returned value is greater than n_bytes, the value was truncated: as many of the lowest bits of the value as could fit are written, and the higher bits are ignored. This matches the typical behavior of a C-style downcast.\nNote\nOverflow is not considered an error. If the returned value is larger than n_bytes, most significant bits were discarded.\n0\nwill never be returned.Values are always copied as two\u2019s-complement.\nUsage example:\nint32_t value; Py_ssize_t bytes = PyLong_AsNativeBytes(pylong, &value, sizeof(value), -1); if (bytes < 0) { // Failed. A Python exception was set with the reason. return NULL; } else if (bytes <= (Py_ssize_t)sizeof(value)) { // Success! } else { // Overflow occurred, but 'value' contains the truncated // lowest bits of pylong. }\nPassing zero to n_bytes will return the size of a buffer that would be large enough to hold the value. This may be larger than technically necessary, but not unreasonably so. 
If n_bytes=0, buffer may be\nNULL\n.Note\nPassing n_bytes=0 to this function is not an accurate way to determine the bit length of the value.\nTo get at the entire Python value of an unknown size, the function can be called twice: first to determine the buffer size, then to fill it:\n// Ask how much space we need. Py_ssize_t expected = PyLong_AsNativeBytes(pylong, NULL, 0, -1); if (expected < 0) { // Failed. A Python exception was set with the reason. return NULL; } assert(expected != 0); // Impossible per the API definition. uint8_t *bignum = malloc(expected); if (!bignum) { PyErr_SetString(PyExc_MemoryError, \"bignum malloc failed.\"); return NULL; } // Safely get the entire value. Py_ssize_t bytes = PyLong_AsNativeBytes(pylong, bignum, expected, -1); if (bytes < 0) { // Exception has been set. free(bignum); return NULL; } else if (bytes > expected) { // This should not be possible. PyErr_SetString(PyExc_RuntimeError, \"Unexpected bignum truncation after a size check.\"); free(bignum); return NULL; } // The expected success given the above pre-check. // ... use bignum ... free(bignum);\nflags is either\n-1\n(Py_ASNATIVEBYTES_DEFAULTS\n) to select defaults that behave most like a C cast, or a combination of the other flags in the table below. 
Note that-1\ncannot be combined with other flags.Currently,\n-1\ncorresponds toPy_ASNATIVEBYTES_NATIVE_ENDIAN | Py_ASNATIVEBYTES_UNSIGNED_BUFFER\n.Flag\nValue\n-\nPy_ASNATIVEBYTES_DEFAULTS\u00b6\n- Part of the Stable ABI since version 3.14.\n-1\n-\nPy_ASNATIVEBYTES_BIG_ENDIAN\u00b6\n- Part of the Stable ABI since version 3.14.\n0\n-\nPy_ASNATIVEBYTES_LITTLE_ENDIAN\u00b6\n- Part of the Stable ABI since version 3.14.\n1\n-\nPy_ASNATIVEBYTES_NATIVE_ENDIAN\u00b6\n- Part of the Stable ABI since version 3.14.\n3\n-\nPy_ASNATIVEBYTES_UNSIGNED_BUFFER\u00b6\n- Part of the Stable ABI since version 3.14.\n4\n-\nPy_ASNATIVEBYTES_REJECT_NEGATIVE\u00b6\n- Part of the Stable ABI since version 3.14.\n8\n-\nPy_ASNATIVEBYTES_ALLOW_INDEX\u00b6\n- Part of the Stable ABI since version 3.14.\n16\nSpecifying\nPy_ASNATIVEBYTES_NATIVE_ENDIAN\nwill override any other endian flags. Passing2\nis reserved.By default, sufficient buffer will be requested to include a sign bit. For example, when converting 128 with n_bytes=1, the function will return 2 (or more) in order to store a zero sign bit.\nIf\nPy_ASNATIVEBYTES_UNSIGNED_BUFFER\nis specified, a zero sign bit will be omitted from size calculations. This allows, for example, 128 to fit in a single-byte buffer. If the destination buffer is later treated as signed, a positive input value may become negative. Note that the flag does not affect handling of negative values: for those, space for a sign bit is always requested.Specifying\nPy_ASNATIVEBYTES_REJECT_NEGATIVE\ncauses an exception to be set if pylong is negative. Without this flag, negative values will be copied provided there is enough space for at least one sign bit, regardless of whetherPy_ASNATIVEBYTES_UNSIGNED_BUFFER\nwas specified.If\nPy_ASNATIVEBYTES_ALLOW_INDEX\nis specified and a non-integer value is passed, its__index__()\nmethod will be called first. 
This may result in Python code executing and other threads being allowed to run, which could cause changes to other objects or values in use. When flags is -1\n, this option is not set, and non-integer values will raise TypeError\n.Note\nWith the default flags (\n-1\n, or UNSIGNED_BUFFER without REJECT_NEGATIVE), multiple Python integers can map to a single value without overflow. For example, both 255\nand -1\nfit a single-byte buffer and set all its bits. This matches typical C cast behavior. Added in version 3.13.\n-\nPyLong_AsPid(pid)\u00b6\nMacro for converting a Python integer into a process identifier.\nThis can be defined as an alias to\nPyLong_AsLong()\n, PyLong_AsLongLong()\n, or PyLong_AsInt()\n, depending on the size of the system\u2019s PID type. Added in version 3.2.\n-\nint PyLong_GetSign(PyObject *obj, int *sign)\u00b6\nGet the sign of the integer object obj.\nOn success, set *sign to the integer sign (0, -1 or +1 for zero, negative or positive integer, respectively) and return 0.\nOn failure, return -1 with an exception set. This function always succeeds if obj is a\nPyLongObject\nor its subtype. Added in version 3.14.\n-\nint PyLong_IsPositive(PyObject *obj)\u00b6\nCheck if the integer object obj is positive (\nobj > 0\n). If obj is an instance of\nPyLongObject\nor its subtype, return 1\nwhen it\u2019s positive and 0\notherwise. Else set an exception and return -1\n. Added in version 3.14.\n-\nint PyLong_IsNegative(PyObject *obj)\u00b6\nCheck if the integer object obj is negative (\nobj < 0\n). If obj is an instance of\nPyLongObject\nor its subtype, return 1\nwhen it\u2019s negative and 0\notherwise. Else set an exception and return -1\n. Added in version 3.14.\n-\nint PyLong_IsZero(PyObject *obj)\u00b6\nCheck if the integer object obj is zero.\nIf obj is an instance of\nPyLongObject\nor its subtype, return 1\nwhen it\u2019s zero and 0\notherwise. 
Else set an exception and return-1\n.Added in version 3.14.\n-\nPyObject *PyLong_GetInfo(void)\u00b6\n- Part of the Stable ABI.\nOn success, return a read only named tuple, that holds information about Python\u2019s internal representation of integers. See\nsys.int_info\nfor description of individual fields.On failure, return\nNULL\nwith an exception set.Added in version 3.1.\n-\nint PyUnstable_Long_IsCompact(const PyLongObject *op)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nReturn 1 if op is compact, 0 otherwise.\nThis function makes it possible for performance-critical code to implement a \u201cfast path\u201d for small integers. For compact values use\nPyUnstable_Long_CompactValue()\n; for others fall back to aPyLong_As*\nfunction orPyLong_AsNativeBytes()\n.The speedup is expected to be negligible for most users.\nExactly what values are considered compact is an implementation detail and is subject to change.\nAdded in version 3.12.\n-\nPy_ssize_t PyUnstable_Long_CompactValue(const PyLongObject *op)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nIf op is compact, as determined by\nPyUnstable_Long_IsCompact()\n, return its value.Otherwise, the return value is undefined.\nAdded in version 3.12.\nExport API\u00b6\nAdded in version 3.14.\n-\nstruct PyLongLayout\u00b6\nLayout of an array of \u201cdigits\u201d (\u201climbs\u201d in the GMP terminology), used to represent absolute value for arbitrary precision integers.\nUse\nPyLong_GetNativeLayout()\nto get the native layout of Pythonint\nobjects, used internally for integers with \u201cbig enough\u201d absolute value.See also\nsys.int_info\nwhich exposes similar information in Python.-\nuint8_t bits_per_digit\u00b6\nBits per digit. For example, a 15 bit digit means that bits 0-14 contain meaningful information.\n-\nuint8_t digit_size\u00b6\nDigit size in bytes. 
For example, a 15 bit digit will require at least 2 bytes.\n-\nint8_t digits_order\u00b6\nDigits order:\n1\nfor most significant digit first\n-1\nfor least significant digit first\n-\nint8_t digit_endianness\u00b6\nDigit endianness:\n1\nfor most significant byte first (big endian)\n-1\nfor least significant byte first (little endian)\n-\nconst PyLongLayout *PyLong_GetNativeLayout(void)\u00b6\nGet the native layout of Python\nint\nobjects. See the\nPyLongLayout\nstructure. The function must not be called before Python initialization nor after Python finalization. The returned layout is valid until Python is finalized. The layout is the same for all Python sub-interpreters in a process, and so it can be cached.\n-\nstruct PyLongExport\u00b6\nExport of a Python\nint\nobject. There are two cases:\n-\nPy_ssize_t ndigits\u00b6\nNumber of digits in\ndigits\narray. Only valid if digits\nis not NULL\n.\n-\nconst void *digits\u00b6\nRead-only array of unsigned digits. Can be\nNULL\n.\n-\nint PyLong_Export(PyObject *obj, PyLongExport *export_long)\u00b6\nExport a Python\nint\nobject. export_long must point to a\nPyLongExport\nstructure allocated by the caller. It must not be NULL\n. On success, fill in *export_long and return\n0\n. 
On error, set an exception and return -1\n. PyLong_FreeExport()\nmust be called when the export is no longer needed. CPython implementation detail: This function always succeeds if obj is a Python\nint\nobject or a subclass.\n-\nvoid PyLong_FreeExport(PyLongExport *export_long)\u00b6\nRelease the export export_long created by\nPyLong_Export()\n. CPython implementation detail: Calling\nPyLong_FreeExport()\nis optional if export_long->digits is NULL\n.\nPyLongWriter API\u00b6\nThe PyLongWriter\nAPI can be used to import an integer.\nAdded in version 3.14.\n-\nstruct PyLongWriter\u00b6\nA Python\nint\nwriter instance. The instance must be destroyed by\nPyLongWriter_Finish()\nor PyLongWriter_Discard()\n.\n-\nPyLongWriter *PyLongWriter_Create(int negative, Py_ssize_t ndigits, void **digits)\u00b6\nCreate a\nPyLongWriter\n. On success, allocate *digits and return a writer. On error, set an exception and return\nNULL\n. negative is\n1\nif the number is negative, or 0\notherwise. ndigits is the number of digits in the digits array. It must be greater than 0.\ndigits must not be NULL.\nAfter a successful call to this function, the caller should fill in the array of digits digits and then call\nPyLongWriter_Finish()\nto get a Python int\n. The layout of digits is described by PyLong_GetNativeLayout()\n. Digits must be in the range [\n0\n; (1 << bits_per_digit) - 1\n] (where the bits_per_digit\nis the number of bits per digit). Any unused most significant digits must be set to\n0\n. Alternately, call\nPyLongWriter_Discard()\nto destroy the writer instance without creating an int\nobject.\n-\nPyObject *PyLongWriter_Finish(PyLongWriter *writer)\u00b6\n- Return value: New reference.\nFinish a\nPyLongWriter\ncreated by PyLongWriter_Create()\n. On success, return a Python\nint\nobject. 
On error, set an exception and returnNULL\n.The function takes care of normalizing the digits and converts the object to a compact integer if needed.\nThe writer instance and the digits array are invalid after the call.\n-\nvoid PyLongWriter_Discard(PyLongWriter *writer)\u00b6\nDiscard a\nPyLongWriter\ncreated byPyLongWriter_Create()\n.If writer is\nNULL\n, no operation is performed.The writer instance and the digits array are invalid after the call.\nDeprecated API\u00b6\nThese macros are soft deprecated. They describe parameters\nof the internal representation of PyLongObject\ninstances.\nUse PyLong_GetNativeLayout()\ninstead, along with PyLong_Export()\nto read integer data or PyLongWriter\nto write it.\nThese currently use the same layout, but are designed to continue working correctly\neven if CPython\u2019s internal integer representation changes.\n-\nPyLong_SHIFT\u00b6\nThis is equivalent to\nbits_per_digit\nin the output ofPyLong_GetNativeLayout()\n.\n-\nPyLong_BASE\u00b6\nThis is currently equivalent to 1 << PyLong_SHIFT.\n-\nPyLong_MASK\u00b6\nThis is currently equivalent to (1 << PyLong_SHIFT) - 1", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 6201} +{"url": "https://docs.python.org/3/library/uu.html", "title": " \u2014 Encode and decode uuencode files", "content": "uu\n\u2014 Encode and decode uuencode files\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. 
The removal was decided in PEP 594.\nThe last version of Python that provided the uu\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 83} +{"url": "https://docs.python.org/3/library/msvcrt.html", "title": " \u2014 Useful routines from the MS VC++ runtime", "content": "msvcrt\n\u2014 Useful routines from the MS VC++ runtime\u00b6\nThese functions provide access to some useful capabilities on Windows platforms.\nSome higher-level modules use these functions to build the Windows\nimplementations of their services. For example, the getpass\nmodule uses\nthis in the implementation of the getpass()\nfunction.\nFurther documentation on these functions can be found in the Platform API documentation.\nThe module implements both the normal and wide char variants of the console I/O api. The normal API deals only with ASCII characters and is of limited use for internationalized applications. The wide char API should be used where ever possible.\nAvailability: Windows.\nFile Operations\u00b6\n- msvcrt.locking(fd, mode, nbytes)\u00b6\nLock part of a file based on file descriptor fd from the C runtime. Raises\nOSError\non failure. The locked region of the file extends from the current file position for nbytes bytes, and may continue beyond the end of the file. mode must be one of theLK_*\nconstants listed below. Multiple regions in a file may be locked at the same time, but may not overlap. Adjacent regions are not merged; they must be unlocked individually.Raises an auditing event\nmsvcrt.locking\nwith argumentsfd\n,mode\n,nbytes\n.\n- msvcrt.LK_LOCK\u00b6\n- msvcrt.LK_RLCK\u00b6\nLocks the specified bytes. If the bytes cannot be locked, the program immediately tries again after 1 second. If, after 10 attempts, the bytes cannot be locked,\nOSError\nis raised.\n- msvcrt.LK_NBLCK\u00b6\n- msvcrt.LK_NBRLCK\u00b6\nLocks the specified bytes. 
If the bytes cannot be locked,\nOSError\nis raised.\n- msvcrt.LK_UNLCK\u00b6\nUnlocks the specified bytes, which must have been previously locked.\n- msvcrt.setmode(fd, flags)\u00b6\nSet the line-end translation mode for the file descriptor fd. To set it to text mode, flags should be\nos.O_TEXT\n; for binary, it should beos.O_BINARY\n.\n- msvcrt.open_osfhandle(handle, flags)\u00b6\nCreate a C runtime file descriptor from the file handle handle. The flags parameter should be a bitwise OR of\nos.O_APPEND\n,os.O_RDONLY\n,os.O_TEXT\nandos.O_NOINHERIT\n. The returned file descriptor may be used as a parameter toos.fdopen()\nto create a file object.The file descriptor is inheritable by default. Pass\nos.O_NOINHERIT\nflag to make it non inheritable.Raises an auditing event\nmsvcrt.open_osfhandle\nwith argumentshandle\n,flags\n.\n- msvcrt.get_osfhandle(fd)\u00b6\nReturn the file handle for the file descriptor fd. Raises\nOSError\nif fd is not recognized.Raises an auditing event\nmsvcrt.get_osfhandle\nwith argumentfd\n.\nConsole I/O\u00b6\n- msvcrt.kbhit()\u00b6\nReturns a nonzero value if a keypress is waiting to be read. Otherwise, return 0.\n- msvcrt.getch()\u00b6\nRead a keypress and return the resulting character as a byte string. Nothing is echoed to the console. This call will block if a keypress is not already available, but will not wait for Enter to be pressed. If the pressed key was a special function key, this will return\n'\\000'\nor'\\xe0'\n; the next call will return the keycode. The Control-C keypress cannot be read with this function.\n- msvcrt.getche()\u00b6\nSimilar to\ngetch()\n, but the keypress will be echoed if it represents a printable character.\n- msvcrt.putch(char)\u00b6\nPrint the byte string char to the console without buffering.\nOther Functions\u00b6\n- msvcrt.heapmin()\u00b6\nForce the\nmalloc()\nheap to clean itself up and return unused blocks to the operating system. 
On failure, this raisesOSError\n.\n- msvcrt.set_error_mode(mode)\u00b6\nChanges the location where the C runtime writes an error message for an error that might end the program. mode must be one of the\nOUT_*\nconstants listed below orREPORT_ERRMODE\n. Returns the old setting or -1 if an error occurs. Only available in debug build of Python.\n- msvcrt.OUT_TO_DEFAULT\u00b6\nError sink is determined by the app\u2019s type. Only available in debug build of Python.\n- msvcrt.OUT_TO_STDERR\u00b6\nError sink is a standard error. Only available in debug build of Python.\n- msvcrt.OUT_TO_MSGBOX\u00b6\nError sink is a message box. Only available in debug build of Python.\n- msvcrt.REPORT_ERRMODE\u00b6\nReport the current error mode value. Only available in debug build of Python.\n- msvcrt.CrtSetReportMode(type, mode)\u00b6\nSpecifies the destination or destinations for a specific report type generated by\n_CrtDbgReport()\nin the MS VC++ runtime. type must be one of theCRT_*\nconstants listed below. mode must be one of theCRTDBG_*\nconstants listed below. Only available in debug build of Python.\n- msvcrt.CrtSetReportFile(type, file)\u00b6\nAfter you use\nCrtSetReportMode()\nto specifyCRTDBG_MODE_FILE\n, you can specify the file handle to receive the message text. type must be one of theCRT_*\nconstants listed below. file should be the file handle your want specified. 
Only available in debug build of Python.\n- msvcrt.CRT_WARN\u00b6\nWarnings, messages, and information that doesn\u2019t need immediate attention.\n- msvcrt.CRT_ERROR\u00b6\nErrors, unrecoverable problems, and issues that require immediate attention.\n- msvcrt.CRT_ASSERT\u00b6\nAssertion failures.\n- msvcrt.CRTDBG_MODE_DEBUG\u00b6\nWrites the message to the debugger\u2019s output window.\n- msvcrt.CRTDBG_MODE_FILE\u00b6\nWrites the message to a user-supplied file handle.\nCrtSetReportFile()\nshould be called to define the specific file or stream to use as the destination.\n- msvcrt.CRTDBG_MODE_WNDW\u00b6\nCreates a message box to display the message along with the\nAbort\n,Retry\n, andIgnore\nbuttons.\n- msvcrt.CRTDBG_REPORT_MODE\u00b6\nReturns current mode for the specified type.\n- msvcrt.CRT_ASSEMBLY_VERSION\u00b6\nThe CRT Assembly version, from the\ncrtassem.h\nheader file.\n- msvcrt.VC_ASSEMBLY_PUBLICKEYTOKEN\u00b6\nThe VC Assembly public key token, from the\ncrtassem.h\nheader file.\n- msvcrt.LIBRARIES_ASSEMBLY_NAME_PREFIX\u00b6\nThe Libraries Assembly name prefix, from the\ncrtassem.h\nheader file.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1420} +{"url": "https://docs.python.org/3/whatsnew/3.7.html", "title": "What\u2019s New In Python 3.7", "content": "What\u2019s New In Python 3.7\u00b6\n- Editor:\nElvis Pranskevichus \nThis article explains the new features in Python 3.7, compared to 3.6. Python 3.7 was released on June 27, 2018. 
For full details, see the changelog.\nSummary \u2013 Release Highlights\u00b6\nNew syntax features:\nPEP 563, postponed evaluation of type annotations.\nBackwards incompatible syntax changes:\nNew library modules:\nNew built-in features:\nPEP 553, the new\nbreakpoint()\nfunction.\nPython data model improvements:\nPEP 562, customization of access to module attributes.\nPEP 560, core support for typing module and generic types.\nthe insertion-order preservation nature of dict objects has been declared to be an official part of the Python language spec.\nSignificant improvements in the standard library:\nThe\nasyncio\nmodule has received new features, significant usability and performance improvements.The\ntime\nmodule gained support for functions with nanosecond resolution.\nCPython implementation improvements:\nAvoiding the use of ASCII as a default text encoding:\nPEP 552, deterministic .pycs\nPEP 565, improved\nDeprecationWarning\nhandling\nC API improvements:\nPEP 539, new C API for thread-local storage\nDocumentation improvements:\nPEP 545, Python documentation translations\nNew documentation translations: Japanese, French, and Korean.\nThis release features notable performance improvements in many areas. The Optimizations section lists them in detail.\nFor a list of changes that may affect compatibility with previous Python releases please refer to the Porting to Python 3.7 section.\nNew Features\u00b6\nPEP 563: Postponed Evaluation of Annotations\u00b6\nThe advent of type hints in Python uncovered two glaring usability issues with the functionality of annotations added in PEP 3107 and refined further in PEP 526:\nannotations could only use names which were already available in the current scope, in other words they didn\u2019t support forward references of any kind; and\nannotating source code had adverse effects on startup time of Python programs.\nBoth of these issues are fixed by postponing the evaluation of\nannotations. 
Instead of compiling code which executes expressions in\nannotations at their definition time, the compiler stores the annotation\nin a string form equivalent to the AST of the expression in question.\nIf needed, annotations can be resolved at runtime using\ntyping.get_type_hints()\n. In the common case where this is not\nrequired, the annotations are cheaper to store (since short strings\nare interned by the interpreter) and make startup time faster.\nUsability-wise, annotations now support forward references, making the following syntax valid:\nclass C:\n@classmethod\ndef from_string(cls, source: str) -> C:\n...\ndef validate_b(self, obj: B) -> bool:\n...\nclass B:\n...\nSince this change breaks compatibility, the new behavior needs to be enabled\non a per-module basis in Python 3.7 using a __future__\nimport:\nfrom __future__ import annotations\nIt will become the default in Python 3.10.\nSee also\n- PEP 563 \u2013 Postponed evaluation of annotations\nPEP written and implemented by \u0141ukasz Langa.\nPEP 538: Legacy C Locale Coercion\u00b6\nAn ongoing challenge within the Python 3 series has been determining a sensible default strategy for handling the \u201c7-bit ASCII\u201d text encoding assumption currently implied by the use of the default C or POSIX locale on non-Windows platforms.\nPEP 538 updates the default interpreter command line interface to\nautomatically coerce that locale to an available UTF-8 based locale as\ndescribed in the documentation of the new PYTHONCOERCECLOCALE\nenvironment variable. 
Automatically setting LC_CTYPE\nthis way means that\nboth the core interpreter and locale-aware C extensions (such as\nreadline\n) will assume the use of UTF-8 as the default text encoding,\nrather than ASCII.\nThe platform support definition in PEP 11 has also been updated to limit full text handling support to suitably configured non-ASCII based locales.\nAs part of this change, the default error handler for stdin\nand\nstdout\nis now surrogateescape\n(rather than strict\n) when\nusing any of the defined coercion target locales (currently C.UTF-8\n,\nC.utf8\n, and UTF-8\n). The default error handler for stderr\ncontinues to be backslashreplace\n, regardless of locale.\nLocale coercion is silent by default, but to assist in debugging potentially\nlocale related integration problems, explicit warnings (emitted directly on\nstderr\n) can be requested by setting PYTHONCOERCECLOCALE=warn\n.\nThis setting will also cause the Python runtime to emit a warning if the\nlegacy C locale remains active when the core interpreter is initialized.\nWhile PEP 538\u2019s locale coercion has the benefit of also affecting extension\nmodules (such as GNU readline\n), as well as child processes (including those\nrunning non-Python applications and older versions of Python), it has the\ndownside of requiring that a suitable target locale be present on the running\nsystem. To better handle the case where no suitable target locale is available\n(as occurs on RHEL/CentOS 7, for example), Python 3.7 also implements\nPEP 540: Forced UTF-8 Runtime Mode.\nSee also\n- PEP 538 \u2013 Coercing the legacy C locale to a UTF-8 based locale\nPEP written and implemented by Nick Coghlan.\nPEP 540: Forced UTF-8 Runtime Mode\u00b6\nThe new -X\nutf8\ncommand line option and PYTHONUTF8\nenvironment variable can be used to enable the Python UTF-8 Mode.\nWhen in UTF-8 mode, CPython ignores the locale settings, and uses the\nUTF-8 encoding by default. 
The error handlers for sys.stdin\nand\nsys.stdout\nstreams are set to surrogateescape\n.\nThe forced UTF-8 mode can be used to change the text handling behavior in an embedded Python interpreter without changing the locale settings of an embedding application.\nWhile PEP 540\u2019s UTF-8 mode has the benefit of working regardless of which\nlocales are available on the running system, it has the downside of having no\neffect on extension modules (such as GNU readline\n), child processes running\nnon-Python applications, and child processes running older versions of Python.\nTo reduce the risk of corrupting text data when communicating with such\ncomponents, Python 3.7 also implements PEP 538: Legacy C Locale Coercion.\nSee also\n- PEP 540 \u2013 Add a new UTF-8 mode\nPEP written and implemented by Victor Stinner\nPEP 553: Built-in breakpoint()\n\u00b6\nPython 3.7 includes the new built-in breakpoint()\nfunction as\nan easy and consistent way to enter the Python debugger.\nBuilt-in breakpoint()\ncalls sys.breakpointhook()\n. By default, the\nlatter imports pdb\nand then calls pdb.set_trace()\n, but by binding\nsys.breakpointhook()\nto the function of your choosing, breakpoint()\ncan\nenter any debugger. Additionally, the environment variable\nPYTHONBREAKPOINT\ncan be set to the callable of your debugger of\nchoice. 
Set PYTHONBREAKPOINT=0\nto completely disable built-in\nbreakpoint()\n.\nSee also\n- PEP 553 \u2013 Built-in breakpoint()\nPEP written and implemented by Barry Warsaw\nPEP 539: New C API for Thread-Local Storage\u00b6\nWhile Python provides a C API for thread-local storage support; the existing Thread Local Storage (TLS) API has used int to represent TLS keys across all platforms. This has not generally been a problem for officially support platforms, but that is neither POSIX-compliant, nor portable in any practical sense.\nPEP 539 changes this by providing a new Thread Specific Storage (TSS)\nAPI to CPython which supersedes use of the\nexisting TLS API within the CPython interpreter, while deprecating the existing\nAPI. The TSS API uses a new type Py_tss_t\ninstead of int\nto represent TSS keys\u2013an opaque type the definition of which may depend on\nthe underlying TLS implementation. Therefore, this will allow to build CPython\non platforms where the native TLS key is defined in a way that cannot be safely\ncast to int.\nNote that on platforms where the native TLS key is defined in a way that cannot be safely cast to int, all functions of the existing TLS API will be no-op and immediately return failure. This indicates clearly that the old API is not supported on platforms where it cannot be used reliably, and that no effort will be made to add such support.\nSee also\n- PEP 539 \u2013 A New C-API for Thread-Local Storage in CPython\nPEP written by Erik M. Bray; implementation by Masayuki Yamamoto.\nPEP 562: Customization of Access to Module Attributes\u00b6\nPython 3.7 allows defining __getattr__()\non modules and will call\nit whenever a module attribute is otherwise not found. 
Defining\n__dir__()\non modules is now also allowed.\nA typical example of where this may be useful is module attribute deprecation and lazy loading.\nSee also\n- PEP 562 \u2013 Module\n__getattr__\nand__dir__\nPEP written and implemented by Ivan Levkivskyi\nPEP 564: New Time Functions With Nanosecond Resolution\u00b6\nThe resolution of clocks in modern systems can exceed the limited precision\nof a floating-point number returned by the time.time()\nfunction\nand its variants. To avoid loss of precision, PEP 564 adds six new\n\u201cnanosecond\u201d variants of the existing timer functions to the time\nmodule:\nThe new functions return the number of nanoseconds as an integer value.\nMeasurements\nshow that on Linux and Windows the resolution of time.time_ns()\nis\napproximately 3 times better than that of time.time()\n.\nSee also\n- PEP 564 \u2013 Add new time functions with nanosecond resolution\nPEP written and implemented by Victor Stinner\nPEP 565: Show DeprecationWarning in __main__\n\u00b6\nThe default handling of DeprecationWarning\nhas been changed such that\nthese warnings are once more shown by default, but only when the code\ntriggering them is running directly in the __main__\nmodule. As a result,\ndevelopers of single file scripts and those using Python interactively should\nonce again start seeing deprecation warnings for the APIs they use, but\ndeprecation warnings triggered by imported application, library and framework\nmodules will continue to be hidden by default.\nAs a result of this change, the standard library now allows developers to choose between three different deprecation warning behaviours:\nFutureWarning\n: always displayed by default, recommended for warnings intended to be seen by application end users (e.g. 
for deprecated application configuration settings).
DeprecationWarning: displayed by default only in __main__ and when running tests; recommended for warnings intended to be seen by other Python developers, where a version upgrade may result in changed behaviour or an error.
PendingDeprecationWarning: displayed by default only when running tests; intended for cases where a future version upgrade will change the warning category to DeprecationWarning or FutureWarning.

Previously both DeprecationWarning and PendingDeprecationWarning were only visible when running tests, which meant that developers primarily writing single file scripts or using Python interactively could be surprised by breaking changes in the APIs they used.

See also
- PEP 565 – Show DeprecationWarning in __main__
PEP written and implemented by Nick Coghlan

PEP 560: Core Support for typing module and Generic Types¶

Initially PEP 484 was designed in such a way that it would not introduce any changes to the core CPython interpreter. Now that type hints and the typing module are extensively used by the community, this restriction has been removed. The PEP introduces two special methods, __class_getitem__() and __mro_entries__(), which are now used by most classes and special constructs in typing. As a result, the speed of various operations with types has increased by up to 7 times, generic types can be used without metaclass conflicts, and several long-standing bugs in the typing module have been fixed.

See also
- PEP 560 – Core support for typing module and generic types
PEP written and implemented by Ivan Levkivskyi

PEP 552: Hash-based .pyc Files¶

Python has traditionally checked the up-to-dateness of bytecode cache files (i.e., .pyc files) by comparing the source metadata (last-modified timestamp and size) with the source metadata saved in the cache file header when it was generated. While effective, this invalidation method has its drawbacks.
When\nfilesystem timestamps are too coarse, Python can miss source updates, leading to\nuser confusion. Additionally, having a timestamp in the cache file is\nproblematic for build reproducibility and\ncontent-based build systems.\nPEP 552 extends the pyc format to allow the hash of the source file to be\nused for invalidation instead of the source timestamp. Such .pyc\nfiles are\ncalled \u201chash-based\u201d. By default, Python still uses timestamp-based invalidation\nand does not generate hash-based .pyc\nfiles at runtime. Hash-based .pyc\nfiles may be generated with py_compile\nor compileall\n.\nHash-based .pyc\nfiles come in two variants: checked and unchecked. Python\nvalidates checked hash-based .pyc\nfiles against the corresponding source\nfiles at runtime but doesn\u2019t do so for unchecked hash-based pycs. Unchecked\nhash-based .pyc\nfiles are a useful performance optimization for environments\nwhere a system external to Python (e.g., the build system) is responsible for\nkeeping .pyc\nfiles up-to-date.\nSee Cached bytecode invalidation for more information.\nSee also\n- PEP 552 \u2013 Deterministic pycs\nPEP written and implemented by Benjamin Peterson\nPEP 545: Python Documentation Translations\u00b6\nPEP 545 describes the process of creating and maintaining Python documentation translations.\nThree new translations have been added:\nJapanese: https://docs.python.org/ja/\nFrench: https://docs.python.org/fr/\nKorean: https://docs.python.org/ko/\nSee also\n- PEP 545 \u2013 Python Documentation Translations\nPEP written and implemented by Julien Palard, Inada Naoki, and Victor Stinner.\nPython Development Mode (-X dev)\u00b6\nThe new -X\ndev\ncommand line option or the new\nPYTHONDEVMODE\nenvironment variable can be used to enable\nPython Development Mode. 
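As a quick illustration, a sketch that relaunches the current interpreter with -X dev (and, alternatively, with PYTHONDEVMODE=1) and inspects the new sys.flags.dev_mode flag; subprocess.run's text argument used here is itself new in 3.7:

```python
# Observe Python Development Mode via sys.flags.dev_mode (Python 3.7+).
import os
import subprocess
import sys

probe = "import sys; print(sys.flags.dev_mode)"

# Enabled via the command line option:
out_opt = subprocess.run(
    [sys.executable, "-X", "dev", "-c", probe],
    stdout=subprocess.PIPE, text=True,
).stdout.strip()

# Enabled via the environment variable:
out_env = subprocess.run(
    [sys.executable, "-c", probe],
    stdout=subprocess.PIPE, text=True,
    env={**os.environ, "PYTHONDEVMODE": "1"},
).stdout.strip()

print(out_opt, out_env)  # True True
```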
When in development mode, Python performs additional runtime checks that are too expensive to be enabled by default. See the Python Development Mode documentation for the full description.

Other Language Changes¶

An await expression and comprehensions containing an async for clause were illegal in the expressions in formatted string literals due to a problem with the implementation. In Python 3.7 this restriction was lifted.
More than 255 arguments can now be passed to a function, and a function can now have more than 255 parameters. (Contributed by Serhiy Storchaka in bpo-12844 and bpo-18896.)
bytes.fromhex() and bytearray.fromhex() now ignore all ASCII whitespace, not only spaces. (Contributed by Robert Xiao in bpo-28927.)
str, bytes, and bytearray gained support for the new isascii() method, which can be used to test whether a string or bytes object contains only ASCII characters. (Contributed by INADA Naoki in bpo-32677.)
ImportError now displays the module name and the module __file__ path when from ... import ... fails. (Contributed by Matthias Bussonnier in bpo-29546.)
Circular imports involving absolute imports with binding a submodule to a name are now supported. (Contributed by Serhiy Storchaka in bpo-30024.)
object.__format__(x, '') is now equivalent to str(x) rather than format(str(x), ''). (Contributed by Serhiy Storchaka in bpo-28974.)
In order to better support dynamic creation of stack traces, types.TracebackType can now be instantiated from Python code, and the tb_next attribute on tracebacks is now writable. (Contributed by Nathaniel J.
Smith in bpo-30579.)
When using the -m switch, sys.path[0] is now eagerly expanded to the full starting directory path, rather than being left as the empty directory (which allows imports from the current working directory at the time when an import occurs). (Contributed by Nick Coghlan in bpo-33053.)
The new -X importtime option or the PYTHONPROFILEIMPORTTIME environment variable can be used to show the timing of each module import. (Contributed by Inada Naoki in bpo-31415.)

New Modules¶

contextvars¶

The new contextvars module and a set of new C APIs introduce support for context variables. Context variables are conceptually similar to thread-local variables. Unlike TLS, context variables support asynchronous code correctly.
The asyncio and decimal modules have been updated to use and support context variables out of the box. In particular, the active decimal context is now stored in a context variable, which allows decimal operations to work with the correct context in asynchronous code.

See also
- PEP 567 – Context Variables
PEP written and implemented by Yury Selivanov

dataclasses¶

The new dataclass() decorator provides a way to declare data classes. A data class describes its attributes using class variable annotations. Its constructor and other magic methods, such as __repr__(), __eq__(), and __hash__(), are generated automatically.

Example:

@dataclass
class Point:
    x: float
    y: float
    z: float = 0.0

p = Point(1.5, 2.5)
print(p)   # produces "Point(x=1.5, y=2.5, z=0.0)"

See also
- PEP 557 – Data Classes
PEP written and implemented by Eric V. Smith

importlib.resources¶

The new importlib.resources module provides several new APIs and one new ABC for access to, opening, and reading resources inside packages. Resources are roughly similar to files inside packages, but they needn't be actual files on the physical file system.
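A sketch of reading a resource with the new module. The package name mypkg and the file data.txt are made up; a throwaway package is created on disk so the example is self-contained, and since the 3.7-era read_text() helper was superseded by the files() API in later releases, the sketch guards on availability:

```python
# Sketch of the importlib.resources API; "mypkg"/"data.txt" are invented,
# and a temporary package is built so the example runs on its own.
import os
import sys
import tempfile
from importlib import resources

tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "mypkg"))
open(os.path.join(tmp, "mypkg", "__init__.py"), "w").close()
with open(os.path.join(tmp, "mypkg", "data.txt"), "w") as f:
    f.write("hello")
sys.path.insert(0, tmp)

# The resource need not be a loose file: zip imports work the same way.
if hasattr(resources, "read_text"):        # the Python 3.7 API
    text = resources.read_text("mypkg", "data.txt")
else:                                      # later releases: the files() API
    text = resources.files("mypkg").joinpath("data.txt").read_text()
print(text)  # hello
```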
Module loaders can provide a get_resource_reader() function which returns an importlib.abc.ResourceReader instance to support this new API. Built-in file path loaders and zip file loaders both support this.
Contributed by Barry Warsaw and Brett Cannon in bpo-32248.

See also
importlib_resources – a PyPI backport for earlier Python versions.

Improved Modules¶

argparse¶

The new ArgumentParser.parse_intermixed_args() method allows intermixing options and positional arguments. (Contributed by paul.j3 in bpo-14191.)

asyncio¶

The asyncio module has received many new features along with usability and performance improvements. Notable changes include:

The new provisional asyncio.run() function can be used to run a coroutine from synchronous code by automatically creating and destroying the event loop. (Contributed by Yury Selivanov in bpo-32314.)
asyncio gained support for contextvars. loop.call_soon(), loop.call_soon_threadsafe(), loop.call_later(), loop.call_at(), and Future.add_done_callback() have a new optional keyword-only context parameter. Tasks now track their context automatically. See PEP 567 for more details. (Contributed by Yury Selivanov in bpo-32436.)
The new asyncio.create_task() function has been added as a shortcut to asyncio.get_event_loop().create_task(). (Contributed by Andrew Svetlov in bpo-32311.)
The new loop.start_tls() method can be used to upgrade an existing connection to TLS. (Contributed by Yury Selivanov in bpo-23749.)
The new loop.sock_recv_into() method allows reading data from a socket directly into a provided buffer, making it possible to reduce data copies. (Contributed by Antoine Pitrou in bpo-31819.)
The new asyncio.current_task() function returns the currently running Task instance, and the new asyncio.all_tasks() function returns a set of all existing Task instances in a given loop. The Task.current_task() and Task.all_tasks() methods have been deprecated.
(Contributed by Andrew Svetlov in bpo-32250.)The new provisional\nBufferedProtocol\nclass allows implementing streaming protocols with manual control over the receive buffer. (Contributed by Yury Selivanov in bpo-32251.)The new\nasyncio.get_running_loop()\nfunction returns the currently running loop, and raises aRuntimeError\nif no loop is running. This is in contrast withasyncio.get_event_loop()\n, which will create a new event loop if none is running. (Contributed by Yury Selivanov in bpo-32269.)The new\nStreamWriter.wait_closed()\ncoroutine method allows waiting until the stream writer is closed. The newStreamWriter.is_closing()\nmethod can be used to determine if the writer is closing. (Contributed by Andrew Svetlov in bpo-32391.)The new\nloop.sock_sendfile()\ncoroutine method allows sending files usingos.sendfile\nwhen possible. (Contributed by Andrew Svetlov in bpo-32410.)The new\nFuture.get_loop()\nandTask.get_loop()\nmethods return the instance of the loop on which a task or a future were created.Server.get_loop()\nallows doing the same forasyncio.Server\nobjects. (Contributed by Yury Selivanov in bpo-32415 and Srinivas Reddy Thatiparthy in bpo-32418.)It is now possible to control how instances of\nasyncio.Server\nbegin serving. Previously, the server would start serving immediately when created. The new start_serving keyword argument toloop.create_server()\nandloop.create_unix_server()\n, as well asServer.start_serving()\n, andServer.serve_forever()\ncan be used to decouple server instantiation and serving. The newServer.is_serving()\nmethod returnsTrue\nif the server is serving.Server\nobjects are now asynchronous context managers:srv = await loop.create_server(...) async with srv: # some code # At this point, srv is closed and no longer accepts new connections.\n(Contributed by Yury Selivanov in bpo-32662.)\nCallback objects returned by\nloop.call_later()\ngained the newwhen()\nmethod which returns an absolute scheduled callback timestamp. 
(Contributed by Andrew Svetlov in bpo-32741.)The\nloop.create_datagram_endpoint()\nmethod gained support for Unix sockets. (Contributed by Quentin Dawans in bpo-31245.)The\nasyncio.open_connection()\n,asyncio.start_server()\nfunctions,loop.create_connection()\n,loop.create_server()\n,loop.create_accepted_socket()\nmethods and their corresponding UNIX socket variants now accept the ssl_handshake_timeout keyword argument. (Contributed by Neil Aspinall in bpo-29970.)The new\nHandle.cancelled()\nmethod returnsTrue\nif the callback was cancelled. (Contributed by Marat Sharafutdinov in bpo-31943.)The asyncio source has been converted to use the\nasync\n/await\nsyntax. (Contributed by Andrew Svetlov in bpo-32193.)The new\nReadTransport.is_reading()\nmethod can be used to determine the reading state of the transport. Additionally, calls toReadTransport.resume_reading()\nandReadTransport.pause_reading()\nare now idempotent. (Contributed by Yury Selivanov in bpo-32356.)Loop methods which accept socket paths now support passing path-like objects. (Contributed by Yury Selivanov in bpo-32066.)\nIn\nasyncio\nTCP sockets on Linux are now created withTCP_NODELAY\nflag set by default. (Contributed by Yury Selivanov and Victor Stinner in bpo-27456.)Exceptions occurring in cancelled tasks are no longer logged. (Contributed by Yury Selivanov in bpo-30508.)\nNew\nWindowsSelectorEventLoopPolicy\nandWindowsProactorEventLoopPolicy\nclasses. (Contributed by Yury Selivanov in bpo-33792.)\nSeveral asyncio\nAPIs have been\ndeprecated.\nbinascii\u00b6\nThe b2a_uu()\nfunction now accepts an optional backtick\nkeyword argument. When it\u2019s true, zeros are represented by '`'\ninstead of spaces. 
(Contributed by Xiang Zhang in bpo-30103.)\ncalendar\u00b6\nThe HTMLCalendar\nclass has new class attributes which ease\nthe customization of CSS classes in the produced HTML calendar.\n(Contributed by Oz Tiram in bpo-30095.)\ncollections\u00b6\ncollections.namedtuple()\nnow supports default values.\n(Contributed by Raymond Hettinger in bpo-32320.)\ncompileall\u00b6\ncompileall.compile_dir()\nlearned the new invalidation_mode parameter,\nwhich can be used to enable\nhash-based .pyc invalidation. The invalidation\nmode can also be specified on the command line using the new\n--invalidation-mode\nargument.\n(Contributed by Benjamin Peterson in bpo-31650.)\nconcurrent.futures\u00b6\nProcessPoolExecutor\nand\nThreadPoolExecutor\nnow\nsupport the new initializer and initargs constructor arguments.\n(Contributed by Antoine Pitrou in bpo-21423.)\nThe ProcessPoolExecutor\ncan now take the multiprocessing context via the new mp_context argument.\n(Contributed by Thomas Moreau in bpo-31540.)\ncontextlib\u00b6\nThe new nullcontext()\nis a simpler and faster no-op\ncontext manager than ExitStack\n.\n(Contributed by Jesse-Bakker in bpo-10049.)\nThe new asynccontextmanager()\n,\nAbstractAsyncContextManager\n, and\nAsyncExitStack\nhave been added to\ncomplement their synchronous counterparts. (Contributed\nby Jelle Zijlstra in bpo-29679 and bpo-30241,\nand by Alexander Mohr and Ilya Kulakov in bpo-29302.)\ncProfile\u00b6\nThe cProfile\ncommand line now accepts -m module_name\nas an\nalternative to script path. (Contributed by Sanyam Khurana in bpo-21862.)\ncrypt\u00b6\nThe crypt\nmodule now supports the Blowfish hashing method.\n(Contributed by Serhiy Storchaka in bpo-31664.)\nThe mksalt()\nfunction now allows specifying the number of rounds\nfor hashing. 
(Contributed by Serhiy Storchaka in bpo-31702.)\ndatetime\u00b6\nThe new datetime.fromisoformat()\nmethod constructs a datetime\nobject from a string\nin one of the formats output by\ndatetime.isoformat()\n.\n(Contributed by Paul Ganssle in bpo-15873.)\nThe tzinfo\nclass now supports sub-minute offsets.\n(Contributed by Alexander Belopolsky in bpo-5288.)\ndbm\u00b6\ndbm.dumb\nnow supports reading read-only files and no longer writes the\nindex file when it is not changed.\ndecimal\u00b6\nThe decimal\nmodule now uses context variables\nto store the decimal context.\n(Contributed by Yury Selivanov in bpo-32630.)\ndis\u00b6\nThe dis()\nfunction is now able to\ndisassemble nested code objects (the code of comprehensions, generator\nexpressions and nested functions, and the code used for building nested\nclasses). The maximum depth of disassembly recursion is controlled by\nthe new depth parameter.\n(Contributed by Serhiy Storchaka in bpo-11822.)\ndistutils\u00b6\nREADME.rst\nis now included in the list of distutils standard READMEs and\ntherefore included in source distributions.\n(Contributed by Ryan Gonzalez in bpo-11913.)\nenum\u00b6\nThe Enum\nlearned the new _ignore_\nclass property,\nwhich allows listing the names of properties which should not become\nenum members.\n(Contributed by Ethan Furman in bpo-31801.)\nIn Python 3.8, attempting to check for non-Enum objects in Enum\nclasses will raise a TypeError\n(e.g. 1 in Color\n); similarly,\nattempting to check for non-Flag objects in a Flag\nmember will\nraise TypeError\n(e.g. 
1 in Perm.RW); currently, both operations return False instead and are deprecated. (Contributed by Ethan Furman in bpo-33217.)

functools¶

functools.singledispatch() now supports registering implementations using type annotations. (Contributed by Łukasz Langa in bpo-32227.)

gc¶

The new gc.freeze() function allows freezing all objects tracked by the garbage collector and excluding them from future collections. This can be used before a POSIX fork() call to make the GC copy-on-write friendly or to speed up collection. The new gc.unfreeze() function reverses this operation. Additionally, gc.get_freeze_count() can be used to obtain the number of frozen objects. (Contributed by Li Zekun in bpo-31558.)

hmac¶

The hmac module now has an optimized one-shot digest() function, which is up to three times faster than HMAC(). (Contributed by Christian Heimes in bpo-32433.)

http.client¶

HTTPConnection and HTTPSConnection now support the new blocksize argument for improved upload throughput. (Contributed by Nir Soffer in bpo-31945.)

http.server¶

SimpleHTTPRequestHandler now supports the HTTP If-Modified-Since header. The server returns the 304 response status if the target file was not modified after the time specified in the header. (Contributed by Pierre Quentel in bpo-29654.)
SimpleHTTPRequestHandler accepts the new directory argument, in addition to the new --directory command line argument. With this parameter, the server serves the specified directory; by default it uses the current working directory. (Contributed by Stéphane Wirtel and Julien Palard in bpo-28707.)
The new ThreadingHTTPServer class uses threads to handle requests using ThreadingMixIn. It is used when http.server is run with -m. (Contributed by Julien Palard in bpo-31639.)

idlelib and IDLE¶

Multiple fixes for autocompletion.
(Contributed by Louie Lu in bpo-15786.)\nModule Browser (on the File menu, formerly called Class Browser), now displays nested functions and classes in addition to top-level functions and classes. (Contributed by Guilherme Polo, Cheryl Sabella, and Terry Jan Reedy in bpo-1612262.)\nThe Settings dialog (Options, Configure IDLE) has been partly rewritten to improve both appearance and function. (Contributed by Cheryl Sabella and Terry Jan Reedy in multiple issues.)\nThe font sample now includes a selection of non-Latin characters so that users can better see the effect of selecting a particular font. (Contributed by Terry Jan Reedy in bpo-13802.) The sample can be edited to include other characters. (Contributed by Serhiy Storchaka in bpo-31860.)\nThe IDLE features formerly implemented as extensions have been reimplemented as normal features. Their settings have been moved from the Extensions tab to other dialog tabs. (Contributed by Charles Wohlganger and Terry Jan Reedy in bpo-27099.)\nEditor code context option revised. Box displays all context lines up to maxlines. Clicking on a context line jumps the editor to that line. Context colors for custom themes is added to Highlights tab of Settings dialog. (Contributed by Cheryl Sabella and Terry Jan Reedy in bpo-33642, bpo-33768, and bpo-33679.)\nOn Windows, a new API call tells Windows that tk scales for DPI. On Windows 8.1+ or 10, with DPI compatibility properties of the Python binary unchanged, and a monitor resolution greater than 96 DPI, this should make text and lines sharper. It should otherwise have no effect. (Contributed by Terry Jan Reedy in bpo-33656.)\nNew in 3.7.1:\nOutput over N lines (50 by default) is squeezed down to a button. N can be changed in the PyShell section of the General page of the Settings dialog. Fewer, but possibly extra long, lines can be squeezed by right clicking on the output. 
Squeezed output can be expanded in place by double-clicking the button or into the clipboard or a separate window by right-clicking the button. (Contributed by Tal Einat in bpo-1529353.)\nThe changes above have been backported to 3.6 maintenance releases.\nNEW in 3.7.4:\nAdd \u201cRun Customized\u201d to the Run menu to run a module with customized settings. Any command line arguments entered are added to sys.argv. They re-appear in the box for the next customized run. One can also suppress the normal Shell main module restart. (Contributed by Cheryl Sabella, Terry Jan Reedy, and others in bpo-5680 and bpo-37627.)\nNew in 3.7.5:\nAdd optional line numbers for IDLE editor windows. Windows open without line numbers unless set otherwise in the General tab of the configuration dialog. Line numbers for an existing window are shown and hidden in the Options menu. (Contributed by Tal Einat and Saimadhav Heblikar in bpo-17535.)\nimportlib\u00b6\nThe importlib.abc.ResourceReader\nABC was introduced to\nsupport the loading of resources from packages. See also\nimportlib.resources.\n(Contributed by Barry Warsaw, Brett Cannon in bpo-32248.)\nimportlib.reload()\nnow raises ModuleNotFoundError\nif the module\nlacks a spec.\n(Contributed by Garvit Khatri in bpo-29851.)\nimportlib.util.find_spec()\nnow raises ModuleNotFoundError\ninstead of\nAttributeError\nif the specified parent module is not a package (i.e.\nlacks a __path__\nattribute).\n(Contributed by Milan Oberkirch in bpo-30436.)\nThe new importlib.util.source_hash()\ncan be used to compute the hash of\nthe passed source. 
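A small illustration of the function (the sample source bytes are arbitrary):

```python
# importlib.util.source_hash() returns the digest that a hash-based .pyc
# stores in its header for invalidation (Python 3.7+).
import importlib.util

digest = importlib.util.source_hash(b"x = 1\n")
print(len(digest))   # the .pyc header reserves 8 bytes for this hash
```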
A hash-based .pyc file\nembeds the value returned by this function.\nio\u00b6\nThe new TextIOWrapper.reconfigure()\nmethod can be used to reconfigure the text stream with the new settings.\n(Contributed by Antoine Pitrou in bpo-30526 and\nINADA Naoki in bpo-15216.)\nipaddress\u00b6\nThe new subnet_of()\nand supernet_of()\nmethods of\nipaddress.IPv6Network\nand ipaddress.IPv4Network\ncan\nbe used for network containment tests.\n(Contributed by Michel Albert and Cheryl Sabella in bpo-20825.)\nitertools\u00b6\nitertools.islice()\nnow accepts\ninteger-like objects\nas start, stop,\nand slice arguments.\n(Contributed by Will Roberts in bpo-30537.)\nlocale\u00b6\nThe new monetary argument to locale.format_string()\ncan be used\nto make the conversion use monetary thousands separators and\ngrouping strings. (Contributed by Garvit in bpo-10379.)\nThe locale.getpreferredencoding()\nfunction now always returns 'UTF-8'\non Android or when in the forced UTF-8 mode.\nlogging\u00b6\nLogger\ninstances can now be pickled.\n(Contributed by Vinay Sajip in bpo-30520.)\nThe new StreamHandler.setStream()\nmethod can be used to replace the logger stream after handler creation.\n(Contributed by Vinay Sajip in bpo-30522.)\nIt is now possible to specify keyword arguments to handler constructors in\nconfiguration passed to logging.config.fileConfig()\n.\n(Contributed by Preston Landers in bpo-31080.)\nmath\u00b6\nThe new math.remainder()\nfunction implements the IEEE 754-style remainder\noperation. (Contributed by Mark Dickinson in bpo-29962.)\nmimetypes\u00b6\nThe MIME type of .bmp has been changed from 'image/x-ms-bmp'\nto\n'image/bmp'\n.\n(Contributed by Nitish Chandra in bpo-22589.)\nmsilib\u00b6\nThe new Database.Close()\nmethod can be used\nto close the MSI database.\n(Contributed by Berker Peksag in bpo-20486.)\nmultiprocessing\u00b6\nThe new Process.close()\nmethod\nexplicitly closes the process object and releases all resources associated\nwith it. 
ValueError\nis raised if the underlying process is still\nrunning.\n(Contributed by Antoine Pitrou in bpo-30596.)\nThe new Process.kill()\nmethod can\nbe used to terminate the process using the SIGKILL\nsignal on Unix.\n(Contributed by Vitor Pereira in bpo-30794.)\nNon-daemonic threads created by Process\nare now\njoined on process exit.\n(Contributed by Antoine Pitrou in bpo-18966.)\nos\u00b6\nos.fwalk()\nnow accepts the path argument as bytes\n.\n(Contributed by Serhiy Storchaka in bpo-28682.)\nos.scandir()\ngained support for file descriptors.\n(Contributed by Serhiy Storchaka in bpo-25996.)\nThe new register_at_fork()\nfunction allows registering Python\ncallbacks to be executed at process fork.\n(Contributed by Antoine Pitrou in bpo-16500.)\nAdded os.preadv()\n(combine the functionality of os.readv()\nand\nos.pread()\n) and os.pwritev()\nfunctions (combine the functionality\nof os.writev()\nand os.pwrite()\n). (Contributed by Pablo Galindo in\nbpo-31368.)\nThe mode argument of os.makedirs()\nno longer affects the file\npermission bits of newly created intermediate-level directories.\n(Contributed by Serhiy Storchaka in bpo-19930.)\nos.dup2()\nnow returns the new file descriptor. Previously, None\nwas always returned.\n(Contributed by Benjamin Peterson in bpo-32441.)\nThe structure returned by os.stat()\nnow contains the\nst_fstype\nattribute on Solaris and its derivatives.\n(Contributed by Jes\u00fas Cea Avi\u00f3n in bpo-32659.)\npathlib\u00b6\nThe new Path.is_mount()\nmethod is now available\non POSIX systems and can be used to determine whether a path is a mount point.\n(Contributed by Cooper Ry Lees in bpo-30897.)\npdb\u00b6\npdb.set_trace()\nnow takes an optional header keyword-only\nargument. If given, it is printed to the console just before debugging\nbegins. (Contributed by Barry Warsaw in bpo-31389.)\npdb\ncommand line now accepts -m module_name\nas an alternative to\nscript file. 
(Contributed by Mario Corchero in bpo-32206.)\npy_compile\u00b6\npy_compile.compile()\n\u2013 and by extension, compileall\n\u2013 now\nrespects the SOURCE_DATE_EPOCH\nenvironment variable by\nunconditionally creating .pyc\nfiles for hash-based validation.\nThis allows for guaranteeing\nreproducible builds of .pyc\nfiles when they are created eagerly. (Contributed by Bernhard M. Wiedemann\nin bpo-29708.)\npydoc\u00b6\nThe pydoc server can now bind to an arbitrary hostname specified by the\nnew -n\ncommand-line argument.\n(Contributed by Feanil Patel in bpo-31128.)\nqueue\u00b6\nThe new SimpleQueue\nclass is an unbounded FIFO queue.\n(Contributed by Antoine Pitrou in bpo-14976.)\nre\u00b6\nThe flags re.ASCII\n, re.LOCALE\nand re.UNICODE\ncan be set within the scope of a group.\n(Contributed by Serhiy Storchaka in bpo-31690.)\nre.split()\nnow supports splitting on a pattern like r'\\b'\n,\n'^$'\nor (?=-)\nthat matches an empty string.\n(Contributed by Serhiy Storchaka in bpo-25054.)\nRegular expressions compiled with the re.LOCALE\nflag no longer\ndepend on the locale at compile time. Locale settings are applied only\nwhen the compiled regular expression is used.\n(Contributed by Serhiy Storchaka in bpo-30215.)\nFutureWarning\nis now emitted if a regular expression contains\ncharacter set constructs that will change semantically in the future,\nsuch as nested sets and set operations.\n(Contributed by Serhiy Storchaka in bpo-30349.)\nCompiled regular expression and match objects can now be copied\nusing copy.copy()\nand copy.deepcopy()\n.\n(Contributed by Serhiy Storchaka in bpo-10076.)\nsignal\u00b6\nThe new warn_on_full_buffer argument to the signal.set_wakeup_fd()\nfunction makes it possible to specify whether Python prints a warning on\nstderr when the wakeup buffer overflows.\n(Contributed by Nathaniel J. 
Smith in bpo-30050.)\nsocket\u00b6\nThe new socket.getblocking()\nmethod\nreturns True\nif the socket is in blocking mode and False\notherwise.\n(Contributed by Yury Selivanov in bpo-32373.)\nThe new socket.close()\nfunction closes the passed socket file descriptor.\nThis function should be used instead of os.close()\nfor better\ncompatibility across platforms.\n(Contributed by Christian Heimes in bpo-32454.)\nThe socket\nmodule now exposes the socket.TCP_CONGESTION (Linux 2.6.13), socket.TCP_USER_TIMEOUT (Linux 2.6.37), and socket.TCP_NOTSENT_LOWAT (Linux 3.12) constants.\n(Contributed by Omar Sandoval in bpo-26273 and\nNathaniel J. Smith in bpo-29728.)\nSupport for socket.AF_VSOCK\nsockets has been added to allow\ncommunication between virtual machines and their hosts.\n(Contributed by Cathy Avery in bpo-27584.)\nSockets now auto-detect family, type and protocol from file descriptor by default. (Contributed by Christian Heimes in bpo-28134.)\nsocketserver\u00b6\nsocketserver.ThreadingMixIn.server_close\nnow waits until all non-daemon\nthreads complete. socketserver.ForkingMixIn.server_close\nnow waits\nuntil all child processes complete.\nAdd a new socketserver.ForkingMixIn.block_on_close\nclass attribute to\nsocketserver.ForkingMixIn\nand socketserver.ThreadingMixIn\nclasses. Set the class attribute to False\nto get the pre-3.7 behaviour.\nsqlite3\u00b6\nsqlite3.Connection\nnow exposes the backup()\nmethod when the underlying SQLite library is at version 3.6.11 or higher.\n(Contributed by Lele Gaifax in bpo-27645.)\nThe database argument of sqlite3.connect()\nnow accepts any\npath-like object, instead of just a string.\n(Contributed by Anders Lorentsen in bpo-31843.)\nssl\u00b6\nThe ssl\nmodule now uses OpenSSL\u2019s builtin API instead of\nmatch_hostname()\nto check a host name or an IP address. Values\nare validated during TLS handshake. 
Any certificate validation error\nincluding failing the host name check now raises\nSSLCertVerificationError\nand aborts the handshake with a proper\nTLS Alert message. The new exception contains additional information.\nHost name validation can be customized with\nSSLContext.hostname_checks_common_name\n.\n(Contributed by Christian Heimes in bpo-31399.)\nNote\nThe improved host name check requires a libssl implementation compatible with OpenSSL 1.0.2 or 1.1. Consequently, OpenSSL 0.9.8 and 1.0.1 are no longer supported (see Platform Support Removals for more details). The ssl module is mostly compatible with LibreSSL 2.7.2 and newer.\nThe ssl\nmodule no longer sends IP addresses in SNI TLS extension.\n(Contributed by Christian Heimes in bpo-32185.)\nmatch_hostname()\nno longer supports partial wildcards like\nwww*.example.org\n.\n(Contributed by Mandeep Singh in bpo-23033 and Christian Heimes in\nbpo-31399.)\nThe default cipher suite selection of the ssl\nmodule now uses a blacklist\napproach rather than a hard-coded whitelist. Python no longer re-enables\nciphers that have been blocked by OpenSSL security updates. Default cipher\nsuite selection can be configured at compile time.\n(Contributed by Christian Heimes in bpo-31429.)\nValidation of server certificates containing internationalized domain names\n(IDNs) is now supported. As part of this change, the\nSSLSocket.server_hostname\nattribute\nnow stores the expected hostname in A-label form (\"xn--pythn-mua.org\"\n),\nrather than the U-label form (\"pyth\u00f6n.org\"\n). (Contributed by\nNathaniel J. Smith and Christian Heimes in bpo-28414.)\nThe ssl\nmodule has preliminary and experimental support for TLS 1.3 and\nOpenSSL 1.1.1. At the time of Python 3.7.0 release, OpenSSL 1.1.1 is still\nunder development and TLS 1.3 hasn\u2019t been finalized yet. 
The TLS 1.3 handshake and protocol behave slightly differently from TLS 1.2 and earlier; see TLS 1.3. (Contributed by Christian Heimes in bpo-32947, bpo-20995, bpo-29136, bpo-30622 and bpo-33618.)
SSLSocket and SSLObject no longer have a public constructor. Direct instantiation was never a documented and supported feature. Instances must be created with the SSLContext methods wrap_socket() and wrap_bio(). (Contributed by Christian Heimes in bpo-32951.)
OpenSSL 1.1 APIs for setting the minimum and maximum TLS protocol version are available as SSLContext.minimum_version and SSLContext.maximum_version. Supported protocols are indicated by several new flags, such as HAS_TLSv1_1. (Contributed by Christian Heimes in bpo-32609.)
Added ssl.SSLContext.post_handshake_auth to enable and ssl.SSLSocket.verify_client_post_handshake() to initiate TLS 1.3 post-handshake authentication. (Contributed by Christian Heimes in gh-78851.)

string¶

string.Template now lets you optionally modify the regular expression pattern for braced placeholders and non-braced placeholders separately. (Contributed by Barry Warsaw in bpo-1198569.)

subprocess¶

The subprocess.run() function accepts the new capture_output keyword argument. When true, stdout and stderr will be captured. This is equivalent to passing subprocess.PIPE as the stdout and stderr arguments. (Contributed by Bo Bayles in bpo-32102.)
The subprocess.run function and the subprocess.Popen constructor now accept the text keyword argument as an alias for universal_newlines. (Contributed by Andrew Clegg in bpo-31756.)
On Windows the default for close_fds was changed from False to True when redirecting the standard handles. It's now possible to set close_fds to true when redirecting the standard handles. See subprocess.Popen.
This means that close_fds now defaults to\nTrue\non all supported platforms.\n(Contributed by Segev Finer in bpo-19764.)\nThe subprocess module is now more graceful when handling\nKeyboardInterrupt\nduring subprocess.call()\n,\nsubprocess.run()\n, or in a Popen\ncontext manager. It now waits a short amount of time for the child\nto exit, before continuing the handling of the KeyboardInterrupt\nexception.\n(Contributed by Gregory P. Smith in bpo-25942.)\nsys\u00b6\nThe new sys.breakpointhook()\nhook function is called by the\nbuilt-in breakpoint()\n.\n(Contributed by Barry Warsaw in bpo-31353.)\nOn Android, the new sys.getandroidapilevel()\nreturns the build-time\nAndroid API version.\n(Contributed by Victor Stinner in bpo-28740.)\nThe new sys.get_coroutine_origin_tracking_depth()\nfunction returns\nthe current coroutine origin tracking depth, as set by\nthe new sys.set_coroutine_origin_tracking_depth()\n. asyncio\nhas been converted to use this new API instead of\nthe deprecated sys.set_coroutine_wrapper()\n.\n(Contributed by Nathaniel J. 
Smith in bpo-32591.)

time¶

PEP 564 adds six new functions with nanosecond resolution to the time module.

New clock identifiers have been added:

- time.CLOCK_BOOTTIME (Linux): Identical to time.CLOCK_MONOTONIC, except it also includes any time that the system is suspended.
- time.CLOCK_PROF (FreeBSD, NetBSD and OpenBSD): High-resolution per-process CPU timer.
- time.CLOCK_UPTIME (FreeBSD, OpenBSD): Time whose absolute value is the time the system has been running and not suspended, providing accurate uptime measurement.

The new time.thread_time() and time.thread_time_ns() functions can be used to get per-thread CPU time measurements.
(Contributed by Antoine Pitrou in bpo-32025.)

The new time.pthread_getcpuclockid() function returns the clock ID of the thread-specific CPU-time clock.

tkinter¶

The new tkinter.ttk.Spinbox class is now available.
(Contributed by Alan Moore in bpo-32585.)

tracemalloc¶

tracemalloc.Traceback behaves more like regular tracebacks, sorting the frames from oldest to most recent. Traceback.format() now accepts a negative limit, truncating the result to the abs(limit) oldest frames. To get the old behaviour, use the new most_recent_first argument to Traceback.format().
(Contributed by Jesse Bakker in bpo-32121.)

types¶

The new WrapperDescriptorType, MethodWrapperType, MethodDescriptorType, and ClassMethodDescriptorType classes are now available.
(Contributed by Manuel Krebber and Guido van Rossum in bpo-29377, and Serhiy Storchaka in bpo-32265.)

The new types.resolve_bases() function resolves MRO entries dynamically as specified by PEP 560.
(Contributed by Ivan Levkivskyi in bpo-32717.)

unicodedata¶

The internal unicodedata database has been upgraded to use Unicode 11.
(Contributed by Benjamin\nPeterson.)\nunittest\u00b6\nThe new -k\ncommand-line option allows filtering tests by a name\nsubstring or a Unix shell-like pattern.\nFor example, python -m unittest -k foo\nruns\nfoo_tests.SomeTest.test_something\n, bar_tests.SomeTest.test_foo\n,\nbut not bar_tests.FooTest.test_something\n.\n(Contributed by Jonas Haag in bpo-32071.)\nunittest.mock\u00b6\nThe sentinel\nattributes now preserve their identity\nwhen they are copied\nor pickled\n. (Contributed by\nSerhiy Storchaka in bpo-20804.)\nThe new seal()\nfunction allows sealing\nMock\ninstances, which will disallow further creation\nof attribute mocks. The seal is applied recursively to all attributes that\nare themselves mocks.\n(Contributed by Mario Corchero in bpo-30541.)\nurllib.parse\u00b6\nurllib.parse.quote()\nhas been updated from RFC 2396 to RFC 3986,\nadding ~\nto the set of characters that are never quoted by default.\n(Contributed by Christian Theune and Ratnadeep Debnath in bpo-16285.)\nuu\u00b6\nThe uu.encode()\nfunction now accepts an optional backtick\nkeyword argument. When it\u2019s true, zeros are represented by '`'\ninstead of spaces. (Contributed by Xiang Zhang in bpo-30103.)\nuuid\u00b6\nThe new UUID.is_safe\nattribute relays information\nfrom the platform about whether generated UUIDs are generated with a\nmultiprocessing-safe method.\n(Contributed by Barry Warsaw in bpo-22807.)\nuuid.getnode()\nnow prefers universally administered\nMAC addresses over locally administered MAC addresses.\nThis makes a better guarantee for global uniqueness of UUIDs returned\nfrom uuid.uuid1()\n. 
If only locally administered MAC addresses are available, the first such one found is returned.
(Contributed by Barry Warsaw in bpo-32107.)

warnings¶

The initialization of the default warnings filters has changed as follows:

- warnings enabled via command line options (including those for -b and the new CPython-specific -X dev option) are always passed to the warnings machinery via the sys.warnoptions attribute.
- warnings filters enabled via the command line or the environment now have the following order of precedence:
  - the BytesWarning filter for -b (or -bb)
  - any filters specified with the -W option
  - any filters specified with the PYTHONWARNINGS environment variable
  - any other CPython-specific filters (e.g. the default filter added for the new -X dev mode)
  - any implicit filters defined directly by the warnings machinery
- in CPython debug builds, all warnings are now displayed by default (the implicit filter list is empty)

(Contributed by Nick Coghlan and Victor Stinner in bpo-20361, bpo-32043, and bpo-32230.)

Deprecation warnings are once again shown by default in single-file scripts and at the interactive prompt. See PEP 565: Show DeprecationWarning in __main__ for details. (Contributed by Nick Coghlan in bpo-31975.)

xml¶

As mitigation against DTD and external entity retrieval, the xml.dom.minidom and xml.sax modules no longer process external entities by default.
(Contributed by Christian Heimes in gh-61441.)

xml.etree¶

ElementPath predicates in the find() methods can now compare the text of the current node with [. = "text"], not only text in children. Predicates also allow adding spaces for better readability. (Contributed by Stefan Behnel in bpo-31648.)

xmlrpc.server¶

SimpleXMLRPCDispatcher.register_function() can now be used as a decorator.
(Contributed by Xiang Zhang in bpo-7769.)

zipapp¶

The create_archive() function now accepts an optional filter argument to allow the user to select which files should be included in the archive. (Contributed by Irmen de Jong in bpo-31072.)

The create_archive() function now accepts an optional compressed argument to generate a compressed archive. A command line option --compress has also been added to support compression.
(Contributed by Zhiming Wang in bpo-31638.)

zipfile¶

ZipFile now accepts the new compresslevel parameter to control the compression level.
(Contributed by Bo Bayles in bpo-21417.)

Subdirectories in archives created by ZipFile are now stored in alphabetical order.
(Contributed by Bernhard M. Wiedemann in bpo-30693.)

C API Changes¶

A new API for thread-local storage has been implemented. See PEP 539: New C API for Thread-Local Storage for an overview and Thread Specific Storage (TSS) API for a complete reference. (Contributed by Masayuki Yamamoto in bpo-25658.)

The new context variables functionality exposes a number of new C APIs.

The new PyImport_GetModule() function returns the previously imported module with the given name.
(Contributed by Eric Snow in bpo-28411.)

The new Py_RETURN_RICHCOMPARE macro eases writing rich comparison functions.
(Contributed by Petr Victorin in bpo-23699.)

The new Py_UNREACHABLE macro can be used to mark unreachable code paths.
(Contributed by Barry Warsaw in bpo-31338.)

The tracemalloc module now exposes a C API through the new PyTraceMalloc_Track() and PyTraceMalloc_Untrack() functions.
(Contributed by Victor Stinner in bpo-30054.)

The new import__find__load__start and import__find__load__done static markers can be used to trace module imports.
(Contributed by Christian Heimes in bpo-31574.)

The name and doc fields of the structures PyMemberDef, PyGetSetDef, PyStructSequence_Field, PyStructSequence_Desc, and wrapperbase are now of type const char * rather than char *. (Contributed by Serhiy Storchaka in bpo-28761.)

The result of PyUnicode_AsUTF8AndSize() and PyUnicode_AsUTF8() is now of type const char * rather than char *. (Contributed by Serhiy Storchaka in bpo-28769.)

The result of PyMapping_Keys(), PyMapping_Values() and PyMapping_Items() is now always a list, rather than a list or a tuple. (Contributed by Oren Milman in bpo-28280.)

Added functions PySlice_Unpack() and PySlice_AdjustIndices().
(Contributed by Serhiy Storchaka in bpo-27867.)

PyOS_AfterFork() is deprecated in favour of the new functions PyOS_BeforeFork(), PyOS_AfterFork_Parent() and PyOS_AfterFork_Child(). (Contributed by Antoine Pitrou in bpo-16500.)

The PyExc_RecursionErrorInst singleton that was part of the public API has been removed, because its members were never cleared and could cause a segfault during finalization of the interpreter. (Contributed by Xavier de Gaye in bpo-22898 and bpo-30697.)

Added C API support for timezones, with the timezone constructors PyTimeZone_FromOffset() and PyTimeZone_FromOffsetAndName(), and access to the UTC singleton with PyDateTime_TimeZone_UTC. (Contributed by Paul Ganssle in bpo-10381.)

The type of the results of PyThread_start_new_thread() and PyThread_get_thread_ident(), and of the id parameter of PyThreadState_SetAsyncExc(), changed from long to unsigned long.
(Contributed by Serhiy Storchaka in bpo-6532.)

PyUnicode_AsWideCharString() now raises a ValueError if the second argument is NULL and the wchar_t* string contains null characters.
(Contributed by Serhiy Storchaka in bpo-30708.)\nChanges to the startup sequence and the management of dynamic memory\nallocators mean that the long documented requirement to call\nPy_Initialize()\nbefore calling most C API functions is now\nrelied on more heavily, and failing to abide by it may lead to segfaults in\nembedding applications. See the Porting to Python 3.7 section in this\ndocument and the Before Python Initialization section in the C API documentation\nfor more details.\nThe new PyInterpreterState_GetID()\nreturns the unique ID for a\ngiven interpreter.\n(Contributed by Eric Snow in bpo-29102.)\nPy_DecodeLocale()\n, Py_EncodeLocale()\nnow use the UTF-8\nencoding when the UTF-8 mode is enabled.\n(Contributed by Victor Stinner in bpo-29240.)\nPyUnicode_DecodeLocaleAndSize()\nand PyUnicode_EncodeLocale()\nnow use the current locale encoding for surrogateescape\nerror handler.\n(Contributed by Victor Stinner in bpo-29240.)\nThe start and end parameters of PyUnicode_FindChar()\nare\nnow adjusted to behave like string slices.\n(Contributed by Xiang Zhang in bpo-28822.)\nBuild Changes\u00b6\nSupport for building --without-threads\nhas been removed. The\nthreading\nmodule is now always available.\n(Contributed by Antoine Pitrou in bpo-31370.).\nA full copy of libffi is no longer bundled for use when building the\n_ctypes\nmodule on non-OSX UNIX platforms. An installed copy\nof libffi is now required when building _ctypes\non such platforms.\n(Contributed by Zachary Ware in bpo-27979.)\nThe Windows build process no longer depends on Subversion to pull in external\nsources, a Python script is used to download zipfiles from GitHub instead.\nIf Python 3.6 is not found on the system (via py -3.6\n), NuGet is used to\ndownload a copy of 32-bit Python for this purpose. (Contributed by Zachary\nWare in bpo-30450.)\nThe ssl\nmodule requires OpenSSL 1.0.2 or 1.1 compatible libssl.\nOpenSSL 1.0.1 has reached end of lifetime on 2016-12-31 and is no longer\nsupported. 
LibreSSL is temporarily not supported as well. LibreSSL releases up to version 2.6.4 are missing required OpenSSL 1.0.2 APIs.

Optimizations¶

The overhead of calling many methods of various standard library classes implemented in C has been significantly reduced by porting more code to use the METH_FASTCALL convention.
(Contributed by Victor Stinner in bpo-29300, bpo-29507, bpo-29452, and bpo-29286.)

Various optimizations have reduced Python startup time by 10% on Linux and up to 30% on macOS. (Contributed by Victor Stinner, INADA Naoki in bpo-29585, and Ivan Levkivskyi in bpo-31333.)

Method calls are now up to 20% faster due to the bytecode changes which avoid creating bound method instances. (Contributed by Yury Selivanov and INADA Naoki in bpo-26110.)

The asyncio module received a number of notable optimizations for commonly used functions:

- The asyncio.get_event_loop() function has been reimplemented in C to make it up to 15 times faster. (Contributed by Yury Selivanov in bpo-32296.)
- asyncio.Future callback management has been optimized. (Contributed by Yury Selivanov in bpo-32348.)
- asyncio.gather() is now up to 15% faster. (Contributed by Yury Selivanov in bpo-32355.)
- asyncio.sleep() is now up to 2 times faster when the delay argument is zero or negative. (Contributed by Andrew Svetlov in bpo-32351.)
- The performance overhead of asyncio debug mode has been reduced.
(Contributed by Antoine Pitrou in bpo-31970.)\nAs a result of PEP 560 work, the import time\nof typing\nhas been reduced by a factor of 7, and many typing operations\nare now faster.\n(Contributed by Ivan Levkivskyi in bpo-32226.)\nsorted()\nand list.sort()\nhave been optimized for common cases\nto be up to 40-75% faster.\n(Contributed by Elliot Gorokhovsky in bpo-28685.)\ndict.copy()\nis now up to 5.5 times faster.\n(Contributed by Yury Selivanov in bpo-31179.)\nhasattr()\nand getattr()\nare now about 4 times faster when\nname is not found and obj does not override object.__getattr__()\nor object.__getattribute__()\n.\n(Contributed by INADA Naoki in bpo-32544.)\nSearching for certain Unicode characters (like Ukrainian capital \u201c\u0404\u201d) in a string was up to 25 times slower than searching for other characters. It is now only 3 times slower in the worst case. (Contributed by Serhiy Storchaka in bpo-24821.)\nThe collections.namedtuple()\nfactory has been reimplemented to\nmake the creation of named tuples 4 to 6 times faster.\n(Contributed by Jelle Zijlstra with further improvements by INADA Naoki,\nSerhiy Storchaka, and Raymond Hettinger in bpo-28638.)\ndatetime.date.fromordinal()\nand datetime.date.fromtimestamp()\nare now up to 30% faster in the common case.\n(Contributed by Paul Ganssle in bpo-32403.)\nThe os.fwalk()\nfunction is now up to 2 times faster thanks to\nthe use of os.scandir()\n.\n(Contributed by Serhiy Storchaka in bpo-25996.)\nThe speed of the shutil.rmtree()\nfunction has been improved by\n20\u201340% thanks to the use of the os.scandir()\nfunction.\n(Contributed by Serhiy Storchaka in bpo-28564.)\nOptimized case-insensitive matching and searching of regular\nexpressions\n. Searching some patterns can now be up to 20 times faster.\n(Contributed by Serhiy Storchaka in bpo-30285.)\nre.compile()\nnow converts flags\nparameter to int object if\nit is RegexFlag\n. 
It is now as fast as Python 3.5, and faster than Python 3.6 by about 10% depending on the pattern.
(Contributed by INADA Naoki in bpo-31671.)

The modify() methods of the classes selectors.EpollSelector, selectors.PollSelector and selectors.DevpollSelector may be around 10% faster under heavy loads. (Contributed by Giampaolo Rodola' in bpo-30014.)

Constant folding has been moved from the peephole optimizer to the new AST optimizer, which is able to perform optimizations more consistently. (Contributed by Eugene Toder and INADA Naoki in bpo-29469 and bpo-11549.)

Most functions and methods in abc have been rewritten in C. This makes creation of abstract base classes, and calling isinstance() and issubclass() on them, 1.5x faster. This also reduces Python start-up time by up to 10%. (Contributed by Ivan Levkivskyi and INADA Naoki in bpo-31333.)

Significant speed improvements to alternate constructors for datetime.date and datetime.datetime by using fast-path constructors when not constructing subclasses. (Contributed by Paul Ganssle in bpo-32403.)

The speed of comparison of array.array instances has been improved considerably in certain cases. It is now from 10x to 70x faster when comparing arrays holding values of the same integer type.
(Contributed by Adrian Wielgosik in bpo-24700.)

The math.erf() and math.erfc() functions now use the (faster) C library implementation on most platforms.
(Contributed by Serhiy Storchaka in bpo-26121.)

Other CPython Implementation Changes¶

Trace hooks may now opt out of receiving the line events and opt into receiving the opcode events from the interpreter by setting the corresponding new f_trace_lines and f_trace_opcodes attributes on the frame being traced. (Contributed by Nick Coghlan in bpo-31344.)

Fixed some consistency problems with namespace package module attributes.
Namespace module objects now have an __file__ that is set to None (previously unset), and their __spec__.origin is also set to None (previously the string "namespace"). See bpo-32305. Also, the namespace module object's __spec__.loader is set to the same value as __loader__ (previously, the former was set to None). See bpo-32303.

The locals() dictionary now displays in the lexical order that variables were defined. Previously, the order was undefined. (Contributed by Raymond Hettinger in bpo-32690.)

The distutils upload command no longer tries to change CR end-of-line characters to CRLF. This fixes a corruption issue with sdists that ended with a byte equivalent to CR. (Contributed by Bo Bayles in bpo-32304.)

Deprecated Python Behavior¶

Yield expressions (both yield and yield from clauses) are now deprecated in comprehensions and generator expressions (aside from the iterable expression in the leftmost for clause). This ensures that comprehensions always immediately return a container of the appropriate type (rather than potentially returning a generator iterator object), while generator expressions won't attempt to interleave their implicit output with the output from any explicit yield expressions. In Python 3.7, such expressions emit DeprecationWarning when compiled; in Python 3.8 this will be a SyntaxError.
(Contributed by Serhiy Storchaka in bpo-10544.)

Returning a subclass of complex from object.__complex__() is deprecated and will be an error in future Python versions.
This makes\n__complex__()\nconsistent with object.__int__()\nand\nobject.__float__()\n.\n(Contributed by Serhiy Storchaka in bpo-28894.)\nDeprecated Python modules, functions and methods\u00b6\naifc\u00b6\naifc.openfp()\nhas been deprecated and will be removed in Python 3.9.\nUse aifc.open()\ninstead.\n(Contributed by Brian Curtin in bpo-31985.)\nasyncio\u00b6\nSupport for directly await\n-ing instances of asyncio.Lock\nand\nother asyncio synchronization primitives has been deprecated. An\nasynchronous context manager must be used in order to acquire and release\nthe synchronization resource.\n(Contributed by Andrew Svetlov in bpo-32253.)\nThe asyncio.Task.current_task()\nand asyncio.Task.all_tasks()\nmethods have been deprecated.\n(Contributed by Andrew Svetlov in bpo-32250.)\ncollections\u00b6\nIn Python 3.8, the abstract base classes in collections.abc\nwill no\nlonger be exposed in the regular collections\nmodule. This will help\ncreate a clearer distinction between the concrete classes and the abstract\nbase classes.\n(Contributed by Serhiy Storchaka in bpo-25988.)\ndbm\u00b6\ndbm.dumb\nnow supports reading read-only files and no longer writes the\nindex file when it is not changed. A deprecation warning is now emitted\nif the index file is missing and recreated in the 'r'\nand 'w'\nmodes (this will be an error in future Python releases).\n(Contributed by Serhiy Storchaka in bpo-28847.)\nenum\u00b6\nIn Python 3.8, attempting to check for non-Enum objects in Enum\nclasses will raise a TypeError\n(e.g. 1 in Color\n); similarly,\nattempting to check for non-Flag objects in a Flag\nmember will\nraise TypeError\n(e.g. 1 in Perm.RW\n); currently, both operations\nreturn False\ninstead.\n(Contributed by Ethan Furman in bpo-33217.)\ngettext\u00b6\nUsing non-integer value for selecting a plural form in gettext\nis\nnow deprecated. It never correctly worked. 
(Contributed by Serhiy Storchaka in bpo-28692.)

importlib¶

The methods MetaPathFinder.find_module() (replaced by MetaPathFinder.find_spec()) and PathEntryFinder.find_loader() (replaced by PathEntryFinder.find_spec()), both deprecated in Python 3.4, now emit DeprecationWarning.
(Contributed by Matthias Bussonnier in bpo-29576.)

The importlib.abc.ResourceLoader ABC has been deprecated in favour of importlib.abc.ResourceReader.

locale¶

locale.format() has been deprecated; use locale.format_string() instead. (Contributed by Garvit in bpo-10379.)

macpath¶

The macpath module is now deprecated and will be removed in Python 3.8.
(Contributed by Chi Hsuan Yen in bpo-9850.)

threading¶

dummy_threading and _dummy_thread have been deprecated. It is no longer possible to build Python with threading disabled. Use threading instead.
(Contributed by Antoine Pitrou in bpo-31370.)

socket¶

The silent argument value truncation in socket.htons() and socket.ntohs() has been deprecated. In future versions of Python, if the passed argument is larger than 16 bits, an exception will be raised.
(Contributed by Oren Milman in bpo-28332.)

ssl¶

ssl.wrap_socket() is deprecated.
Use\nssl.SSLContext.wrap_socket()\ninstead.\n(Contributed by Christian Heimes in bpo-28124.)\nsunau\u00b6\nsunau.openfp()\nhas been deprecated and will be removed in Python 3.9.\nUse sunau.open()\ninstead.\n(Contributed by Brian Curtin in bpo-31985.)\nsys\u00b6\nDeprecated sys.set_coroutine_wrapper()\nand\nsys.get_coroutine_wrapper()\n.\nThe undocumented sys.callstats()\nfunction has been deprecated and\nwill be removed in a future Python version.\n(Contributed by Victor Stinner in bpo-28799.)\nwave\u00b6\nwave.openfp()\nhas been deprecated and will be removed in Python 3.9.\nUse wave.open()\ninstead.\n(Contributed by Brian Curtin in bpo-31985.)\nDeprecated functions and types of the C API\u00b6\nFunction PySlice_GetIndicesEx()\nis deprecated and replaced with\na macro if Py_LIMITED_API\nis not set or set to a value in the range\nbetween 0x03050400\nand 0x03060000\n(not inclusive), or is 0x03060100\nor higher. (Contributed by Serhiy Storchaka in bpo-27867.)\nPyOS_AfterFork()\nhas been deprecated. 
Use PyOS_BeforeFork(), PyOS_AfterFork_Parent() or PyOS_AfterFork_Child() instead.
(Contributed by Antoine Pitrou in bpo-16500.)

Platform Support Removals¶

- FreeBSD 9 and older are no longer officially supported.
- For full Unicode support, including within extension modules, *nix platforms are now expected to provide at least one of C.UTF-8 (full locale), C.utf8 (full locale) or UTF-8 (LC_CTYPE-only locale) as an alternative to the legacy ASCII-based C locale.
- OpenSSL 0.9.8 and 1.0.1 are no longer supported, which means building CPython 3.7 with SSL/TLS support on older platforms still using these versions requires custom build options that link to a more recent version of OpenSSL.

Notably, this issue affects the Debian 8 (aka "jessie") and Ubuntu 14.04 (aka "Trusty") LTS Linux distributions, as they still use OpenSSL 1.0.1 by default.

Debian 9 ("stretch") and Ubuntu 16.04 ("xenial"), as well as recent releases of other LTS Linux releases (e.g. RHEL/CentOS 7.5, SLES 12-SP3), use OpenSSL 1.0.2 or later, and remain supported in the default build configuration.

CPython's own CI configuration file provides an example of using the SSL compatibility testing infrastructure in CPython's test suite to build and link against OpenSSL 1.1.0 rather than an outdated system-provided OpenSSL.

API and Feature Removals¶

The following features and APIs have been removed from Python 3.7:

- The os.stat_float_times() function has been removed. It was introduced in Python 2.3 for backward compatibility with Python 2.2, and was deprecated since Python 3.1.
- Unknown escapes consisting of '\' and an ASCII letter in replacement templates for re.sub() were deprecated in Python 3.5, and will now cause an error.
- Removed support of the exclude argument in tarfile.TarFile.add(). It was deprecated in Python 2.7 and 3.2.
Use the filter argument instead.
- The ntpath.splitunc() function was deprecated in Python 3.1, and has now been removed. Use splitdrive() instead.
- collections.namedtuple() no longer supports the verbose parameter or _source attribute which showed the generated source code for the named tuple class. This was part of an optimization designed to speed up class creation. (Contributed by Jelle Zijlstra with further improvements by INADA Naoki, Serhiy Storchaka, and Raymond Hettinger in bpo-28638.)
- The functions bool(), float(), list() and tuple() no longer take keyword arguments. The first argument of int() can now be passed only as a positional argument.
- Removed the classes Plist, Dict and _InternalDict in the plistlib module, previously deprecated in Python 2.4. Dict values in the result of the functions readPlist() and readPlistFromBytes() are now normal dicts. You can no longer use attribute access to access items of these dictionaries.
- The asyncio.windows_utils.socketpair() function has been removed. Use the socket.socketpair() function instead; it is available on all platforms since Python 3.5. asyncio.windows_utils.socketpair was just an alias to socket.socketpair on Python 3.5 and newer.
- asyncio no longer exports the selectors and _overlapped modules as asyncio.selectors and asyncio._overlapped. Replace from asyncio import selectors with import selectors.
- Direct instantiation of ssl.SSLSocket and ssl.SSLObject objects is now prohibited. The constructors were never documented, tested, or designed as public constructors. Users were supposed to use ssl.wrap_socket() or ssl.SSLContext. (Contributed by Christian Heimes in bpo-32951.)
- The unused distutils install_misc command has been removed. (Contributed by Eric N. Vander Weele in bpo-29218.)

Module Removals¶

The fpectl module has been removed.
It was never enabled by default, never worked correctly on x86-64, and it changed the Python ABI in ways that caused unexpected breakage of C extensions.
(Contributed by Nathaniel J. Smith in bpo-29137.)

Windows-only Changes¶

The Python launcher (py.exe) can accept 32- and 64-bit specifiers without having to specify a minor version as well. So py -3-32 and py -3-64 become valid as well as py -3.7-32. The -m-64 and -m.n-64 forms are now also accepted to force 64-bit Python even if 32-bit would have otherwise been used. If the specified version is not available, py.exe will exit with an error.
(Contributed by Steve Barnes in bpo-30291.)

The launcher can be run as py -0 to produce a list of the installed Pythons, with the default marked with an asterisk. Running py -0p will include the paths. If py is run with a version specifier that cannot be matched, it will also print the short-form list of available specifiers.
(Contributed by Steve Barnes in bpo-30362.)

Porting to Python 3.7¶

This section lists previously described changes and other bugfixes that may require changes to your code.

Changes in Python Behavior¶

- async and await names are now reserved keywords. Code using these names as identifiers will now raise a SyntaxError. (Contributed by Jelle Zijlstra in bpo-30406.)
- PEP 479 is enabled for all code in Python 3.7, meaning that StopIteration exceptions raised directly or indirectly in coroutines and generators are transformed into RuntimeError exceptions. (Contributed by Yury Selivanov in bpo-32670.)
- object.__aiter__() methods can no longer be declared as asynchronous.
(Contributed by Yury Selivanov in bpo-31709.)
- Due to an oversight, earlier Python versions erroneously accepted the following syntax:

  f(1 for x in [1],)

  class C(1 for x in [1]):
      pass

  Python 3.7 now correctly raises a SyntaxError, as a generator expression always needs to be directly inside a set of parentheses and cannot have a comma on either side, and the duplication of the parentheses can be omitted only on calls. (Contributed by Serhiy Storchaka in bpo-32012 and bpo-32023.)
- When using the -m switch, the initial working directory is now added to sys.path, rather than an empty string (which dynamically denoted the current working directory at the time of each import). Any programs that are checking for the empty string, or otherwise relying on the previous behaviour, will need to be updated accordingly (e.g. by also checking for os.getcwd() or os.path.dirname(__main__.__file__), depending on why the code was checking for the empty string in the first place).

Changes in the Python API¶

- socketserver.ThreadingMixIn.server_close now waits until all non-daemon threads complete. Set the new socketserver.ThreadingMixIn.block_on_close class attribute to False to get the pre-3.7 behaviour. (Contributed by Victor Stinner in bpo-31233 and bpo-33540.)
- socketserver.ForkingMixIn.server_close now waits until all child processes complete. Set the new socketserver.ForkingMixIn.block_on_close class attribute to False to get the pre-3.7 behaviour. (Contributed by Victor Stinner in bpo-31151 and bpo-33540.)
- The locale.localeconv() function now temporarily sets the LC_CTYPE locale to the value of LC_NUMERIC in some cases. (Contributed by Victor Stinner in bpo-31900.)
- pkgutil.walk_packages() now raises a ValueError if path is a string. Previously an empty list was returned. (Contributed by Sanyam Khurana in bpo-24744.)
- A format string argument for string.Formatter.format() is now positional-only. Passing it as a keyword argument was deprecated in Python 3.5.
(Contributed by Serhiy Storchaka in bpo-29193.)
- The key, value and coded_value attributes of the http.cookies.Morsel class are now read-only. Assigning to them was deprecated in Python 3.5. Use the set() method for setting them. (Contributed by Serhiy Storchaka in bpo-29192.)
- The mode argument of os.makedirs() no longer affects the file permission bits of newly created intermediate-level directories. To set their file permission bits you can set the umask before invoking makedirs(). (Contributed by Serhiy Storchaka in bpo-19930.)
- The struct.Struct.format type is now str instead of bytes. (Contributed by Victor Stinner in bpo-21071.)
- cgi.parse_multipart() now accepts the encoding and errors arguments and returns the same results as FieldStorage: for non-file fields, the value associated to a key is a list of strings, not bytes. (Contributed by Pierre Quentel in bpo-29979.)
- Due to internal changes in socket, calling socket.fromshare() on a socket created by socket.share in older Python versions is not supported.
- repr for BaseException has changed to not include the trailing comma. Most exceptions are affected by this change. (Contributed by Serhiy Storchaka in bpo-30399.)
- repr for datetime.timedelta has changed to include the keyword arguments in the output. (Contributed by Utkarsh Upadhyay in bpo-30302.)
- Because shutil.rmtree() is now implemented using the os.scandir() function, the user-specified handler onerror is now called with the first argument os.scandir instead of os.listdir when listing the directory fails.
- Support for nested sets and set operations in regular expressions as in Unicode Technical Standard #18 might be added in the future. This would change the syntax. To facilitate this future change, a FutureWarning will be raised in ambiguous cases for the time being. Those include sets starting with a literal '[' or containing literal character sequences '--', '&&', '~~', and '||'. To avoid a warning, escape them with a backslash.
(Contributed by Serhiy Storchaka in bpo-30349.)
The result of splitting a string on a regular expression that could match an empty string has been changed. For example, splitting on r'\s*' will now split not only on whitespace as it did previously, but also on empty strings before all non-whitespace characters and just before the end of the string. The previous behavior can be restored by changing the pattern to r'\s+'. A FutureWarning was emitted for such patterns since Python 3.5.
For patterns that match both empty and non-empty strings, the result of searching for all matches may also be changed in other cases. For example, in the string 'a\n\n', the pattern r'(?m)^\s*?$' will not only match empty strings at positions 2 and 3, but also the string '\n' at positions 2–3. To match only blank lines, the pattern should be rewritten as r'(?m)^[^\S\n]*$'.
re.sub() now replaces empty matches adjacent to a previous non-empty match. For example re.sub('x*', '-', 'abxd') now returns '-a-b--d-' instead of '-a-b-d-' (the first minus between 'b' and 'd' replaces 'x', and the second minus replaces an empty string between 'x' and 'd').
(Contributed by Serhiy Storchaka in bpo-25054 and bpo-32308.)
re.escape() was changed to only escape regex special characters instead of escaping all characters other than ASCII letters, numbers, and '_'. (Contributed by Serhiy Storchaka in bpo-29995.)
tracemalloc.Traceback frames are now sorted from oldest to most recent to be more consistent with traceback. (Contributed by Jesse Bakker in bpo-32121.)
On OSes that support socket.SOCK_NONBLOCK or socket.SOCK_CLOEXEC bit flags, socket.type no longer has them applied. Therefore, checks like if sock.type == socket.SOCK_STREAM work as expected on all platforms.
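The re behavior changes described above can be demonstrated with a short sketch, reusing the changelog's own 'abxd' example:

```python
import re

# sub() now also replaces the empty match adjacent to the non-empty 'x' match.
assert re.sub('x*', '-', 'abxd') == '-a-b--d-'

# split() on a pattern that can match an empty string now splits there too,
# producing the pieces between the same set of matches sub() replaces.
assert re.split('x*', 'abxd') == ['', 'a', 'b', '', 'd', '']

# re.escape() leaves characters that are not regex metacharacters alone.
assert re.escape('/usr/local') == '/usr/local'
```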
(Contributed by Yury Selivanov in bpo-32331.)
On Windows the default for the close_fds argument of subprocess.Popen was changed from False to True when redirecting the standard handles. If you previously depended on handles being inherited when using subprocess.Popen with standard io redirection, you will have to pass close_fds=False to preserve the previous behaviour, or use STARTUPINFO.lpAttributeList.
importlib.machinery.PathFinder.invalidate_caches() – which implicitly affects importlib.invalidate_caches() – now deletes entries in sys.path_importer_cache which are set to None. (Contributed by Brett Cannon in bpo-33169.)
In asyncio, loop.sock_recv(), loop.sock_sendall(), loop.sock_accept(), loop.getaddrinfo(), loop.getnameinfo() have been changed to be proper coroutine methods to match their documentation. Previously, these methods returned asyncio.Future instances. (Contributed by Yury Selivanov in bpo-32327.)
asyncio.Server.sockets now returns a copy of the internal list of server sockets, instead of returning it directly. (Contributed by Yury Selivanov in bpo-32662.)
Struct.format is now a str instance instead of a bytes instance. (Contributed by Victor Stinner in bpo-21071.)
argparse subparsers can now be made mandatory by passing required=True to ArgumentParser.add_subparsers(). (Contributed by Anthony Sottile in bpo-26510.)
ast.literal_eval() is now stricter. Addition and subtraction of arbitrary numbers are no longer allowed. (Contributed by Serhiy Storchaka in bpo-31778.)
Calendar.itermonthdates will now consistently raise an exception when a date falls outside of the 0001-01-01 through 9999-12-31 range. To support applications that cannot tolerate such exceptions, the new Calendar.itermonthdays3 and Calendar.itermonthdays4 can be used. The new methods return tuples and are not restricted by the range supported by datetime.date.
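The required=True subparser behavior noted above can be sketched as follows; the program and command names are made up for illustration:

```python
import argparse

parser = argparse.ArgumentParser(prog='tool')
subparsers = parser.add_subparsers(dest='command', required=True)
subparsers.add_parser('build')
subparsers.add_parser('test')

# A valid subcommand parses normally.
args = parser.parse_args(['build'])
assert args.command == 'build'

# Omitting the subcommand is now a usage error; argparse raises SystemExit.
try:
    parser.parse_args([])
    raised = False
except SystemExit:
    raised = True
assert raised
```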
(Contributed by Alexander Belopolsky in bpo-28292.)
collections.ChainMap now preserves the order of the underlying mappings. (Contributed by Raymond Hettinger in bpo-32792.)
The submit() method of concurrent.futures.ThreadPoolExecutor and concurrent.futures.ProcessPoolExecutor now raises a RuntimeError if called during interpreter shutdown. (Contributed by Mark Nemec in bpo-33097.)
The configparser.ConfigParser constructor now uses read_dict() to process the default values, making its behavior consistent with the rest of the parser. Non-string keys and values in the defaults dictionary are now being implicitly converted to strings. (Contributed by James Tocknell in bpo-23835.)
Several undocumented internal imports were removed. One example is that os.errno is no longer available; use import errno directly instead. Note that such undocumented internal imports may be removed any time without notice, even in micro version releases.
Changes in the C API¶
The function PySlice_GetIndicesEx() is considered unsafe for resizable sequences. If the slice indices are not instances of int, but objects that implement the __index__() method, the sequence can be resized after passing its length to PySlice_GetIndicesEx(). This can lead to returning indices out of the length of the sequence.
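The stricter ast.literal_eval() mentioned above still accepts the addition form needed to write complex literals, but rejects arithmetic on plain numbers; a quick sketch:

```python
import ast

# Real + imaginary literals are still accepted (needed for complex numbers).
assert ast.literal_eval('1+2j') == 1 + 2j

# Arbitrary addition of plain numbers is now rejected with ValueError.
try:
    ast.literal_eval('2+2')
    raised = False
except ValueError:
    raised = True
assert raised
```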
To avoid possible problems, use the new functions PySlice_Unpack() and PySlice_AdjustIndices(). (Contributed by Serhiy Storchaka in bpo-27867.)
CPython bytecode changes¶
There are two new opcodes: LOAD_METHOD and CALL_METHOD. (Contributed by Yury Selivanov and INADA Naoki in bpo-26110.)
The STORE_ANNOTATION opcode has been removed. (Contributed by Mark Shannon in bpo-32550.)
Windows-only Changes¶
The file used to override sys.path is now called ._pth instead of 'sys.path'. See Finding modules for more information. (Contributed by Steve Dower in bpo-28137.)
Other CPython implementation changes¶
In preparation for potential future changes to the public CPython runtime initialization API (see PEP 432 for an initial, but somewhat outdated, draft), CPython's internal startup and configuration management logic has been significantly refactored. While these updates are intended to be entirely transparent to both embedding applications and users of the regular CPython CLI, they're being mentioned here as the refactoring changes the internal order of various operations during interpreter startup, and hence may uncover previously latent defects, either in embedding applications, or in CPython itself. (Initially contributed by Nick Coghlan and Eric Snow as part of bpo-22257, and further updated by Nick, Eric, and Victor Stinner in a number of other issues.) Some known details affected:
PySys_AddWarnOptionUnicode() is not currently usable by embedding applications due to the requirement to create a Unicode object prior to calling Py_Initialize.
Use PySys_AddWarnOption() instead.
warnings filters added by an embedding application with PySys_AddWarnOption() should now more consistently take precedence over the default filters set by the interpreter.
Due to changes in the way the default warnings filters are configured, setting Py_BytesWarningFlag to a value greater than one is no longer sufficient to both emit BytesWarning messages and have them converted to exceptions. Instead, the flag must be set (to cause the warnings to be emitted in the first place), and an explicit error::BytesWarning warnings filter added to convert them to exceptions.
Due to a change in the way docstrings are handled by the compiler, the implicit return None in a function body consisting solely of a docstring is now marked as occurring on the same line as the docstring, not on the function's header line.
The current exception state has been moved from the frame object to the co-routine. This simplified the interpreter and fixed a couple of obscure bugs caused by having to swap the exception state when entering or exiting a generator. (Contributed by Mark Shannon in bpo-25612.)
Notable changes in Python 3.7.1¶
Starting in 3.7.1, Py_Initialize() now consistently reads and respects all of the same environment settings as Py_Main() (in earlier Python versions, it respected an ill-defined subset of those environment variables, while in Python 3.7.0 it didn't read any of them due to bpo-34247). If this behavior is unwanted, set Py_IgnoreEnvironmentFlag to 1 before calling Py_Initialize().
In 3.7.1 the C API for Context Variables was updated to use PyObject pointers. See also bpo-34762.
In 3.7.1 the tokenize module now implicitly emits a NEWLINE token when provided with input that does not have a trailing new line.
This behavior now matches what the C tokenizer does internally. (Contributed by Ammar Askar in bpo-33899.)
Notable changes in Python 3.7.2¶
In 3.7.2, venv on Windows no longer copies the original binaries, but creates redirector scripts named python.exe and pythonw.exe instead. This resolves a long-standing issue where all virtual environments would have to be upgraded or recreated with each Python update. However, note that this release will still require recreation of virtual environments in order to get the new scripts.
Notable changes in Python 3.7.6¶
Due to significant security concerns, the reuse_address parameter of asyncio.loop.create_datagram_endpoint() is no longer supported. This is because of the behavior of the socket option SO_REUSEADDR in UDP. For more details, see the documentation for loop.create_datagram_endpoint(). (Contributed by Kyle Stanley, Antoine Pitrou, and Yury Selivanov in bpo-37228.)
Notable changes in Python 3.7.10¶
Earlier Python versions allowed using both ; and & as query parameter separators in urllib.parse.parse_qs() and urllib.parse.parse_qsl(). Due to security concerns, and to conform with newer W3C recommendations, this has been changed to allow only a single separator key, with & as the default. This change also affects cgi.parse() and cgi.parse_multipart() as they use the affected functions internally. For more details, please see their respective documentation. (Contributed by Adam Goldschmidt, Senthil Kumaran and Ken Jin in bpo-42967.)
Notable changes in Python 3.7.11¶
A security fix alters the ftplib.FTP behavior to not trust the IPv4 address sent from the remote server when setting up a passive data channel. We reuse the FTP server IP address instead. For unusual code requiring the old behavior, set a trust_server_pasv_ipv4_address attribute on your FTP instance to True.
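The urllib.parse.parse_qs() separator change described above behaves like this, assuming a Python build with the security fix applied:

```python
from urllib.parse import parse_qs

# '&' is the only separator recognized by default.
assert parse_qs('a=1&b=2') == {'a': ['1'], 'b': ['2']}

# ';' is no longer treated as a separator, so it stays inside the value.
assert parse_qs('a=1;b=2') == {'a': ['1;b=2']}

# The old behavior can be requested explicitly per call.
assert parse_qs('a=1;b=2', separator=';') == {'a': ['1'], 'b': ['2']}
```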
(See gh-87451.)
The presence of newline or tab characters in parts of a URL allows for some forms of attacks. Following the WHATWG specification that updates RFC 3986, ASCII newline \n, \r and tab \t characters are stripped from the URL by the urllib.parse parser, preventing such attacks. The removed characters are controlled by a new module-level variable urllib.parse._UNSAFE_URL_BYTES_TO_REMOVE. (See gh-88048.)
Notable security feature in 3.7.14¶
Converting between int and str in bases other than 2 (binary), 4, 8 (octal), 16 (hexadecimal), or 32, such as base 10 (decimal), now raises a ValueError if the number of digits in string form is above a limit, to avoid potential denial-of-service attacks due to the algorithmic complexity. This is a mitigation for CVE 2020-10735. This limit can be configured or disabled by environment variable, command line flag, or sys APIs. See the integer string conversion length limitation documentation. The default limit is 4300 digits in string form.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 21032}
{"url": "https://docs.python.org/3/tutorial/appendix.html", "title": "Appendix", "content": "16. Appendix¶
16.1. Interactive Mode¶
There are two variants of the interactive REPL. The classic basic interpreter is supported on all platforms with minimal line control capabilities.
Since Python 3.13, a new interactive shell is used by default. This one supports color, multiline editing, history browsing, and paste mode. To disable color, see Controlling color for details.
Function keys provide some additional functionality. F1 enters the interactive help browser pydoc. F2 allows for browsing command-line history with neither output nor the >>> and … prompts. F3 enters "paste mode", which makes pasting larger blocks of code easier. Press F3 to return to the regular prompt.
When using the new interactive shell, exit the shell by typing exit or quit. Adding call parentheses after those commands is not required.
If the new interactive shell is not desired, it can be disabled via the PYTHON_BASIC_REPL environment variable.
16.1.1. Error Handling¶
When an error occurs, the interpreter prints an error message and a stack trace. In interactive mode, it then returns to the primary prompt; when input came from a file, it exits with a nonzero exit status after printing the stack trace. (Exceptions handled by an except clause in a try statement are not errors in this context.) Some errors are unconditionally fatal and cause an exit with a nonzero exit status; this applies to internal inconsistencies and some cases of running out of memory. All error messages are written to the standard error stream; normal output from executed commands is written to standard output.
Typing the interrupt character (usually Control-C or Delete) to the primary or secondary prompt cancels the input and returns to the primary prompt. [1] Typing an interrupt while a command is executing raises the KeyboardInterrupt exception, which may be handled by a try statement.
16.1.2. Executable Python Scripts¶
On BSD'ish Unix systems, Python scripts can be made directly executable, like shell scripts, by putting the line
#!/usr/bin/env python3
(assuming that the interpreter is on the user's PATH) at the beginning of the script and giving the file an executable mode. The #! must be the first two characters of the file.
On some platforms, this first line must end with a Unix-style line ending ('\n'), not a Windows ('\r\n') line ending. Note that the hash, or pound, character, '#', is used to start a comment in Python.
The script can be given an executable mode, or permission, using the chmod command.
$ chmod +x myscript.py
On Windows systems, there is no notion of an "executable mode". The Python installer automatically associates .py files with python.exe so that a double-click on a Python file will run it as a script. The extension can also be .pyw; in that case, the console window that normally appears is suppressed.
16.1.3. The Interactive Startup File¶
When you use Python interactively, it is frequently handy to have some standard commands executed every time the interpreter is started. You can do this by setting an environment variable named PYTHONSTARTUP to the name of a file containing your start-up commands. This is similar to the .profile feature of the Unix shells.
This file is only read in interactive sessions, not when Python reads commands from a script, and not when /dev/tty is given as the explicit source of commands (which otherwise behaves like an interactive session). It is executed in the same namespace where interactive commands are executed, so that objects that it defines or imports can be used without qualification in the interactive session. You can also change the prompts sys.ps1 and sys.ps2 in this file.
If you want to read an additional start-up file from the current directory, you can program this in the global start-up file using code like if os.path.isfile('.pythonrc.py'): exec(open('.pythonrc.py').read()).
If you want to use the startup file in a script, you must do this explicitly in the script:
import os
filename = os.environ.get('PYTHONSTARTUP')
if filename and os.path.isfile(filename):
    with open(filename) as fobj:
        startup_file = fobj.read()
    exec(startup_file)
16.1.4.
The Customization Modules¶
Python provides two hooks to let you customize it: sitecustomize and usercustomize. To see how it works, you first need to find the location of your user site-packages directory. Start Python and run this code:
>>> import site
>>> site.getusersitepackages()
'/home/user/.local/lib/python3.x/site-packages'
Now you can create a file named usercustomize.py in that directory and put anything you want in it. It will affect every invocation of Python, unless it is started with the -s option to disable the automatic import.
sitecustomize works in the same way, but is typically created by an administrator of the computer in the global site-packages directory, and is imported before usercustomize. See the documentation of the site module for more details.
Footnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1234}
{"url": "https://docs.python.org/3/library/imp.html", "title": "imp — Access the import internals", "content": "imp — Access the import internals¶
Deprecated since version 3.4, removed in version 3.12. This module is no longer part of the Python standard library. It was removed in Python 3.12 after being deprecated in Python 3.4. The removal notice includes guidance for migrating code from imp to importlib.
The last version of Python that provided the imp module was Python 3.11.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 93}
{"url": "https://docs.python.org/3/faq/extending.html", "title": "Extending/Embedding FAQ", "content": "Extending/Embedding FAQ¶
Can I create my own functions in C?¶
Yes, you can create built-in modules containing functions, variables, exceptions and even new types in C.
This is explained in the document Extending and Embedding the Python Interpreter. Most intermediate or advanced Python books will also cover this topic.
Can I create my own functions in C++?¶
Yes, using the C compatibility features found in C++. Place extern "C" { ... } around the Python include files and put extern "C" before each function that is going to be called by the Python interpreter. Global or static C++ objects with constructors are probably not a good idea.
Writing C is hard; are there any alternatives?¶
There are a number of alternatives to writing your own C extensions, depending on what you're trying to do. Recommended third-party tools offer both simpler and more sophisticated approaches to creating C and C++ extensions for Python.
How can I execute arbitrary Python statements from C?¶
The highest-level function to do this is PyRun_SimpleString(), which takes a single string argument to be executed in the context of the module __main__ and returns 0 for success and -1 when an exception occurred (including SyntaxError). If you want more control, use PyRun_String(); see the source for PyRun_SimpleString() in Python/pythonrun.c.
How can I evaluate an arbitrary Python expression from C?¶
Call the function PyRun_String() from the previous question with the start symbol Py_eval_input; it parses an expression, evaluates it and returns its value.
How do I extract C values from a Python object?¶
That depends on the object's type. If it's a tuple, PyTuple_Size() returns its length and PyTuple_GetItem() returns the item at a specified index. Lists have similar functions, PyList_Size() and PyList_GetItem().
For bytes, PyBytes_Size() returns its length and PyBytes_AsStringAndSize() provides a pointer to its value and its length.
Note that Python bytes objects may contain null bytes, so C's strlen() should not be used.
To test the type of an object, first make sure it isn't NULL, and then use PyBytes_Check(), PyTuple_Check(), PyList_Check(), etc.
There is also a high-level API to Python objects which is provided by the so-called 'abstract' interface – read Include/abstract.h for further details. It allows interfacing with any kind of Python sequence using calls like PySequence_Length(), PySequence_GetItem(), etc. as well as many other useful protocols such as numbers (PyNumber_Index() et al.) and mappings in the PyMapping APIs.
How do I use Py_BuildValue() to create a tuple of arbitrary length?¶
You can't. Use PyTuple_Pack() instead.
How do I call an object's method from C?¶
The PyObject_CallMethod() function can be used to call an arbitrary method of an object. The parameters are the object, the name of the method to call, a format string like that used with Py_BuildValue(), and the argument values:
PyObject *
PyObject_CallMethod(PyObject *object, const char *method_name,
                    const char *arg_format, ...);
This works for any object that has methods – whether built-in or user-defined. You are responsible for eventually Py_DECREF()'ing the return value.
To call, e.g., a file object's "seek" method with arguments 10, 0 (assuming the file object pointer is "f"):
res = PyObject_CallMethod(f, "seek", "(ii)", 10, 0);
if (res == NULL) {
    ... an exception occurred ...
}
else {
    Py_DECREF(res);
}
Note that since PyObject_CallObject() always wants a tuple for the argument list, to call a function without arguments, pass "()" for the format, and to call a function with one argument, surround the argument in parentheses, e.g.
"(i)".
How do I catch the output from PyErr_Print() (or anything that prints to stdout/stderr)?¶
In Python code, define an object that supports the write() method. Assign this object to sys.stdout and sys.stderr. Call print_error, or just allow the standard traceback mechanism to work. Then, the output will go wherever your write() method sends it.
The easiest way to do this is to use the io.StringIO class:
>>> import io, sys
>>> sys.stdout = io.StringIO()
>>> print('foo')
>>> print('hello world!')
>>> sys.stderr.write(sys.stdout.getvalue())
foo
hello world!
A custom object to do the same would look like this:
>>> import io, sys
>>> class StdoutCatcher(io.TextIOBase):
...     def __init__(self):
...         self.data = []
...     def write(self, stuff):
...         self.data.append(stuff)
...
>>> import sys
>>> sys.stdout = StdoutCatcher()
>>> print('foo')
>>> print('hello world!')
>>> sys.stderr.write(''.join(sys.stdout.data))
foo
hello world!
How do I access a module written in Python from C?¶
You can get a pointer to the module object as follows:
module = PyImport_ImportModule("<modulename>");
If the module hasn't been imported yet (i.e. it is not yet present in sys.modules), this initializes the module; otherwise it simply returns the value of sys.modules["<modulename>"]. Note that it doesn't enter the module into any namespace – it only ensures it has been initialized and is stored in sys.modules.
You can then access the module's attributes (i.e. any name defined in the module) as follows:
attr = PyObject_GetAttrString(module, "<attrname>");
Calling PyObject_SetAttrString() to assign to variables in the module also works.
How do I interface to C++ objects from Python?¶
Depending on your requirements, there are many approaches. To do this manually, begin by reading the "Extending and Embedding" document.
Realize that for the Python run-time system, there isn't a whole lot of difference between C and C++ – so the strategy of building a new Python type around a C structure (pointer) type will also work for C++ objects.
For C++ libraries, see Writing C is hard; are there any alternatives?
I added a module using the Setup file and the make fails; why?¶
Setup must end in a newline; if there is no newline there, the build process fails. (Fixing this requires some ugly shell script hackery, and this bug is so minor that it doesn't seem worth the effort.)
How do I debug an extension?¶
When using GDB with dynamically loaded extensions, you can't set a breakpoint in your extension until your extension is loaded.
In your .gdbinit file (or interactively), add the command:
br _PyImport_LoadDynamicModule
Then, when you run GDB:
$ gdb /local/bin/python
(gdb) run myscript.py
(gdb) continue # repeat until your extension is loaded
(gdb) finish # so that your extension is loaded
(gdb) br myfunction.c:50
(gdb) continue
I want to compile a Python module on my Linux system, but some files are missing. Why?¶
Most packaged versions of Python omit some files required for compiling Python extensions.
For Red Hat, install the python3-devel RPM to get the necessary files.
For Debian, run apt-get install python3-dev.
How do I tell "incomplete input" from "invalid input"?¶
Sometimes you want to emulate the Python interactive interpreter's behavior, where it gives you a continuation prompt when the input is incomplete (e.g. you typed the start of an "if" statement or you didn't close your parentheses or triple string quotes), but it gives you a syntax error message immediately when the input is invalid.
In Python you can use the codeop module, which approximates the parser's behavior sufficiently.
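A minimal sketch of the codeop approach mentioned above: codeop.compile_command() distinguishes the three cases of complete, incomplete, and invalid input.

```python
import codeop

# Complete input compiles to a code object.
assert codeop.compile_command('x = 1') is not None

# Incomplete input returns None, signalling "show a continuation prompt".
assert codeop.compile_command('if x:') is None

# Invalid input raises SyntaxError immediately.
try:
    codeop.compile_command('x ===')
    raised = False
except SyntaxError:
    raised = True
assert raised
```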
IDLE uses this, for example.
The easiest way to do it in C is to call PyRun_InteractiveLoop() (perhaps in a separate thread) and let the Python interpreter handle the input for you. You can also set the PyOS_ReadlineFunctionPointer() to point at your custom input function. See Modules/readline.c and Parser/myreadline.c for more hints.
How do I find undefined g++ symbols __builtin_new or __pure_virtual?¶
To dynamically load g++ extension modules, you must recompile Python, relink it using g++ (change LINKCC in the Python Modules Makefile), and link your extension module using g++ (e.g., g++ -shared -o mymodule.so mymodule.o).
Can I create an object class with some methods implemented in C and others in Python (e.g. through inheritance)?¶
Yes, you can inherit from built-in classes such as int, list, dict, etc.
The Boost Python Library (BPL, https://www.boost.org/libs/python/doc/index.html) provides a way of doing this from C++ (i.e. you can inherit from an extension class written in C++ using the BPL).", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2071}
{"url": "https://docs.python.org/3/howto/curses.html", "title": "Curses Programming with Python", "content": "Curses Programming with Python¶
- Author: A.M. Kuchling, Eric S. Raymond
- Release: 2.04
What is curses?¶
The curses library supplies a terminal-independent screen-painting and keyboard-handling facility for text-based terminals; such terminals include VT100s, the Linux console, and the simulated terminal provided by various programs. Display terminals support various control codes to perform common operations such as moving the cursor, scrolling the screen, and erasing areas. Different terminals use widely differing codes, and often have their own minor quirks.
In a world of graphical displays, one might ask "why bother"?
It's true that character-cell display terminals are an obsolete technology, but there are niches in which being able to do fancy things with them is still valuable. One niche is on small-footprint or embedded Unixes that don't run an X server. Another is tools such as OS installers and kernel configurators that may have to run before any graphical support is available.
The curses library provides fairly basic functionality, providing the programmer with an abstraction of a display containing multiple non-overlapping windows of text. The contents of a window can be changed in various ways—adding text, erasing it, changing its appearance—and the curses library will figure out what control codes need to be sent to the terminal to produce the right output. curses doesn't provide many user-interface concepts such as buttons, checkboxes, or dialogs; if you need such features, consider a user interface library such as Urwid.
The curses library was originally written for BSD Unix; the later System V versions of Unix from AT&T added many enhancements and new functions. BSD curses is no longer maintained, having been replaced by ncurses, which is an open-source implementation of the AT&T interface. If you're using an open-source Unix such as Linux or FreeBSD, your system almost certainly uses ncurses. Since most current commercial Unix versions are based on System V code, all the functions described here will probably be available. The older versions of curses carried by some proprietary Unixes may not support everything, though.
The Windows version of Python doesn't include the curses module. A ported version called UniCurses is available.
The Python curses module¶
The Python module is a fairly simple wrapper over the C functions provided by curses; if you're already familiar with curses programming in C, it's really easy to transfer that knowledge to Python.
The biggest difference is that the Python interface makes things simpler by merging different C functions such as addstr(), mvaddstr(), and mvwaddstr() into a single addstr() method. You'll see this covered in more detail later.
This HOWTO is an introduction to writing text-mode programs with curses and Python. It doesn't attempt to be a complete guide to the curses API; for that, see the Python library guide's section on ncurses, and the C manual pages for ncurses. It will, however, give you the basic ideas.
Starting and ending a curses application¶
Before doing anything, curses must be initialized. This is done by calling the initscr() function, which will determine the terminal type, send any required setup codes to the terminal, and create various internal data structures. If successful, initscr() returns a window object representing the entire screen; this is usually called stdscr after the name of the corresponding C variable.
import curses
stdscr = curses.initscr()
Usually curses applications turn off automatic echoing of keys to the screen, in order to be able to read keys and only display them under certain circumstances. This requires calling the noecho() function.
curses.noecho()
Applications will also commonly need to react to keys instantly, without requiring the Enter key to be pressed; this is called cbreak mode, as opposed to the usual buffered input mode.
curses.cbreak()
Terminals usually return special keys, such as the cursor keys or navigation keys such as Page Up and Home, as a multibyte escape sequence. While you could write your application to expect such sequences and process them accordingly, curses can do it for you, returning a special value such as curses.KEY_LEFT. To get curses to do the job, you'll have to enable keypad mode.
stdscr.keypad(True)
Terminating a curses application is much easier than starting one.
You'll need to call:
curses.nocbreak()
stdscr.keypad(False)
curses.echo()
to reverse the curses-friendly terminal settings. Then call the endwin() function to restore the terminal to its original operating mode.
curses.endwin()
A common problem when debugging a curses application is to get your terminal messed up when the application dies without restoring the terminal to its previous state. In Python this commonly happens when your code is buggy and raises an uncaught exception. Keys are no longer echoed to the screen when you type them, for example, which makes using the shell difficult.
In Python you can avoid these complications and make debugging much easier by importing the curses.wrapper() function and using it like this:
from curses import wrapper

def main(stdscr):
    # Clear screen
    stdscr.clear()

    # This raises ZeroDivisionError when i == 10.
    for i in range(0, 11):
        v = i - 10
        stdscr.addstr(i, 0, '10 divided by {} is {}'.format(v, 10/v))

    stdscr.refresh()
    stdscr.getkey()

wrapper(main)
The wrapper() function takes a callable object and does the initializations described above, also initializing colors if color support is present. wrapper() then runs your provided callable. Once the callable returns, wrapper() will restore the original state of the terminal. The callable is called inside a try…except that catches exceptions, restores the state of the terminal, and then re-raises the exception. Therefore your terminal won't be left in a funny state on exception and you'll be able to read the exception's message and traceback.
Windows and Pads¶
Windows are the basic abstraction in curses. A window object represents a rectangular area of the screen, and supports methods to display text, erase it, allow the user to input strings, and so forth.
The stdscr object returned by the initscr() function is a window object that covers the entire screen.
Many programs may need\nonly this single window, but you might wish to divide the screen into\nsmaller windows, in order to redraw or clear them separately. The\nnewwin()\nfunction creates a new window of a given size,\nreturning the new window object.\nbegin_x = 20; begin_y = 7\nheight = 5; width = 40\nwin = curses.newwin(height, width, begin_y, begin_x)\nNote that the coordinate system used in curses is unusual. Coordinates are always passed in the order y,x, and the top-left corner of a window is coordinate (0,0). This breaks the normal convention for handling coordinates where the x coordinate comes first. This is an unfortunate difference from most other computer applications, but it\u2019s been part of curses since it was first written, and it\u2019s too late to change things now.\nYour application can determine the size of the screen by using the\ncurses.LINES\nand curses.COLS\nvariables to obtain the y and\nx sizes. Legal coordinates will then extend from (0,0)\nto\n(curses.LINES - 1, curses.COLS - 1)\n.\nWhen you call a method to display or erase text, the effect doesn\u2019t\nimmediately show up on the display. Instead you must call the\nrefresh()\nmethod of window objects to update the\nscreen.\nThis is because curses was originally written with slow 300-baud\nterminal connections in mind; with these terminals, minimizing the\ntime required to redraw the screen was very important. Instead curses\naccumulates changes to the screen and displays them in the most\nefficient manner when you call refresh()\n. For example, if your\nprogram displays some text in a window and then clears the window,\nthere\u2019s no need to send the original text because they\u2019re never\nvisible.\nIn practice, explicitly telling curses to redraw a window doesn\u2019t\nreally complicate programming with curses much. Most programs go into a flurry\nof activity, and then pause waiting for a keypress or some other action on the\npart of the user. 
All you have to do is to be sure that the screen has been redrawn before pausing to wait for user input, by first calling stdscr.refresh()\nor the refresh()\nmethod of some other relevant window.\nA pad is a special case of a window; it can be larger than the actual display screen, and only a portion of the pad is displayed at a time. Creating a pad requires the pad\u2019s height and width, while refreshing a pad requires giving the coordinates of the on-screen area where a subsection of the pad will be displayed.\npad = curses.newpad(100, 100)\n# These loops fill the pad with letters; addch() is\n# explained in the next section\nfor y in range(0, 99):\n    for x in range(0, 99):\n        pad.addch(y,x, ord('a') + (x*x+y*y) % 26)\n\n# Displays a section of the pad in the middle of the screen.\n# (0,0) : coordinate of upper-left corner of pad area to display.\n# (5,5) : coordinate of upper-left corner of window area to be filled\n#         with pad content.\n# (20, 75) : coordinate of lower-right corner of window area to be\n#            filled with pad content.\npad.refresh( 0,0, 5,5, 20,75)\nThe refresh()\ncall displays a section of the pad in the rectangle extending from coordinate (5,5) to coordinate (20,75) on the screen; the upper left corner of the displayed section is coordinate (0,0) on the pad. Beyond that difference, pads are exactly like ordinary windows and support the same methods.\nIf you have multiple windows and pads on screen there is a more efficient way to update the screen and prevent annoying screen flicker as each part of the screen gets updated.
refresh()\nactually does two things:\n1. Calls the noutrefresh()\nmethod of each window to update an underlying data structure representing the desired state of the screen.\n2. Calls the doupdate()\nfunction to change the physical screen to match the desired state recorded in the data structure.\nInstead you can call noutrefresh()\non a number of windows to update the data structure, and then call doupdate()\nto update the screen.\nDisplaying Text\u00b6\nFrom a C programmer\u2019s point of view, curses may sometimes look like a twisty maze of functions, all subtly different. For example, addstr()\ndisplays a string at the current cursor location in the stdscr\nwindow, while mvaddstr()\nmoves to a given y,x coordinate first before displaying the string. waddstr()\nis just like addstr()\n, but allows specifying a window to use instead of using stdscr\nby default. mvwaddstr()\nallows specifying both a window and a coordinate.\nFortunately the Python interface hides all these details. stdscr\nis a window object like any other, and methods such as addstr()\naccept multiple argument forms. Usually there are four different forms.\n| Form | Description |\n|---|---|\n| str or ch | Display the string str or character ch at the current position |\n| str or ch, attr | Display the string str or character ch, using attribute attr at the current position |\n| y, x, str or ch | Move to position y,x within the window, and display str or ch |\n| y, x, str or ch, attr | Move to position y,x within the window, and display str or ch, using attribute attr |\nAttributes allow displaying text in highlighted forms such as boldface, underline, reverse video, or in color. They\u2019ll be explained in more detail in the next subsection.\nThe addstr()\nmethod takes a Python string or bytestring as the value to be displayed. The contents of bytestrings are sent to the terminal as-is.
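The four argument forms in the table above can be sketched in one function (the strings and coordinates are arbitrary; run it under wrapper() in a real terminal):

```python
import curses

def forms(stdscr):
    stdscr.addstr("plain")                         # str or ch
    stdscr.addstr(" bold", curses.A_BOLD)          # str or ch, attr
    stdscr.addstr(2, 4, "moved")                   # y, x, str or ch
    stdscr.addstr(3, 4, "moved + reverse",
                  curses.A_REVERSE)                # y, x, str or ch, attr
    stdscr.refresh()
    stdscr.getkey()

# curses.wrapper(forms)  # uncomment to run in a real terminal
```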
Strings are encoded to bytes using the value of the window\u2019s encoding\nattribute; this defaults to the default system encoding as returned by locale.getencoding()\n.\nThe addch()\nmethods take a character, which can be either a string of length 1, a bytestring of length 1, or an integer.\nConstants are provided for extension characters; these constants are integers greater than 255. For example, ACS_PLMINUS\nis a +/- symbol, and ACS_ULCORNER\nis the upper left corner of a box (handy for drawing borders). You can also use the appropriate Unicode character.\nWindows remember where the cursor was left after the last operation, so if you leave out the y,x coordinates, the string or character will be displayed wherever the last operation left off. You can also move the cursor with the move(y,x)\nmethod. Because some terminals always display a flashing cursor, you may want to ensure that the cursor is positioned in some location where it won\u2019t be distracting; it can be confusing to have the cursor blinking at some apparently random location.\nIf your application doesn\u2019t need a blinking cursor at all, you can call curs_set(False)\nto make it invisible. For compatibility with older curses versions, there\u2019s a leaveok(bool)\nfunction that\u2019s a parallel to curs_set()\n. When bool is true, the curses library will attempt to suppress the flashing cursor, and you won\u2019t need to worry about leaving it in odd locations.\nAttributes and Color\u00b6\nCharacters can be displayed in different ways. Status lines in a text-based application are commonly shown in reverse video, or a text viewer may need to highlight certain words. curses supports this by allowing you to specify an attribute for each cell on the screen.\nAn attribute is an integer, each bit representing a different attribute.
You can try to display text with multiple attribute bits set, but curses doesn\u2019t guarantee that all the possible combinations are available, or that they\u2019re all visually distinct. That depends on the ability of the terminal being used, so it\u2019s safest to stick to the most commonly available attributes, listed here.\n| Attribute | Description |\n|---|---|\n| A_BLINK | Blinking text |\n| A_BOLD | Extra bright or bold text |\n| A_DIM | Half bright text |\n| A_REVERSE | Reverse-video text |\n| A_STANDOUT | The best highlighting mode available |\n| A_UNDERLINE | Underlined text |\nSo, to display a reverse-video status line on the top line of the screen, you could code:\nstdscr.addstr(0, 0, \"Current mode: Typing mode\",\n              curses.A_REVERSE)\nstdscr.refresh()\nThe curses library also supports color on those terminals that provide it. The most common such terminal is probably the Linux console, followed by color xterms.\nTo use color, you must call the start_color()\nfunction soon after calling initscr()\n, to initialize the default color set (the curses.wrapper()\nfunction does this automatically). Once that\u2019s done, the has_colors()\nfunction returns True\nif the terminal in use can actually display color. (Note: curses uses the American spelling \u2018color\u2019, instead of the Canadian/British spelling \u2018colour\u2019. If you\u2019re used to the British spelling, you\u2019ll have to resign yourself to misspelling it for the sake of these functions.)\nThe curses library maintains a finite number of color pairs, containing a foreground (or text) color and a background color.
You can get the attribute\nvalue corresponding to a color pair with the color_pair()\nfunction; this can be bitwise-OR\u2019ed with other attributes such as\nA_REVERSE\n, but again, such combinations are not guaranteed to work\non all terminals.\nAn example, which displays a line of text using color pair 1:\nstdscr.addstr(\"Pretty text\", curses.color_pair(1))\nstdscr.refresh()\nAs I said before, a color pair consists of a foreground and background color.\nThe init_pair(n, f, b)\nfunction changes the definition of color pair n, to\nforeground color f and background color b. Color pair 0 is hard-wired to white\non black, and cannot be changed.\nColors are numbered, and start_color()\ninitializes 8 basic\ncolors when it activates color mode. They are: 0:black, 1:red,\n2:green, 3:yellow, 4:blue, 5:magenta, 6:cyan, and 7:white. The curses\nmodule defines named constants for each of these colors:\ncurses.COLOR_BLACK\n, curses.COLOR_RED\n, and so forth.\nLet\u2019s put all this together. To change color 1 to red text on a white background, you would call:\ncurses.init_pair(1, curses.COLOR_RED, curses.COLOR_WHITE)\nWhen you change a color pair, any text already displayed using that color pair will change to the new colors. You can also display new text in this color with:\nstdscr.addstr(0,0, \"RED ALERT!\", curses.color_pair(1))\nVery fancy terminals can change the definitions of the actual colors to a given\nRGB value. This lets you change color 1, which is usually red, to purple or\nblue or any other color you like. Unfortunately, the Linux console doesn\u2019t\nsupport this, so I\u2019m unable to try it out, and can\u2019t provide any examples. You\ncan check if your terminal can do this by calling\ncan_change_color()\n, which returns True\nif the capability is\nthere. If you\u2019re lucky enough to have such a talented terminal, consult your\nsystem\u2019s man pages for more information.\nUser Input\u00b6\nThe C curses library offers only very simple input mechanisms. 
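Before moving on to input, the color calls above can be combined into a single runnable sketch (pair number 1 and the color choices are arbitrary):

```python
import curses

def alert(stdscr):
    # wrapper() has already called start_color() for us.
    if curses.has_colors():
        # Define pair 1 as red text on a white background
        # (pair 0 stays hard-wired to white on black).
        curses.init_pair(1, curses.COLOR_RED, curses.COLOR_WHITE)
        # A color pair attribute can be bitwise-OR'ed with others.
        stdscr.addstr(0, 0, "RED ALERT!", curses.color_pair(1) | curses.A_BOLD)
    else:
        stdscr.addstr(0, 0, "RED ALERT!", curses.A_REVERSE)
    stdscr.refresh()
    stdscr.getkey()

# curses.wrapper(alert)  # uncomment to run in a real terminal
```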
Python\u2019s\ncurses\nmodule adds a basic text-input widget. (Other libraries such as Urwid have more extensive collections of widgets.)\nThere are two methods for getting input from a window:\n- getch()\nrefreshes the screen and then waits for the user to hit a key, displaying the key if echo()\nhas been called earlier. You can optionally specify a coordinate to which the cursor should be moved before pausing.\n- getkey()\ndoes the same thing but converts the integer to a string. Individual characters are returned as 1-character strings, and special keys such as function keys return longer strings containing a key name such as KEY_UP\nor ^G\n.\nIt\u2019s possible to not wait for the user using the nodelay()\nwindow method. After nodelay(True)\n, getch()\nand getkey()\nfor the window become non-blocking. To signal that no input is ready, getch()\nreturns curses.ERR\n(a value of -1) and getkey()\nraises an exception.\nThere\u2019s also a halfdelay()\nfunction, which can be used to (in effect) set a timer on each getch()\n; if no input becomes available within a specified delay (measured in tenths of a second), curses raises an exception.\nThe getch()\nmethod returns an integer; if it\u2019s between 0 and 255, it represents the ASCII code of the key pressed. Values greater than 255 are special keys such as Page Up, Home, or the cursor keys. You can compare the value returned to constants such as curses.KEY_PPAGE\n, curses.KEY_HOME\n, or curses.KEY_LEFT\n. The main loop of your program may look something like this:\nwhile True:\n    c = stdscr.getch()\n    if c == ord('p'):\n        PrintDocument()\n    elif c == ord('q'):\n        break  # Exit the while loop\n    elif c == curses.KEY_HOME:\n        x = y = 0\nThe curses.ascii\nmodule supplies ASCII class membership functions that take either integer or 1-character string arguments; these may be useful in writing more readable tests for such loops.
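The non-blocking behaviour of nodelay() described above can be sketched as a polling loop; do_background_work is a hypothetical placeholder for whatever your program does between keypresses:

```python
import curses

def poll(stdscr):
    stdscr.nodelay(True)        # getch()/getkey() no longer block
    while True:
        c = stdscr.getch()
        if c == curses.ERR:     # -1: no key is waiting
            pass                # do_background_work() would go here
        elif c == ord('q'):
            break               # quit on 'q'

# curses.wrapper(poll)  # uncomment to run in a real terminal
```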
It also supplies conversion functions that take either integer or 1-character-string arguments and return the same type. For example, curses.ascii.ctrl()\nreturns the control character corresponding to its argument.\nThere\u2019s also a method to retrieve an entire string, getstr()\n. It isn\u2019t used very often, because its functionality is quite limited; the only editing keys available are the backspace key and the Enter key, which terminates the string. It can optionally be limited to a fixed number of characters.\ncurses.echo()            # Enable echoing of characters\n\n# Get a 15-character string, with the cursor on the top line\ns = stdscr.getstr(0,0, 15)\nThe curses.textpad\nmodule supplies a text box that supports an Emacs-like set of keybindings. Various methods of the Textbox\nclass support editing with input validation and gathering the edit results either with or without trailing spaces. Here\u2019s an example:\nimport curses\nfrom curses.textpad import Textbox, rectangle\n\ndef main(stdscr):\n    stdscr.addstr(0, 0, \"Enter IM message: (hit Ctrl-G to send)\")\n\n    editwin = curses.newwin(5,30, 2,1)\n    rectangle(stdscr, 1,0, 1+5+1, 1+30+1)\n    stdscr.refresh()\n\n    box = Textbox(editwin)\n\n    # Let the user edit until Ctrl-G is struck.\n    box.edit()\n\n    # Get resulting contents\n    message = box.gather()\nSee the library documentation on curses.textpad\nfor more details.\nFor More Information\u00b6\nThis HOWTO doesn\u2019t cover some advanced topics, such as reading the contents of the screen or capturing mouse events from an xterm instance, but the Python library page for the curses\nmodule is now reasonably complete. You should browse it next.\nIf you\u2019re in doubt about the detailed behavior of the curses functions, consult the manual pages for your curses implementation, whether it\u2019s ncurses or a proprietary Unix vendor\u2019s.
The manual pages will document any quirks, and provide complete lists of all the functions, attributes, and ACS_* characters available to you.\nBecause the curses API is so large, some functions aren\u2019t supported in the Python interface. Often this isn\u2019t because they\u2019re difficult to implement, but because no one has needed them yet. Also, Python doesn\u2019t yet support the menu library associated with ncurses. Patches adding support for these would be welcome; see the Python Developer\u2019s Guide to learn more about submitting patches to Python.\nWriting Programs with NCURSES: a lengthy tutorial for C programmers.\n\u201cUse curses\u2026 don\u2019t swear\u201d: video of a PyCon 2013 talk on controlling terminals using curses or Urwid.\n\u201cConsole Applications with Urwid\u201d: video of a PyCon CA 2012 talk demonstrating some applications written using Urwid.", "code_snippets": ["\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", "\n\n", "\n ", "\n ", "\n\n ", "\n ", " ", " ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n\n ", "\n ", "\n\n", "\n", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", " ", " ", " ", "\n", "\n", "\n", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", "\n\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", " ", "\n", " ", " ", "\n ", "\n", "\n", " ", "\n", "\n", " ", " ", "\n", " ", " ", "\n", " ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", "\n ", " ", " ", " ", "\n ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", " ", "\n", " ", "\n\n", "\n", " ", " ", " ", "\n", "\n", " ", " ", "\n\n", "\n ", " ", " ", "\n\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", "\n\n ", " ", " ", "\n\n ", "\n ", "\n\n ", "\n ", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 5284} +{"url": "https://docs.python.org/3/library/email.generator.html", "title": ": Generating MIME documents", "content": 
"email.generator\n: Generating MIME documents\u00b6\nSource code: Lib/email/generator.py\nOne of the most common tasks is to generate the flat (serialized) version of\nthe email message represented by a message object structure. You will need to\ndo this if you want to send your message via smtplib.SMTP.sendmail()\n,\nor print the message on the console. Taking a\nmessage object structure and producing a serialized representation is the job\nof the generator classes.\nAs with the email.parser\nmodule, you aren\u2019t limited to the functionality\nof the bundled generator; you could write one from scratch yourself. However\nthe bundled generator knows how to generate most email in a standards-compliant\nway, should handle MIME and non-MIME email messages just fine, and is designed\nso that the bytes-oriented parsing and generation operations are inverses,\nassuming the same non-transforming policy\nis used for both. That\nis, parsing the serialized byte stream via the\nBytesParser\nclass and then regenerating the serialized\nbyte stream using BytesGenerator\nshould produce output identical to\nthe input [1]. 
(On the other hand, using the generator on an EmailMessage\nconstructed by a program may result in changes to the EmailMessage\nobject as defaults are filled in.)\nThe Generator\nclass can be used to flatten a message into a text (as opposed to binary) serialized representation, but since Unicode cannot represent binary data directly, the message is of necessity transformed into something that contains only ASCII characters, using the standard email RFC Content Transfer Encoding techniques for encoding email messages for transport over channels that are not \u201c8 bit clean\u201d.\nTo accommodate reproducible processing of SMIME-signed messages Generator\ndisables header folding for message parts of type multipart/signed\nand all subparts.\n- class email.generator.BytesGenerator(outfp, mangle_from_=None, maxheaderlen=None, *, policy=None)\u00b6\nReturn a BytesGenerator\nobject that will write any message provided to the flatten()\nmethod, or any surrogateescape encoded text provided to the write()\nmethod, to the file-like object outfp. outfp must support a write\nmethod that accepts binary data.\nIf optional mangle_from_ is True\n, put a >\ncharacter in front of any line in the body that starts with the exact string \"From \"\n, that is From\nfollowed by a space at the beginning of a line. mangle_from_ defaults to the value of the mangle_from_\nsetting of the policy (which is True\nfor the compat32\npolicy and False\nfor all others). mangle_from_ is intended for use when messages are stored in Unix mbox format (see mailbox\nand WHY THE CONTENT-LENGTH FORMAT IS BAD).\nIf maxheaderlen is not None\n, refold any header lines that are longer than maxheaderlen, or if 0\n, do not rewrap any headers. If maxheaderlen is None\n(the default), wrap headers and other message lines according to the policy settings.\nIf policy is specified, use that policy to control message generation.
If policy is None\n(the default), use the policy associated with the Message\nor EmailMessage\nobject passed to flatten\nto control the message generation. See email.policy\nfor details on what policy controls.\nAdded in version 3.2.\nChanged in version 3.3: Added the policy keyword.\nChanged in version 3.6: The default behavior of the mangle_from_ and maxheaderlen parameters is to follow the policy.\n- flatten(msg, unixfrom=False, linesep=None)\u00b6\nPrint the textual representation of the message object structure rooted at msg to the output file specified when the BytesGenerator\ninstance was created.\nIf the policy\noption cte_type\nis 8bit\n(the default), copy any headers in the original parsed message that have not been modified to the output with any bytes with the high bit set reproduced as in the original, and preserve the non-ASCII Content-Transfer-Encoding of any body parts that have them. If cte_type\nis 7bit\n, convert the bytes with the high bit set as needed using an ASCII-compatible Content-Transfer-Encoding. That is, transform parts with non-ASCII Content-Transfer-Encoding (Content-Transfer-Encoding: 8bit) to an ASCII compatible Content-Transfer-Encoding, and encode RFC-invalid non-ASCII bytes in headers using the MIME unknown-8bit\ncharacter set, thus rendering them RFC-compliant.\nIf unixfrom is True\n, print the envelope header delimiter used by the Unix mailbox format (see mailbox\n) before the first of the RFC 5322 headers of the root message object. If the root object has no envelope header, craft a standard one. The default is False\n. Note that for subparts, no envelope header is ever printed.\nIf linesep is not None\n, use it as the separator character between all the lines of the flattened message.
If linesep is None\n(the default), use the value specified in the policy.\n- clone(fp)\u00b6\nReturn an independent clone of this BytesGenerator\ninstance with the exact same option settings, and fp as the new outfp.\n- write(s)\u00b6\nEncode s using the ASCII\ncodec and the surrogateescape\nerror handler, and pass it to the write method of the outfp passed to the BytesGenerator\n\u2019s constructor.\nAs a convenience, EmailMessage\nprovides the methods as_bytes()\nand bytes(aMessage)\n(a.k.a. __bytes__()\n), which simplify the generation of a serialized binary representation of a message object. For more detail, see email.message\n.\nBecause strings cannot represent binary data, the Generator\nclass must convert any binary data in any message it flattens to an ASCII compatible format, by converting them to an ASCII compatible Content-Transfer-Encoding. Using the terminology of the email RFCs, you can think of this as Generator\nserializing to an I/O stream that is not \u201c8 bit clean\u201d. In other words, most applications will want to be using BytesGenerator\n, and not Generator\n.\n- class email.generator.Generator(outfp, mangle_from_=None, maxheaderlen=None, *, policy=None)\u00b6\nReturn a Generator\nobject that will write any message provided to the flatten()\nmethod, or any text provided to the write()\nmethod, to the file-like object outfp. outfp must support a write\nmethod that accepts string data.\nIf optional mangle_from_ is True\n, put a >\ncharacter in front of any line in the body that starts with the exact string \"From \"\n, that is From\nfollowed by a space at the beginning of a line. mangle_from_ defaults to the value of the mangle_from_\nsetting of the policy (which is True\nfor the compat32\npolicy and False\nfor all others).
mangle_from_ is intended for use when messages are stored in Unix mbox format (see mailbox\nand WHY THE CONTENT-LENGTH FORMAT IS BAD).\nIf maxheaderlen is not None\n, refold any header lines that are longer than maxheaderlen, or if 0\n, do not rewrap any headers. If maxheaderlen is None\n(the default), wrap headers and other message lines according to the policy settings.\nIf policy is specified, use that policy to control message generation. If policy is None\n(the default), use the policy associated with the Message\nor EmailMessage\nobject passed to flatten\nto control the message generation. See email.policy\nfor details on what policy controls.\nChanged in version 3.3: Added the policy keyword.\nChanged in version 3.6: The default behavior of the mangle_from_ and maxheaderlen parameters is to follow the policy.\n- flatten(msg, unixfrom=False, linesep=None)\u00b6\nPrint the textual representation of the message object structure rooted at msg to the output file specified when the Generator\ninstance was created.\nIf the policy\noption cte_type\nis 8bit\n, generate the message as if the option were set to 7bit\n. (This is required because strings cannot represent non-ASCII bytes.) Convert any bytes with the high bit set as needed using an ASCII-compatible Content-Transfer-Encoding. That is, transform parts with non-ASCII Content-Transfer-Encoding (Content-Transfer-Encoding: 8bit) to an ASCII compatible Content-Transfer-Encoding, and encode RFC-invalid non-ASCII bytes in headers using the MIME unknown-8bit\ncharacter set, thus rendering them RFC-compliant.\nIf unixfrom is True\n, print the envelope header delimiter used by the Unix mailbox format (see mailbox\n) before the first of the RFC 5322 headers of the root message object. If the root object has no envelope header, craft a standard one. The default is False\n. Note that for subparts, no envelope header is ever printed.\nIf linesep is not None\n, use it as the separator character between all the lines of the flattened message.
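As a runnable sketch of the flattening workflow these classes provide (the addresses and subject are invented for illustration; policy.SMTP is chosen here to get CRLF line endings):

```python
import io
from email import policy
from email.generator import BytesGenerator
from email.message import EmailMessage

# Build a small message (all field values are illustrative).
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello"
msg.set_content("Hi Bob,\nsee you soon.")

buf = io.BytesIO()
# policy.SMTP serializes with \r\n line separators, as required on the wire.
BytesGenerator(buf, policy=policy.SMTP).flatten(msg)
wire_bytes = buf.getvalue()
```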
If linesep is None\n(the default), use the value specified in the policy.\nChanged in version 3.2: Added support for re-encoding 8bit\nmessage bodies, and the linesep argument.\nAs a convenience, EmailMessage\nprovides the methods as_string()\nand str(aMessage)\n(a.k.a. __str__()\n), which simplify the generation of a formatted string representation of a message object. For more detail, see email.message\n.\nThe email.generator\nmodule also provides a derived class, DecodedGenerator\n, which is like the Generator\nbase class, except that non-text parts are not serialized, but are instead represented in the output stream by a string derived from a template filled in with information about the part.\n- class email.generator.DecodedGenerator(outfp, mangle_from_=None, maxheaderlen=None, fmt=None, *, policy=None)\u00b6\nAct like Generator\n, except that for any subpart of the message passed to Generator.flatten()\n, if the subpart is of main type text, print the decoded payload of the subpart, and if the main type is not text, instead of printing it fill in the string fmt using information from the part and print the resulting filled-in string.\nTo fill in fmt, execute fmt % part_info\n, where part_info\nis a dictionary composed of the following keys and values:\n- type\n\u2013 Full MIME type of the non-text part\n- maintype\n\u2013 Main MIME type of the non-text part\n- subtype\n\u2013 Sub-MIME type of the non-text part\n- filename\n\u2013 Filename of the non-text part\n- description\n\u2013 Description associated with the non-text part\n- encoding\n\u2013 Content transfer encoding of the non-text part\nIf fmt is None\n, use the following default fmt:\n\u201c[Non-text (%(type)s) part of message omitted, filename %(filename)s]\u201d\nOptional mangle_from_ and maxheaderlen are as with the Generator\nbase class.\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2513}
+{"url": "https://docs.python.org/3/c-api/memory.html", "title": "Memory 
Management", "content": "Memory Management\u00b6\nOverview\u00b6\nMemory management in Python involves a private heap containing all Python objects and data structures. The management of this private heap is ensured internally by the Python memory manager. The Python memory manager has different components which deal with various dynamic storage management aspects, like sharing, segmentation, preallocation or caching.\nAt the lowest level, a raw memory allocator ensures that there is enough room in the private heap for storing all Python-related data by interacting with the memory manager of the operating system. On top of the raw memory allocator, several object-specific allocators operate on the same heap and implement distinct memory management policies adapted to the peculiarities of every object type. For example, integer objects are managed differently within the heap than strings, tuples or dictionaries because integers imply different storage requirements and speed/space tradeoffs. The Python memory manager thus delegates some of the work to the object-specific allocators, but ensures that the latter operate within the bounds of the private heap.\nIt is important to understand that the management of the Python heap is performed by the interpreter itself and that the user has no control over it, even if they regularly manipulate object pointers to memory blocks inside that heap. The allocation of heap space for Python objects and other internal buffers is performed on demand by the Python memory manager through the Python/C API functions listed in this document.\nTo avoid memory corruption, extension writers should never try to operate on\nPython objects with the functions exported by the C library: malloc()\n,\ncalloc()\n, realloc()\nand free()\n. This will result in mixed\ncalls between the C allocator and the Python memory manager with fatal\nconsequences, because they implement different algorithms and operate on\ndifferent heaps. 
However, one may safely allocate and release memory blocks with the C library allocator for individual purposes, as shown in the following example:\nPyObject *res;\nchar *buf = (char *) malloc(BUFSIZ); /* for I/O */\n\nif (buf == NULL)\n    return PyErr_NoMemory();\n...Do some I/O operation involving buf...\nres = PyBytes_FromString(buf);\nfree(buf); /* malloc'ed */\nreturn res;\nIn this example, the memory request for the I/O buffer is handled by the C library allocator. The Python memory manager is involved only in the allocation of the bytes object returned as a result.\nIn most situations, however, it is recommended to allocate memory from the Python heap specifically because the latter is under control of the Python memory manager. For example, this is required when the interpreter is extended with new object types written in C. Another reason for using the Python heap is the desire to inform the Python memory manager about the memory needs of the extension module. Even when the requested memory is used exclusively for internal, highly specific purposes, delegating all memory requests to the Python memory manager causes the interpreter to have a more accurate image of its memory footprint as a whole. Consequently, under certain circumstances, the Python memory manager may or may not trigger appropriate actions, like garbage collection, memory compaction or other preventive procedures.
Note that by using the C library allocator as shown in the previous example, the allocated memory for the I/O buffer escapes completely the Python memory manager.\nSee also\nThe PYTHONMALLOC\nenvironment variable can be used to configure\nthe memory allocators used by Python.\nThe PYTHONMALLOCSTATS\nenvironment variable can be used to print\nstatistics of the pymalloc memory allocator every time a\nnew pymalloc object arena is created, and on shutdown.\nAllocator Domains\u00b6\nAll allocating functions belong to one of three different \u201cdomains\u201d (see also\nPyMemAllocatorDomain\n). These domains represent different allocation\nstrategies and are optimized for different purposes. The specific details on\nhow every domain allocates memory or what internal functions each domain calls\nis considered an implementation detail, but for debugging purposes a simplified\ntable can be found at Default Memory Allocators.\nThe APIs used to allocate and free a block of memory must be from the same domain.\nFor example, PyMem_Free()\nmust be used to free memory allocated using PyMem_Malloc()\n.\nThe three allocation domains are:\nRaw domain: intended for allocating memory for general-purpose memory buffers where the allocation must go to the system allocator or where the allocator can operate without an attached thread state. The memory is requested directly from the system. See Raw Memory Interface.\n\u201cMem\u201d domain: intended for allocating memory for Python buffers and general-purpose memory buffers where the allocation must be performed with an attached thread state. The memory is taken from the Python private heap. See Memory Interface.\nObject domain: intended for allocating memory for Python objects. The memory is taken from the Python private heap. See Object allocators.\nNote\nThe free-threaded build requires that only Python objects are allocated using the \u201cobject\u201d domain and that all Python objects are allocated using that domain. 
This differs from the prior Python versions, where this was only a best practice and not a hard requirement.\nFor example, buffers (non-Python objects) should be allocated using PyMem_Malloc()\n,\nPyMem_RawMalloc()\n, or malloc()\n, but not PyObject_Malloc()\n.\nRaw Memory Interface\u00b6\nThe following function sets are wrappers to the system allocator. These functions are thread-safe, so a thread state does not need to be attached.\nThe default raw memory allocator uses\nthe following functions: malloc()\n, calloc()\n, realloc()\nand free()\n; call malloc(1)\n(or calloc(1, 1)\n) when requesting\nzero bytes.\nAdded in version 3.4.\n-\nvoid *PyMem_RawMalloc(size_t n)\u00b6\n- Part of the Stable ABI since version 3.13.\nAllocates n bytes and returns a pointer of type void* to the allocated memory, or\nNULL\nif the request fails.Requesting zero bytes returns a distinct non-\nNULL\npointer if possible, as ifPyMem_RawMalloc(1)\nhad been called instead. The memory will not have been initialized in any way.\n-\nvoid *PyMem_RawCalloc(size_t nelem, size_t elsize)\u00b6\n- Part of the Stable ABI since version 3.13.\nAllocates nelem elements each whose size in bytes is elsize and returns a pointer of type void* to the allocated memory, or\nNULL\nif the request fails. The memory is initialized to zeros.Requesting zero elements or elements of size zero bytes returns a distinct non-\nNULL\npointer if possible, as ifPyMem_RawCalloc(1, 1)\nhad been called instead.Added in version 3.5.\n-\nvoid *PyMem_RawRealloc(void *p, size_t n)\u00b6\n- Part of the Stable ABI since version 3.13.\nResizes the memory block pointed to by p to n bytes. 
The contents will be unchanged to the minimum of the old and the new sizes.\nIf p is\nNULL\n, the call is equivalent toPyMem_RawMalloc(n)\n; else if n is equal to zero, the memory block is resized but is not freed, and the returned pointer is non-NULL\n.Unless p is\nNULL\n, it must have been returned by a previous call toPyMem_RawMalloc()\n,PyMem_RawRealloc()\norPyMem_RawCalloc()\n.If the request fails,\nPyMem_RawRealloc()\nreturnsNULL\nand p remains a valid pointer to the previous memory area.\n-\nvoid PyMem_RawFree(void *p)\u00b6\n- Part of the Stable ABI since version 3.13.\nFrees the memory block pointed to by p, which must have been returned by a previous call to\nPyMem_RawMalloc()\n,PyMem_RawRealloc()\norPyMem_RawCalloc()\n. Otherwise, or ifPyMem_RawFree(p)\nhas been called before, undefined behavior occurs.If p is\nNULL\n, no operation is performed.\nMemory Interface\u00b6\nThe following function sets, modeled after the ANSI C standard, but specifying behavior when requesting zero bytes, are available for allocating and releasing memory from the Python heap.\nThe default memory allocator uses the pymalloc memory allocator.\nWarning\nThere must be an attached thread state when using these functions.\nChanged in version 3.6: The default allocator is now pymalloc instead of system malloc()\n.\n-\nvoid *PyMem_Malloc(size_t n)\u00b6\n- Part of the Stable ABI.\nAllocates n bytes and returns a pointer of type void* to the allocated memory, or\nNULL\nif the request fails.Requesting zero bytes returns a distinct non-\nNULL\npointer if possible, as ifPyMem_Malloc(1)\nhad been called instead. The memory will not have been initialized in any way.\n-\nvoid *PyMem_Calloc(size_t nelem, size_t elsize)\u00b6\n- Part of the Stable ABI since version 3.7.\nAllocates nelem elements each whose size in bytes is elsize and returns a pointer of type void* to the allocated memory, or\nNULL\nif the request fails. 
The memory is initialized to zeros.Requesting zero elements or elements of size zero bytes returns a distinct non-\nNULL\npointer if possible, as ifPyMem_Calloc(1, 1)\nhad been called instead.Added in version 3.5.\n-\nvoid *PyMem_Realloc(void *p, size_t n)\u00b6\n- Part of the Stable ABI.\nResizes the memory block pointed to by p to n bytes. The contents will be unchanged to the minimum of the old and the new sizes.\nIf p is\nNULL\n, the call is equivalent toPyMem_Malloc(n)\n; else if n is equal to zero, the memory block is resized but is not freed, and the returned pointer is non-NULL\n.Unless p is\nNULL\n, it must have been returned by a previous call toPyMem_Malloc()\n,PyMem_Realloc()\norPyMem_Calloc()\n.If the request fails,\nPyMem_Realloc()\nreturnsNULL\nand p remains a valid pointer to the previous memory area.\n-\nvoid PyMem_Free(void *p)\u00b6\n- Part of the Stable ABI.\nFrees the memory block pointed to by p, which must have been returned by a previous call to\nPyMem_Malloc()\n,PyMem_Realloc()\norPyMem_Calloc()\n. Otherwise, or ifPyMem_Free(p)\nhas been called before, undefined behavior occurs.If p is\nNULL\n, no operation is performed.\nThe following type-oriented macros are provided for convenience. Note that TYPE refers to any C type.\n-\nPyMem_New(TYPE, n)\u00b6\nSame as\nPyMem_Malloc()\n, but allocates(n * sizeof(TYPE))\nbytes of memory. Returns a pointer cast toTYPE*\n. The memory will not have been initialized in any way.\n-\nPyMem_Resize(p, TYPE, n)\u00b6\nSame as\nPyMem_Realloc()\n, but the memory block is resized to(n * sizeof(TYPE))\nbytes. Returns a pointer cast toTYPE*\n. On return, p will be a pointer to the new memory area, orNULL\nin the event of failure.This is a C preprocessor macro; p is always reassigned. Save the original value of p to avoid losing memory when handling errors.\n-\nvoid PyMem_Del(void *p)\u00b6\nSame as\nPyMem_Free()\n.\nDeprecated aliases\u00b6\nThese are soft deprecated aliases to existing functions and macros. 
They exist solely for backwards compatibility.\nDeprecated alias |\nCorresponding function or macro |\n|---|---|\n|\n|\n|\n|\n|\n|\n|\n|\n|\n|\n|\nChanged in version 3.4: The macros are now aliases of the corresponding functions and macros. Previously, their behavior was the same, but their use did not necessarily preserve binary compatibility across Python versions.\nDeprecated since version 2.0.\nObject allocators\u00b6\nThe following function sets, modeled after the ANSI C standard, but specifying behavior when requesting zero bytes, are available for allocating and releasing memory from the Python heap.\nNote\nThere is no guarantee that the memory returned by these allocators can be successfully cast to a Python object when intercepting the allocating functions in this domain by the methods described in the Customize Memory Allocators section.\nThe default object allocator uses the pymalloc memory allocator.\nWarning\nThere must be an attached thread state when using these functions.\n-\nvoid *PyObject_Malloc(size_t n)\u00b6\n- Part of the Stable ABI.\nAllocates n bytes and returns a pointer of type void* to the allocated memory, or\nNULL\nif the request fails.Requesting zero bytes returns a distinct non-\nNULL\npointer if possible, as ifPyObject_Malloc(1)\nhad been called instead. The memory will not have been initialized in any way.\n-\nvoid *PyObject_Calloc(size_t nelem, size_t elsize)\u00b6\n- Part of the Stable ABI since version 3.7.\nAllocates nelem elements each whose size in bytes is elsize and returns a pointer of type void* to the allocated memory, or\nNULL\nif the request fails. The memory is initialized to zeros.Requesting zero elements or elements of size zero bytes returns a distinct non-\nNULL\npointer if possible, as ifPyObject_Calloc(1, 1)\nhad been called instead.Added in version 3.5.\n-\nvoid *PyObject_Realloc(void *p, size_t n)\u00b6\n- Part of the Stable ABI.\nResizes the memory block pointed to by p to n bytes. 
The contents will be unchanged to the minimum of the old and the new sizes.\nIf p is\nNULL\n, the call is equivalent toPyObject_Malloc(n)\n; else if n is equal to zero, the memory block is resized but is not freed, and the returned pointer is non-NULL\n.Unless p is\nNULL\n, it must have been returned by a previous call toPyObject_Malloc()\n,PyObject_Realloc()\norPyObject_Calloc()\n.If the request fails,\nPyObject_Realloc()\nreturnsNULL\nand p remains a valid pointer to the previous memory area.\n-\nvoid PyObject_Free(void *p)\u00b6\n- Part of the Stable ABI.\nFrees the memory block pointed to by p, which must have been returned by a previous call to\nPyObject_Malloc()\n,PyObject_Realloc()\norPyObject_Calloc()\n. Otherwise, or ifPyObject_Free(p)\nhas been called before, undefined behavior occurs.If p is\nNULL\n, no operation is performed.Do not call this directly to free an object\u2019s memory; call the type\u2019s\ntp_free\nslot instead.Do not use this for memory allocated by\nPyObject_GC_New\norPyObject_GC_NewVar\n; usePyObject_GC_Del()\ninstead.See also\nPyObject_GC_Del()\nis the equivalent of this function for memory allocated by types that support garbage collection.\nDefault Memory Allocators\u00b6\nDefault memory allocators:\nConfiguration |\nName |\nPyMem_RawMalloc |\nPyMem_Malloc |\nPyObject_Malloc |\n|---|---|---|---|---|\nRelease build |\n|\n|\n|\n|\nDebug build |\n|\n|\n|\n|\nRelease build, without pymalloc |\n|\n|\n|\n|\nDebug build, without pymalloc |\n|\n|\n|\n|\nLegend:\nName: value for\nPYTHONMALLOC\nenvironment variable.malloc\n: system allocators from the standard C library, C functions:malloc()\n,calloc()\n,realloc()\nandfree()\n.pymalloc\n: pymalloc memory allocator.mimalloc\n: mimalloc memory allocator. 
The pymalloc allocator will be used if mimalloc support isn\u2019t available.\u201c+ debug\u201d: with debug hooks on the Python memory allocators.\n\u201cDebug build\u201d: Python build in debug mode.\nCustomize Memory Allocators\u00b6\nAdded in version 3.4.\n-\ntype PyMemAllocatorEx\u00b6\nStructure used to describe a memory block allocator. The structure has the following fields:\nField\nMeaning\nvoid *ctx\nuser context passed as first argument\nvoid* malloc(void *ctx, size_t size)\nallocate a memory block\nvoid* calloc(void *ctx, size_t nelem, size_t elsize)\nallocate a memory block initialized with zeros\nvoid* realloc(void *ctx, void *ptr, size_t new_size)\nallocate or resize a memory block\nvoid free(void *ctx, void *ptr)\nfree a memory block\nChanged in version 3.5: The\nPyMemAllocator\nstructure was renamed toPyMemAllocatorEx\nand a newcalloc\nfield was added.\n-\ntype PyMemAllocatorDomain\u00b6\nEnum used to identify an allocator domain. Domains:\n-\nPYMEM_DOMAIN_RAW\u00b6\nFunctions:\n-\nPYMEM_DOMAIN_MEM\u00b6\nFunctions:\n-\nPYMEM_DOMAIN_OBJ\u00b6\nFunctions:\n-\nvoid PyMem_GetAllocator(PyMemAllocatorDomain domain, PyMemAllocatorEx *allocator)\u00b6\nGet the memory block allocator of the specified domain.\n-\nvoid PyMem_SetAllocator(PyMemAllocatorDomain domain, PyMemAllocatorEx *allocator)\u00b6\nSet the memory block allocator of the specified domain.\nThe new allocator must return a distinct non-\nNULL\npointer when requesting zero bytes.For the\nPYMEM_DOMAIN_RAW\ndomain, the allocator must be thread-safe: a thread state is not attached when the allocator is called.For the remaining domains, the allocator must also be thread-safe: the allocator may be called in different interpreters that do not share a GIL.\nIf the new allocator is not a hook (does not call the previous allocator), the\nPyMem_SetupDebugHooks()\nfunction must be called to reinstall the debug hooks on top of the new allocator.See
also\nPyPreConfig.allocator\nand Preinitialize Python with PyPreConfig.Warning\nPyMem_SetAllocator()\ndoes have the following contract:It can be called after\nPy_PreInitialize()\nand beforePy_InitializeFromConfig()\nto install a custom memory allocator. There are no restrictions over the installed allocator other than the ones imposed by the domain (for instance, the Raw Domain allows the allocator to be called without an attached thread state). See the section on allocator domains for more information.If called after Python has finished initializing (after\nPy_InitializeFromConfig()\nhas been called) the allocator must wrap the existing allocator. Substituting the current allocator for some other arbitrary one is not supported.\nChanged in version 3.12: All allocators must be thread-safe.\n-\nvoid PyMem_SetupDebugHooks(void)\u00b6\nSet up debug hooks in the Python memory allocators to detect memory errors.\nDebug hooks on the Python memory allocators\u00b6\nWhen Python is built in debug mode, the\nPyMem_SetupDebugHooks()\nfunction is called at the Python\npreinitialization to set up debug hooks on Python memory allocators\nto detect memory errors.\nThe PYTHONMALLOC\nenvironment variable can be used to install debug\nhooks on a Python compiled in release mode (ex: PYTHONMALLOC=debug\n).\nThe PyMem_SetupDebugHooks()\nfunction can be used to set debug hooks\nafter calling PyMem_SetAllocator()\n.\nThese debug hooks fill dynamically allocated memory blocks with special,\nrecognizable bit patterns. Newly allocated memory is filled with the byte\n0xCD\n(PYMEM_CLEANBYTE\n), freed memory is filled with the byte 0xDD\n(PYMEM_DEADBYTE\n). Memory blocks are surrounded by \u201cforbidden bytes\u201d\nfilled with the byte 0xFD\n(PYMEM_FORBIDDENBYTE\n). Strings of these bytes\nare unlikely to be valid addresses, floats, or ASCII strings.\nRuntime checks:\nDetect API violations.
For example, detect if\nPyObject_Free()\nis called on a memory block allocated byPyMem_Malloc()\n.Detect write before the start of the buffer (buffer underflow).\nDetect write after the end of the buffer (buffer overflow).\nCheck that there is an attached thread state when allocator functions of\nPYMEM_DOMAIN_OBJ\n(ex:PyObject_Malloc()\n) andPYMEM_DOMAIN_MEM\n(ex:PyMem_Malloc()\n) domains are called.\nOn error, the debug hooks use the tracemalloc\nmodule to get the\ntraceback where a memory block was allocated. The traceback is only displayed\nif tracemalloc\nis tracing Python memory allocations and the memory block\nwas traced.\nLet S = sizeof(size_t)\n. 2*S\nbytes are added at each end of each block\nof N bytes requested. The memory layout is like so, where p represents the\naddress returned by a malloc-like or realloc-like function (p[i:j]\nmeans\nthe slice of bytes from *(p+i)\ninclusive up to *(p+j)\nexclusive; note\nthat the treatment of negative indices differs from a Python slice):\np[-2*S:-S]\nNumber of bytes originally asked for. This is a size_t, big-endian (easier to read in a memory dump).\np[-S]\nAPI identifier (ASCII character):\n'r'\nforPYMEM_DOMAIN_RAW\n.'m'\nforPYMEM_DOMAIN_MEM\n.'o'\nforPYMEM_DOMAIN_OBJ\n.\np[-S+1:0]\nCopies of PYMEM_FORBIDDENBYTE. Used to catch under-writes and reads.\np[0:N]\nThe requested memory, filled with copies of PYMEM_CLEANBYTE, used to catch reference to uninitialized memory. When a realloc-like function is called requesting a larger memory block, the new excess bytes are also filled with PYMEM_CLEANBYTE. When a free-like function is called, these are overwritten with PYMEM_DEADBYTE, to catch reference to freed memory. When a realloc-like function is called requesting a smaller memory block, the excess old bytes are also filled with PYMEM_DEADBYTE.\np[N:N+S]\nCopies of PYMEM_FORBIDDENBYTE.
Used to catch over-writes and reads.\np[N+S:N+2*S]\nOnly used if the\nPYMEM_DEBUG_SERIALNO\nmacro is defined (not defined by default).A serial number, incremented by 1 on each call to a malloc-like or realloc-like function. Big-endian\nsize_t\n. If \u201cbad memory\u201d is detected later, the serial number gives an excellent way to set a breakpoint on the next run, to capture the instant at which this block was passed out. The static function bumpserialno() in obmalloc.c is the only place the serial number is incremented, and exists so you can set such a breakpoint easily.\nA realloc-like or free-like function first checks that the PYMEM_FORBIDDENBYTE bytes at each end are intact. If they\u2019ve been altered, diagnostic output is written to stderr, and the program is aborted via Py_FatalError(). The other main failure mode is provoking a memory error when a program reads one of the special bit patterns and tries to use it as an address. If you then get into a debugger and look at the object, you\u2019re likely to see that it\u2019s entirely filled with PYMEM_DEADBYTE (meaning freed memory is getting used) or PYMEM_CLEANBYTE (meaning uninitialized memory is getting used).\nChanged in version 3.6: The PyMem_SetupDebugHooks()\nfunction now also works on Python\ncompiled in release mode. On error, the debug hooks now use\ntracemalloc\nto get the traceback where a memory block was allocated.\nThe debug hooks now also check if there is an attached thread state when\nfunctions of PYMEM_DOMAIN_OBJ\nand PYMEM_DOMAIN_MEM\ndomains are\ncalled.\nChanged in version 3.8: Byte patterns 0xCB\n(PYMEM_CLEANBYTE\n), 0xDB\n(PYMEM_DEADBYTE\n)\nand 0xFB\n(PYMEM_FORBIDDENBYTE\n) have been replaced with 0xCD\n,\n0xDD\nand 0xFD\nto use the same values as Windows CRT debug\nmalloc()\nand free()\n.\nThe pymalloc allocator\u00b6\nPython has a pymalloc allocator optimized for small objects (smaller or equal\nto 512 bytes) with a short lifetime.
It uses memory mappings called \u201carenas\u201d\nwith a fixed size of either 256 KiB on 32-bit platforms or 1 MiB on 64-bit\nplatforms. It falls back to PyMem_RawMalloc()\nand\nPyMem_RawRealloc()\nfor allocations larger than 512 bytes.\npymalloc is the default allocator of the\nPYMEM_DOMAIN_MEM\n(ex: PyMem_Malloc()\n) and\nPYMEM_DOMAIN_OBJ\n(ex: PyObject_Malloc()\n) domains.\nThe arena allocator uses the following functions:\nVirtualAlloc()\nandVirtualFree()\non Windows,mmap()\nandmunmap()\nif available,malloc()\nandfree()\notherwise.\nThis allocator is disabled if Python is configured with the\n--without-pymalloc\noption. It can also be disabled at runtime using\nthe PYTHONMALLOC\nenvironment variable (ex: PYTHONMALLOC=malloc\n).\nTypically, it makes sense to disable the pymalloc allocator when building\nPython with AddressSanitizer (--with-address-sanitizer\n) which helps\nuncover low level bugs within the C code.\nCustomize pymalloc Arena Allocator\u00b6\nAdded in version 3.4.\n-\ntype PyObjectArenaAllocator\u00b6\nStructure used to describe an arena allocator. The structure has three fields:\nField\nMeaning\nvoid *ctx\nuser context passed as first argument\nvoid* alloc(void *ctx, size_t size)\nallocate an arena of size bytes\nvoid free(void *ctx, void *ptr, size_t size)\nfree an arena\n-\nvoid PyObject_GetArenaAllocator(PyObjectArenaAllocator *allocator)\u00b6\nGet the arena allocator.\n-\nvoid PyObject_SetArenaAllocator(PyObjectArenaAllocator *allocator)\u00b6\nSet the arena allocator.\nThe mimalloc allocator\u00b6\nAdded in version 3.13.\nPython supports the mimalloc allocator when the underlying platform support is available. mimalloc \u201cis a general purpose allocator with excellent performance characteristics. 
Initially developed by Daan Leijen for the runtime systems of the Koka and Lean languages.\u201d\ntracemalloc C API\u00b6\nAdded in version 3.7.\n-\nint PyTraceMalloc_Track(unsigned int domain, uintptr_t ptr, size_t size)\u00b6\nTrack an allocated memory block in the\ntracemalloc\nmodule.Return\n0\non success, return-1\non error (failed to allocate memory to store the trace). Return-2\nif tracemalloc is disabled.If memory block is already tracked, update the existing trace.\n-\nint PyTraceMalloc_Untrack(unsigned int domain, uintptr_t ptr)\u00b6\nUntrack an allocated memory block in the\ntracemalloc\nmodule. Do nothing if the block was not tracked.Return\n-2\nif tracemalloc is disabled, otherwise return0\n.\nExamples\u00b6\nHere is the example from section Overview, rewritten so that the I/O buffer is allocated from the Python heap by using the first function set:\nPyObject *res;\nchar *buf = (char *) PyMem_Malloc(BUFSIZ); /* for I/O */\nif (buf == NULL)\nreturn PyErr_NoMemory();\n/* ...Do some I/O operation involving buf... */\nres = PyBytes_FromString(buf);\nPyMem_Free(buf); /* allocated with PyMem_Malloc */\nreturn res;\nThe same code using the type-oriented function set:\nPyObject *res;\nchar *buf = PyMem_New(char, BUFSIZ); /* for I/O */\nif (buf == NULL)\nreturn PyErr_NoMemory();\n/* ...Do some I/O operation involving buf... */\nres = PyBytes_FromString(buf);\nPyMem_Free(buf); /* allocated with PyMem_New */\nreturn res;\nNote that in the two examples above, the buffer is always manipulated via functions belonging to the same set. Indeed, it is required to use the same memory API family for a given memory block, so that the risk of mixing different allocators is reduced to a minimum. 
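To make the same-family rule above concrete without writing an extension module, here is an illustrative Python-level sketch (assumption: CPython, whose ctypes.pythonapi exposes the allocator symbols). A block obtained from PyMem_Malloc() goes back through PyMem_Free(), never free() or PyObject_Free():

```python
# Sketch (assumes CPython): allocate from the "mem" domain and free it
# with the matching function, as the examples above require.
import ctypes

api = ctypes.pythonapi                      # PyDLL: the GIL stays held,
api.PyMem_Malloc.restype = ctypes.c_void_p  # so a thread state is attached
api.PyMem_Malloc.argtypes = [ctypes.c_size_t]
api.PyMem_Free.restype = None
api.PyMem_Free.argtypes = [ctypes.c_void_p]

block = api.PyMem_Malloc(256)   # from the Python private heap
assert block is not None        # NULL would come back to Python as None
api.PyMem_Free(block)           # same family -- never free() or PyObject_Free()
```

Holding the GIL matters here: PyMem_Malloc() requires an attached thread state, and ctypes.pythonapi (unlike a plain ctypes.CDLL) does not release it around calls.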
The following code sequence contains two errors, one of which is labeled as fatal because it mixes two different allocators operating on different heaps.\nchar *buf1 = PyMem_New(char, BUFSIZ);\nchar *buf2 = (char *) malloc(BUFSIZ);\nchar *buf3 = (char *) PyMem_Malloc(BUFSIZ);\n...\nPyMem_Del(buf3); /* Wrong -- should be PyMem_Free() */\nfree(buf2); /* Right -- allocated via malloc() */\nfree(buf1); /* Fatal -- should be PyMem_Free() */\nIn addition to the functions aimed at handling raw memory blocks from the Python\nheap, objects in Python are allocated and released with PyObject_New\n,\nPyObject_NewVar\nand PyObject_Free()\n.\nThese will be explained in the next chapter on defining and implementing new object types in C.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 6401} +{"url": "https://docs.python.org/3/faq/programming.html", "title": null, "content": "Programming FAQ\u00b6\nGeneral Questions\u00b6\nIs there a source code level debugger with breakpoints, single-stepping, etc.?\u00b6\nYes.\nSeveral debuggers for Python are described below, and the built-in function\nbreakpoint()\nallows you to drop into any of them.\nThe pdb module is a simple but adequate console-mode debugger for Python. It is\npart of the standard Python library, and is documented in the Library\nReference Manual\n. You can also write your own debugger by using the code\nfor pdb as an example.\nThe IDLE interactive development environment, which is part of the standard Python distribution (normally available as Tools/scripts/idle3), includes a graphical debugger.\nPythonWin is a Python IDE that includes a GUI debugger based on pdb. The PythonWin debugger colors breakpoints and has quite a few cool features such as debugging non-PythonWin programs. 
PythonWin is available as part of pywin32 project and as a part of the ActivePython distribution.\nEric is an IDE built on PyQt and the Scintilla editing component.\ntrepan3k is a gdb-like debugger.\nVisual Studio Code is an IDE with debugging tools that integrates with version-control software.\nThere are a number of commercial Python IDEs that include graphical debuggers. They include:\nAre there tools to help find bugs or perform static analysis?\u00b6\nYes.\nPylint and Pyflakes do basic checking that will help you catch bugs sooner.\nStatic type checkers such as Mypy, Pyre, and Pytype can check type hints in Python source code.\nHow can I create a stand-alone binary from a Python script?\u00b6\nYou don\u2019t need the ability to compile Python to C code if all you want is a stand-alone program that users can download and run without having to install the Python distribution first. There are a number of tools that determine the set of modules required by a program and bind these modules together with a Python binary to produce a single executable.\nOne is to use the freeze tool, which is included in the Python source tree as Tools/freeze. It converts Python byte code to C arrays; with a C compiler you can embed all your modules into a new program, which is then linked with the standard Python modules.\nIt works by scanning your source recursively for import statements (in both forms) and looking for the modules in the standard Python path as well as in the source directory (for built-in modules). It then turns the bytecode for modules written in Python into C code (array initializers that can be turned into code objects using the marshal module) and creates a custom-made config file that only contains those built-in modules which are actually used in the program. 
It then compiles the generated C code and links it with the rest of the Python interpreter to form a self-contained binary which acts exactly like your script.\nThe following packages can help with the creation of console and GUI executables:\nNuitka (Cross-platform)\nPyInstaller (Cross-platform)\nPyOxidizer (Cross-platform)\ncx_Freeze (Cross-platform)\npy2app (macOS only)\npy2exe (Windows only)\nAre there coding standards or a style guide for Python programs?\u00b6\nYes. The coding style required for standard library modules is documented as PEP 8.\nCore Language\u00b6\nWhy am I getting an UnboundLocalError when the variable has a value?\u00b6\nIt can be a surprise to get the UnboundLocalError\nin previously working\ncode when it is modified by adding an assignment statement somewhere in\nthe body of a function.\nThis code:\n>>> x = 10\n>>> def bar():\n... print(x)\n...\n>>> bar()\n10\nworks, but this code:\n>>> x = 10\n>>> def foo():\n... print(x)\n... x += 1\nresults in an UnboundLocalError\n:\n>>> foo()\nTraceback (most recent call last):\n...\nUnboundLocalError: local variable 'x' referenced before assignment\nThis is because when you make an assignment to a variable in a scope, that\nvariable becomes local to that scope and shadows any similarly named variable\nin the outer scope. Since the last statement in foo assigns a new value to\nx\n, the compiler recognizes it as a local variable. Consequently, when the\nearlier print(x)\nattempts to print the uninitialized local variable,\nan error results.\nIn the example above, you can access the outer scope variable by declaring it global:\n>>> x = 10\n>>> def foobar():\n... global x\n... print(x)\n... 
x += 1\n...\n>>> foobar()\n10\nThis explicit declaration is required in order to remind you that (unlike the superficially analogous situation with class and instance variables) you are actually modifying the value of the variable in the outer scope:\n>>> print(x)\n11\nYou can do a similar thing in a nested scope using the nonlocal\nkeyword:\n>>> def foo():\n... x = 10\n... def bar():\n... nonlocal x\n... print(x)\n... x += 1\n... bar()\n... print(x)\n...\n>>> foo()\n10\n11\nWhat are the rules for local and global variables in Python?\u00b6\nIn Python, variables that are only referenced inside a function are implicitly global. If a variable is assigned a value anywhere within the function\u2019s body, it\u2019s assumed to be a local unless explicitly declared as global.\nThough a bit surprising at first, a moment\u2019s consideration explains this. On\none hand, requiring global\nfor assigned variables provides a bar\nagainst unintended side-effects. On the other hand, if global\nwas required\nfor all global references, you\u2019d be using global\nall the time. You\u2019d have\nto declare as global every reference to a built-in function or to a component of\nan imported module. This clutter would defeat the usefulness of the global\ndeclaration for identifying side-effects.\nWhy do lambdas defined in a loop with different values all return the same result?\u00b6\nAssume you use a for loop to define a few different lambdas (or even plain functions), e.g.:\n>>> squares = []\n>>> for x in range(5):\n... squares.append(lambda: x**2)\nThis gives you a list that contains 5 lambdas that calculate x**2\n. You\nmight expect that, when called, they would return, respectively, 0\n, 1\n,\n4\n, 9\n, and 16\n. 
However, when you actually try you will see that\nthey all return 16\n:\n>>> squares[2]()\n16\n>>> squares[4]()\n16\nThis happens because x\nis not local to the lambdas, but is defined in\nthe outer scope, and it is accessed when the lambda is called \u2014 not when it\nis defined. At the end of the loop, the value of x\nis 4\n, so all the\nfunctions now return 4**2\n, i.e. 16\n. You can also verify this by\nchanging the value of x\nand see how the results of the lambdas change:\n>>> x = 8\n>>> squares[2]()\n64\nIn order to avoid this, you need to save the values in variables local to the\nlambdas, so that they don\u2019t rely on the value of the global x\n:\n>>> squares = []\n>>> for x in range(5):\n... squares.append(lambda n=x: n**2)\nHere, n=x\ncreates a new variable n\nlocal to the lambda and computed\nwhen the lambda is defined so that it has the same value that x\nhad at\nthat point in the loop. This means that the value of n\nwill be 0\nin the first lambda, 1\nin the second, 2\nin the third, and so on.\nTherefore each lambda will now return the correct result:\n>>> squares[2]()\n4\n>>> squares[4]()\n16\nNote that this behaviour is not peculiar to lambdas, but applies to regular functions too.\nWhat are the \u201cbest practices\u201d for using import in a module?\u00b6\nIn general, don\u2019t use from modulename import *\n. Doing so clutters the\nimporter\u2019s namespace, and makes it much harder for linters to detect undefined\nnames.\nImport modules at the top of a file. Doing so makes it clear what other modules your code requires and avoids questions of whether the module name is in scope. 
Using one import per line makes it easy to add and delete module imports, but using multiple imports per line uses less screen space.\nIt\u2019s good practice if you import modules in the following order:\nstandard library modules \u2013 e.g.\nsys\n,os\n,argparse\n,re\nthird-party library modules (anything installed in Python\u2019s site-packages directory) \u2013 e.g.\ndateutil\n,requests\n,PIL.Image\nlocally developed modules\nIt is sometimes necessary to move imports to a function or class to avoid problems with circular imports. Gordon McMillan says:\nCircular imports are fine where both modules use the \u201cimport <module>\u201d form of import. They fail when the 2nd module wants to grab a name out of the first (\u201cfrom module import name\u201d) and the import is at the top level. That\u2019s because names in the 1st are not yet available, because the first module is busy importing the 2nd.\nIn this case, if the second module is only used in one function, then the import can easily be moved into that function. By the time the import is called, the first module will have finished initializing, and the second module can do its import.\nIt may also be necessary to move imports out of the top level of code if some of the modules are platform-specific. In that case, it may not even be possible to import all of the modules at the top of the file. In this case, importing the correct modules in the corresponding platform-specific code is a good option.\nOnly move imports into a local scope, such as inside a function definition, if\nit\u2019s necessary to solve a problem such as avoiding a circular import or if you are\ntrying to reduce the initialization time of a module. This technique is\nespecially helpful if many of the imports are unnecessary depending on how the\nprogram executes. You may also want to move imports into a function if the\nmodules are only ever used in that function.
Note that loading a module the\nfirst time may be expensive because of the one time initialization of the\nmodule, but loading a module multiple times is virtually free, costing only a\ncouple of dictionary lookups. Even if the module name has gone out of scope,\nthe module is probably available in sys.modules\n.\nHow can I pass optional or keyword parameters from one function to another?\u00b6\nCollect the arguments using the *\nand **\nspecifiers in the function\u2019s\nparameter list; this gives you the positional arguments as a tuple and the\nkeyword arguments as a dictionary. You can then pass these arguments when\ncalling another function by using *\nand **\n:\ndef f(x, *args, **kwargs):\n...\nkwargs['width'] = '14.3c'\n...\ng(x, *args, **kwargs)\nWhat is the difference between arguments and parameters?\u00b6\nParameters are defined by the names that appear in a function definition, whereas arguments are the values actually passed to a function when calling it. Parameters define what kind of arguments a function can accept. For example, given the function definition:\ndef func(foo, bar=None, **kwargs):\npass\nfoo, bar and kwargs are parameters of func\n. However, when calling\nfunc\n, for example:\nfunc(42, bar=314, extra=somevar)\nthe values 42\n, 314\n, and somevar\nare arguments.\nWhy did changing list \u2018y\u2019 also change list \u2018x\u2019?\u00b6\nIf you wrote code like:\n>>> x = []\n>>> y = x\n>>> y.append(10)\n>>> y\n[10]\n>>> x\n[10]\nyou might be wondering why appending an element to y\nchanged x\ntoo.\nThere are two factors that produce this result:\nVariables are simply names that refer to objects. Doing\ny = x\ndoesn\u2019t create a copy of the list \u2013 it creates a new variabley\nthat refers to the same objectx\nrefers to. 
This means that there is only one object (the list), and bothx\nandy\nrefer to it.Lists are mutable, which means that you can change their content.\nAfter the call to append()\n, the content of the mutable object has\nchanged from []\nto [10]\n. Since both the variables refer to the same\nobject, using either name accesses the modified value [10]\n.\nIf we instead assign an immutable object to x\n:\n>>> x = 5 # ints are immutable\n>>> y = x\n>>> x = x + 1 # 5 can't be mutated, we are creating a new object here\n>>> x\n6\n>>> y\n5\nwe can see that in this case x\nand y\nare not equal anymore. This is\nbecause integers are immutable, and when we do x = x + 1\nwe are not\nmutating the int 5\nby incrementing its value; instead, we are creating a\nnew object (the int 6\n) and assigning it to x\n(that is, changing which\nobject x\nrefers to). After this assignment we have two objects (the ints\n6\nand 5\n) and two variables that refer to them (x\nnow refers to\n6\nbut y\nstill refers to 5\n).\nSome operations (for example y.append(10)\nand y.sort()\n) mutate the\nobject, whereas superficially similar operations (for example y = y + [10]\nand sorted(y)\n) create a new object. In general in Python (and in all cases\nin the standard library) a method that mutates an object will return None\nto help avoid getting the two types of operations confused. So if you\nmistakenly write y.sort()\nthinking it will give you a sorted copy of y\n,\nyou\u2019ll instead end up with None\n, which will likely cause your program to\ngenerate an easily diagnosed error.\nHowever, there is one class of operations where the same operation sometimes\nhas different behaviors with different types: the augmented assignment\noperators. 
For example, += mutates lists but not tuples or ints (a_list += [1, 2, 3] is equivalent to a_list.extend([1, 2, 3]) and mutates a_list, whereas some_tuple += (1, 2, 3) and some_int += 1 create new objects).

In other words:
If we have a mutable object (list, dict, set, etc.), we can use some specific operations to mutate it, and all the variables that refer to it will see the change.
If we have an immutable object (str, int, tuple, etc.), all the variables that refer to it will always see the same value, but operations that transform that value into a new value always return a new object.

If you want to know if two variables refer to the same object or not, you can use the is operator, or the built-in function id().

How do I write a function with output parameters (call by reference)?¶
Remember that arguments are passed by assignment in Python. Since assignment just creates references to objects, there's no alias between an argument name in the caller and callee, and so no call-by-reference per se. You can achieve the desired effect in a number of ways.

By returning a tuple of the results:

>>> def func1(a, b):
...     a = 'new-value'      # a and b are local names
...     b = b + 1            # assigned to new objects
...     return a, b          # return new values
...
>>> x, y = 'old-value', 99
>>> func1(x, y)
('new-value', 100)

This is almost always the clearest solution.

By using global variables. This isn't thread-safe, and is not recommended.

By passing a mutable (changeable in-place) object:

>>> def func2(a):
...     a[0] = 'new-value'   # 'a' references a mutable list
...     a[1] = a[1] + 1      # changes a shared object
...
>>> args = ['old-value', 99]
>>> func2(args)
>>> args
['new-value', 100]

By passing in a dictionary that gets mutated:

>>> def func3(args):
...     args['a'] = 'new-value'     # args is a mutable dictionary
...     args['b'] = args['b'] + 1   # change it in-place
...
>>> args = {'a': 'old-value', 'b': 99}
>>> func3(args)
>>> args
{'a': 'new-value', 'b': 100}

Or bundle up values in a class instance:

>>> class Namespace:
...     def __init__(self, /, **args):
...         for key, value in args.items():
...             setattr(self, key, value)
...
>>> def func4(args):
...     args.a = 'new-value'        # args is a mutable Namespace
...     args.b = args.b + 1         # change object in-place
...
>>> args = Namespace(a='old-value', b=99)
>>> func4(args)
>>> vars(args)
{'a': 'new-value', 'b': 100}

There's almost never a good reason to get this complicated. Your best choice is to return a tuple containing the multiple results.

How do you make a higher order function in Python?¶
You have two choices: you can use nested scopes or you can use callable objects. For example, suppose you wanted to define linear(a, b) which returns a function f(x) that computes the value a*x+b. Using nested scopes:

def linear(a, b):
    def result(x):
        return a * x + b
    return result

Or using a callable object:

class linear:
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __call__(self, x):
        return self.a * x + self.b

In both cases,

taxes = linear(0.3, 2)

gives a callable object where taxes(10e6) == 0.3 * 10e6 + 2.

The callable object approach has the disadvantage that it is a bit slower and results in slightly longer code.
However, note that a collection of callables can share their signature via inheritance:

class exponential(linear):
    # __init__ inherited
    def __call__(self, x):
        return self.a * (x ** self.b)

Objects can encapsulate state for several methods:

class counter:
    value = 0
    def set(self, x):
        self.value = x
    def up(self):
        self.value = self.value + 1
    def down(self):
        self.value = self.value - 1

count = counter()
inc, dec, reset = count.up, count.down, count.set

Here inc(), dec() and reset() act like functions which share the same counting variable.

How do I copy an object in Python?¶
Try copy.copy() or copy.deepcopy() for the general case. Not all objects can be copied, but most can.

Some objects can be copied more easily. Dictionaries have a copy() method:

newdict = olddict.copy()

Sequences can be copied by slicing:

new_l = l[:]

How can I find the methods or attributes of an object?¶
For an instance x of a user-defined class, dir(x) returns an alphabetized list of the names of the instance's attributes, and of the methods and attributes defined by its class.

How can my code discover the name of an object?¶
Generally speaking, it can't, because objects don't really have names. Essentially, assignment always binds a name to a value; the same is true of def and class statements, but in that case the value is a callable. Consider the following code:

>>> class A:
...     pass
...
>>> B = A
>>> a = B()
>>> b = a
>>> print(b)
<__main__.A object at 0x16D07CC>
>>> print(a)
<__main__.A object at 0x16D07CC>

Arguably the class has a name: even though it is bound to two names and invoked through the name B the created instance is still reported as an instance of class A.
However, it is impossible to say whether the instance\u2019s name is a\nor\nb\n, since both names are bound to the same value.\nGenerally speaking it should not be necessary for your code to \u201cknow the names\u201d of particular values. Unless you are deliberately writing introspective programs, this is usually an indication that a change of approach might be beneficial.\nIn comp.lang.python, Fredrik Lundh once gave an excellent analogy in answer to this question:\nThe same way as you get the name of that cat you found on your porch: the cat (object) itself cannot tell you its name, and it doesn\u2019t really care \u2013 so the only way to find out what it\u2019s called is to ask all your neighbours (namespaces) if it\u2019s their cat (object)\u2026\n\u2026.and don\u2019t be surprised if you\u2019ll find that it\u2019s known by many names, or no name at all!\nWhat\u2019s up with the comma operator\u2019s precedence?\u00b6\nComma is not an operator in Python. Consider this session:\n>>> \"a\" in \"b\", \"a\"\n(False, 'a')\nSince the comma is not an operator, but a separator between expressions the above is evaluated as if you had entered:\n(\"a\" in \"b\"), \"a\"\nnot:\n\"a\" in (\"b\", \"a\")\nThe same is true of the various assignment operators (=\n, +=\netc). They\nare not truly operators but syntactic delimiters in assignment statements.\nIs there an equivalent of C\u2019s \u201c?:\u201d ternary operator?\u00b6\nYes, there is. The syntax is as follows:\n[on_true] if [expression] else [on_false]\nx, y = 50, 25\nsmall = x if x < y else y\nBefore this syntax was introduced in Python 2.5, a common idiom was to use logical operators:\n[expression] and [on_true] or [on_false]\nHowever, this idiom is unsafe, as it can give wrong results when on_true\nhas a false boolean value. Therefore, it is always better to use\nthe ... if ... else ...\nform.\nIs it possible to write obfuscated one-liners in Python?\u00b6\nYes. 
Usually this is done by nesting lambda\nwithin\nlambda\n. See the following three examples, slightly adapted from Ulf Bartelt:\nfrom functools import reduce\n# Primes < 1000\nprint(list(filter(None,map(lambda y:y*reduce(lambda x,y:x*y!=0,\nmap(lambda x,y=y:y%x,range(2,int(pow(y,0.5)+1))),1),range(2,1000)))))\n# First 10 Fibonacci numbers\nprint(list(map(lambda x,f=lambda x,f:(f(x-1,f)+f(x-2,f)) if x>1 else 1:\nf(x,f), range(10))))\n# Mandelbrot set\nprint((lambda Ru,Ro,Iu,Io,IM,Sx,Sy:reduce(lambda x,y:x+'\\n'+y,map(lambda y,\nIu=Iu,Io=Io,Ru=Ru,Ro=Ro,Sy=Sy,L=lambda yc,Iu=Iu,Io=Io,Ru=Ru,Ro=Ro,i=IM,\nSx=Sx,Sy=Sy:reduce(lambda x,y:x+y,map(lambda x,xc=Ru,yc=yc,Ru=Ru,Ro=Ro,\ni=i,Sx=Sx,F=lambda xc,yc,x,y,k,f=lambda xc,yc,x,y,k,f:(k<=0)or (x*x+y*y\n>=4.0) or 1+f(xc,yc,x*x-y*y+xc,2.0*x*y+yc,k-1,f):f(xc,yc,x,y,k,f):chr(\n64+F(Ru+x*(Ro-Ru)/Sx,yc,0,0,i)),range(Sx))):L(Iu+y*(Io-Iu)/Sy),range(Sy\n))))(-2.1, 0.7, -1.2, 1.2, 30, 80, 24))\n# \\___ ___/ \\___ ___/ | | |__ lines on screen\n# V V | |______ columns on screen\n# | | |__________ maximum of \"iterations\"\n# | |_________________ range on y axis\n# |____________________________ range on x axis\nDon\u2019t try this at home, kids!\nWhat does the slash(/) in the parameter list of a function mean?\u00b6\nA slash in the argument list of a function denotes that the parameters prior to\nit are positional-only. Positional-only parameters are the ones without an\nexternally usable name. Upon calling a function that accepts positional-only\nparameters, arguments are mapped to parameters based solely on their position.\nFor example, divmod()\nis a function that accepts positional-only\nparameters. Its documentation looks like this:\n>>> help(divmod)\nHelp on built-in function divmod in module builtins:\ndivmod(x, y, /)\nReturn the tuple (x//y, x%y). Invariant: div*y + mod == x.\nThe slash at the end of the parameter list means that both parameters are\npositional-only. 
Thus, calling divmod()\nwith keyword arguments would lead\nto an error:\n>>> divmod(x=3, y=4)\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: divmod() takes no keyword arguments\nNumbers and strings\u00b6\nHow do I specify hexadecimal and octal integers?\u00b6\nTo specify an octal digit, precede the octal value with a zero, and then a lower or uppercase \u201co\u201d. For example, to set the variable \u201ca\u201d to the octal value \u201c10\u201d (8 in decimal), type:\n>>> a = 0o10\n>>> a\n8\nHexadecimal is just as easy. Simply precede the hexadecimal number with a zero, and then a lower or uppercase \u201cx\u201d. Hexadecimal digits can be specified in lower or uppercase. For example, in the Python interpreter:\n>>> a = 0xa5\n>>> a\n165\n>>> b = 0XB2\n>>> b\n178\nWhy does -22 // 10 return -3?\u00b6\nIt\u2019s primarily driven by the desire that i % j\nhave the same sign as j\n.\nIf you want that, and also want:\ni == (i // j) * j + (i % j)\nthen integer division has to return the floor. C also requires that identity to\nhold, and then compilers that truncate i // j\nneed to make i % j\nhave\nthe same sign as i\n.\nThere are few real use cases for i % j\nwhen j\nis negative. When j\nis positive, there are many, and in virtually all of them it\u2019s more useful for\ni % j\nto be >= 0\n. If the clock says 10 now, what did it say 200 hours\nago? -190 % 12 == 2\nis useful; -190 % 12 == -10\nis a bug waiting to\nbite.\nHow do I get int literal attribute instead of SyntaxError?\u00b6\nTrying to lookup an int\nliteral attribute in the normal manner gives\na SyntaxError\nbecause the period is seen as a decimal point:\n>>> 1.__class__\nFile \"\", line 1\n1.__class__\n^\nSyntaxError: invalid decimal literal\nThe solution is to separate the literal from the period with either a space or parentheses.\n>>> 1 .__class__\n\n>>> (1).__class__\n\nHow do I convert a string to a number?\u00b6\nFor integers, use the built-in int()\ntype constructor, e.g. 
int('144')\n== 144\n. Similarly, float()\nconverts to a floating-point number,\ne.g. float('144') == 144.0\n.\nBy default, these interpret the number as decimal, so that int('0144') ==\n144\nholds true, and int('0x144')\nraises ValueError\n. int(string,\nbase)\ntakes the base to convert from as a second optional argument, so int(\n'0x144', 16) == 324\n. If the base is specified as 0, the number is interpreted\nusing Python\u2019s rules: a leading \u20180o\u2019 indicates octal, and \u20180x\u2019 indicates a hex\nnumber.\nDo not use the built-in function eval()\nif all you need is to convert\nstrings to numbers. eval()\nwill be significantly slower and it presents a\nsecurity risk: someone could pass you a Python expression that might have\nunwanted side effects. For example, someone could pass\n__import__('os').system(\"rm -rf $HOME\")\nwhich would erase your home\ndirectory.\neval()\nalso has the effect of interpreting numbers as Python expressions,\nso that e.g. eval('09')\ngives a syntax error because Python does not allow\nleading \u20180\u2019 in a decimal number (except \u20180\u2019).\nHow do I convert a number to a string?\u00b6\nTo convert, e.g., the number 144\nto the string '144'\n, use the built-in type\nconstructor str()\n. If you want a hexadecimal or octal representation, use\nthe built-in functions hex()\nor oct()\n. For fancy formatting, see\nthe f-strings and Format String Syntax sections,\ne.g. \"{:04d}\".format(144)\nyields\n'0144'\nand \"{:.3f}\".format(1.0/3.0)\nyields '0.333'\n.\nHow do I modify a string in place?\u00b6\nYou can\u2019t, because strings are immutable. In most situations, you should\nsimply construct a new string from the various parts you want to assemble\nit from. 
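For instance, the usual way to get a "modified" string is to assemble a new one from slices and transformed pieces; a small illustrative sketch:

```python
s = "Hello, world"
# Strings are immutable, so build new strings instead of editing in place.
t = "y" + s[1:]                   # replace the first character
u = s.replace("world", "there")   # replace() also returns a new string
print(t)  # yello, world
print(u)  # Hello, there
```

The original string s is untouched by both operations; t and u are new objects.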
However, if you need an object with the ability to modify in-place unicode data, try using an io.StringIO object or the array module:

>>> import io
>>> s = "Hello, world"
>>> sio = io.StringIO(s)
>>> sio.getvalue()
'Hello, world'
>>> sio.seek(7)
7
>>> sio.write("there!")
6
>>> sio.getvalue()
'Hello, there!'

>>> import array
>>> a = array.array('w', s)
>>> print(a)
array('w', 'Hello, world')
>>> a[0] = 'y'
>>> print(a)
array('w', 'yello, world')
>>> a.tounicode()
'yello, world'

How do I use strings to call functions/methods?¶
There are various techniques.

The best is to use a dictionary that maps strings to functions. The primary advantage of this technique is that the strings do not need to match the names of the functions. This is also the primary technique used to emulate a case construct:

def a():
    pass

def b():
    pass

dispatch = {'go': a, 'stop': b}  # Note lack of parens for funcs

dispatch[get_input()]()  # Note trailing parens to call function

Use the built-in function getattr():

import foo
getattr(foo, 'bar')()

Note that getattr() works on any object, including classes, class instances, modules, and so on.
This is used in several places in the standard library, like this:

class Foo:
    def do_foo(self):
        ...
    def do_bar(self):
        ...

f = getattr(foo_instance, 'do_' + opname)
f()

Use locals() to resolve the function name:

def myFunc():
    print("hello")

fname = "myFunc"
f = locals()[fname]
f()

Is there an equivalent of Perl's chomp() for removing trailing newlines from strings?¶
You can use S.rstrip("\r\n") to remove all occurrences of any line terminator from the end of the string S without removing other trailing whitespace. If the string S represents more than one line, with several empty lines at the end, the line terminators for all the blank lines will be removed:

>>> lines = ("line 1 \r\n"
... "\r\n"
... 
\"\\r\\n\")\n>>> lines.rstrip(\"\\n\\r\")\n'line 1 '\nSince this is typically only desired when reading text one line at a time, using\nS.rstrip()\nthis way works well.\nIs there a scanf()\nor sscanf()\nequivalent?\u00b6\nNot as such.\nFor simple input parsing, the easiest approach is usually to split the line into\nwhitespace-delimited words using the split()\nmethod of string objects\nand then convert decimal strings to numeric values using int()\nor\nfloat()\n. split()\nsupports an optional \u201csep\u201d parameter which is useful\nif the line uses something other than whitespace as a separator.\nFor more complicated input parsing, regular expressions are more powerful\nthan C\u2019s sscanf\nand better suited for the task.\nWhat does UnicodeDecodeError\nor UnicodeEncodeError\nerror mean?\u00b6\nSee the Unicode HOWTO.\nCan I end a raw string with an odd number of backslashes?\u00b6\nA raw string ending with an odd number of backslashes will escape the string\u2019s quote:\n>>> r'C:\\this\\will\\not\\work\\'\nFile \"\", line 1\nr'C:\\this\\will\\not\\work\\'\n^\nSyntaxError: unterminated string literal (detected at line 1)\nThere are several workarounds for this. One is to use regular strings and double the backslashes:\n>>> 'C:\\\\this\\\\will\\\\work\\\\'\n'C:\\\\this\\\\will\\\\work\\\\'\nAnother is to concatenate a regular string containing an escaped backslash to the raw string:\n>>> r'C:\\this\\will\\work' '\\\\'\n'C:\\\\this\\\\will\\\\work\\\\'\nIt is also possible to use os.path.join()\nto append a backslash on Windows:\n>>> os.path.join(r'C:\\this\\will\\work', '')\n'C:\\\\this\\\\will\\\\work\\\\'\nNote that while a backslash will \u201cescape\u201d a quote for the purposes of determining where the raw string ends, no escaping occurs when interpreting the value of the raw string. 
That is, the backslash remains present in the value of the raw string:\n>>> r'backslash\\'preserved'\n\"backslash\\\\'preserved\"\nAlso see the specification in the language reference.\nPerformance\u00b6\nMy program is too slow. How do I speed it up?\u00b6\nThat\u2019s a tough one, in general. First, here are a list of things to remember before diving further:\nPerformance characteristics vary across Python implementations. This FAQ focuses on CPython.\nBehaviour can vary across operating systems, especially when talking about I/O or multi-threading.\nYou should always find the hot spots in your program before attempting to optimize any code (see the\nprofile\nmodule).Writing benchmark scripts will allow you to iterate quickly when searching for improvements (see the\ntimeit\nmodule).It is highly recommended to have good code coverage (through unit testing or any other technique) before potentially introducing regressions hidden in sophisticated optimizations.\nThat being said, there are many tricks to speed up Python code. Here are some general principles which go a long way towards reaching acceptable performance levels:\nMaking your algorithms faster (or changing to faster ones) can yield much larger benefits than trying to sprinkle micro-optimization tricks all over your code.\nUse the right data structures. Study documentation for the Built-in Types and the\ncollections\nmodule.When the standard library provides a primitive for doing something, it is likely (although not guaranteed) to be faster than any alternative you may come up with. This is doubly true for primitives written in C, such as builtins and some extension types. For example, be sure to use either the\nlist.sort()\nbuilt-in method or the relatedsorted()\nfunction to do sorting (and see the Sorting Techniques for examples of moderately advanced usage).Abstractions tend to create indirections and force the interpreter to work more. 
If the levels of indirection outweigh the amount of useful work done, your program will be slower. You should avoid excessive abstraction, especially under the form of tiny functions or methods (which are also often detrimental to readability).\nIf you have reached the limit of what pure Python can allow, there are tools to take you further away. For example, Cython can compile a slightly modified version of Python code into a C extension, and can be used on many different platforms. Cython can take advantage of compilation (and optional type annotations) to make your code significantly faster than when interpreted. If you are confident in your C programming skills, you can also write a C extension module yourself.\nSee also\nThe wiki page devoted to performance tips.\nWhat is the most efficient way to concatenate many strings together?\u00b6\nstr\nand bytes\nobjects are immutable, therefore concatenating\nmany strings together is inefficient as each concatenation creates a new\nobject. In the general case, the total runtime cost is quadratic in the\ntotal string length.\nTo accumulate many str\nobjects, the recommended idiom is to place\nthem into a list and call str.join()\nat the end:\nchunks = []\nfor s in my_strings:\nchunks.append(s)\nresult = ''.join(chunks)\n(another reasonably efficient idiom is to use io.StringIO\n)\nTo accumulate many bytes\nobjects, the recommended idiom is to extend\na bytearray\nobject using in-place concatenation (the +=\noperator):\nresult = bytearray()\nfor b in my_bytes_objects:\nresult += b\nSequences (Tuples/Lists)\u00b6\nHow do I convert between tuples and lists?\u00b6\nThe type constructor tuple(seq)\nconverts any sequence (actually, any\niterable) into a tuple with the same items in the same order.\nFor example, tuple([1, 2, 3])\nyields (1, 2, 3)\nand tuple('abc')\nyields ('a', 'b', 'c')\n. 
If the argument is a tuple, it does not make a copy but returns the same object, so it is cheap to call tuple() when you aren't sure that an object is already a tuple.

The type constructor list(seq) converts any sequence or iterable into a list with the same items in the same order. For example, list((1, 2, 3)) yields [1, 2, 3] and list('abc') yields ['a', 'b', 'c']. If the argument is a list, it makes a copy just like seq[:] would.

What's a negative index?¶
Python sequences are indexed with positive and negative numbers. For positive numbers, 0 is the first index, 1 is the second index, and so forth. For negative indices, -1 is the last index, -2 is the penultimate (next to last) index, and so forth. Think of seq[-n] as the same as seq[len(seq)-n].

Using negative indices can be very convenient. For example S[:-1] is all of the string except for its last character, which is useful for removing the trailing newline from a string.

How do I iterate over a sequence in reverse order?¶
Use the reversed() built-in function:

for x in reversed(sequence):
    ...  # do something with x ...

This won't touch your original sequence; it returns an iterator that yields the items in reverse order.

How do you remove duplicates from a list?¶
See the Python Cookbook for a long discussion of many ways to do this:

If you don't mind reordering the list, sort it and then scan from the end of the list, deleting duplicates as you go:

if mylist:
    mylist.sort()
    last = mylist[-1]
    for i in range(len(mylist)-2, -1, -1):
        if last == mylist[i]:
            del mylist[i]
        else:
            last = mylist[i]

If all elements of the list may be used as set keys (i.e.
they are all hashable) this is often faster\nmylist = list(set(mylist))\nThis converts the list into a set, thereby removing duplicates, and then back into a list.\nHow do you remove multiple items from a list?\u00b6\nAs with removing duplicates, explicitly iterating in reverse with a delete condition is one possibility. However, it is easier and faster to use slice replacement with an implicit or explicit forward iteration. Here are three variations:\nmylist[:] = filter(keep_function, mylist)\nmylist[:] = (x for x in mylist if keep_condition)\nmylist[:] = [x for x in mylist if keep_condition]\nThe list comprehension may be fastest.\nHow do you make an array in Python?\u00b6\nUse a list:\n[\"this\", 1, \"is\", \"an\", \"array\"]\nLists are equivalent to C or Pascal arrays in their time complexity; the primary difference is that a Python list can contain objects of many different types.\nThe array\nmodule also provides methods for creating arrays of fixed types\nwith compact representations, but they are slower to index than lists. Also\nnote that NumPy\nand other third party packages define array-like structures with\nvarious characteristics as well.\nTo get Lisp-style linked lists, you can emulate cons cells using tuples:\nlisp_list = (\"like\", (\"this\", (\"example\", None) ) )\nIf mutability is desired, you could use lists instead of tuples. Here the\nanalogue of a Lisp car is lisp_list[0]\nand the analogue of cdr is\nlisp_list[1]\n. 
Only do this if you're sure you really need to, because it's usually a lot slower than using Python lists.

How do I create a multidimensional list?¶
You probably tried to make a multidimensional array like this:

>>> A = [[None] * 2] * 3

This looks correct if you print it:

>>> A
[[None, None], [None, None], [None, None]]

But when you assign a value, it shows up in multiple places:

>>> A[0][0] = 5
>>> A
[[5, None], [5, None], [5, None]]

The reason is that replicating a list with * doesn't create copies, it only creates references to the existing objects. The *3 creates a list containing 3 references to the same list of length two. Changes to one row will show in all rows, which is almost certainly not what you want.

The suggested approach is to create a list of the desired length first and then fill in each element with a newly created list:

A = [None] * 3
for i in range(3):
    A[i] = [None] * 2

This generates a list containing 3 different lists of length two.
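With three independent row lists, an assignment now affects only the intended cell; a quick check of the approach just described:

```python
# Build the rows one at a time so each is a distinct list object.
A = [None] * 3
for i in range(3):
    A[i] = [None] * 2

A[0][0] = 5
print(A)  # [[5, None], [None, None], [None, None]]
```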
You can also use a list comprehension:

w, h = 2, 3
A = [[None] * w for i in range(h)]

Or, you can use an extension that provides a matrix datatype; NumPy is the best known.

How do I apply a method or function to a sequence of objects?¶
To call a method or function and accumulate the return values in a list, a list comprehension is an elegant solution:

result = [obj.method() for obj in mylist]

result = [function(obj) for obj in mylist]

To just run the method or function without saving the return values, a plain for loop will suffice:

for obj in mylist:
    obj.method()

for obj in mylist:
    function(obj)

Why does a_tuple[i] += ['item'] raise an exception when the addition works?¶
This is because of a combination of the fact that augmented assignment operators are assignment operators, and the difference between mutable and immutable objects in Python.

This discussion applies in general when augmented assignment operators are applied to elements of a tuple that point to mutable objects, but we'll use a list and += as our exemplar.

If you wrote:

>>> a_tuple = (1, 2)
>>> a_tuple[0] += 1
Traceback (most recent call last):
...
TypeError: 'tuple' object does not support item assignment

The reason for the exception should be immediately clear: 1 is added to the object a_tuple[0] points to (1), producing the result object, 2, but when we attempt to assign the result of the computation, 2, to element 0 of the tuple, we get an error because we can't change what an element of a tuple points to.

Under the covers, what this augmented assignment statement is doing is approximately this:

>>> result = a_tuple[0] + 1
>>> a_tuple[0] = result
Traceback (most recent call last):
...
TypeError: 'tuple' object does not support item assignment

It is the assignment part of the operation that produces the error, since a tuple is immutable.

When you write something like:

>>> a_tuple = (['foo'], 'bar')
>>> a_tuple[0] += 
['item']\nTraceback (most recent call last):\n...\nTypeError: 'tuple' object does not support item assignment\nThe exception is a bit more surprising, and even more surprising is the fact that even though there was an error, the append worked:\n>>> a_tuple[0]\n['foo', 'item']\nTo see why this happens, you need to know that (a) if an object implements an\n__iadd__()\nmagic method, it gets called when the +=\naugmented\nassignment\nis executed, and its return value is what gets used in the assignment statement;\nand (b) for lists, __iadd__()\nis equivalent to calling\nextend()\non the list and returning the list.\nThat\u2019s why we say that for lists, +=\nis a \u201cshorthand\u201d for list.extend()\n:\n>>> a_list = []\n>>> a_list += [1]\n>>> a_list\n[1]\nThis is equivalent to:\n>>> result = a_list.__iadd__([1])\n>>> a_list = result\nThe object pointed to by a_list has been mutated, and the pointer to the\nmutated object is assigned back to a_list\n. The end result of the\nassignment is a no-op, since it is a pointer to the same object that a_list\nwas previously pointing to, but the assignment still happens.\nThus, in our tuple example what is happening is equivalent to:\n>>> result = a_tuple[0].__iadd__(['item'])\n>>> a_tuple[0] = result\nTraceback (most recent call last):\n...\nTypeError: 'tuple' object does not support item assignment\nThe __iadd__()\nsucceeds, and thus the list is extended, but even though\nresult\npoints to the same object that a_tuple[0]\nalready points to,\nthat final assignment still results in an error, because tuples are immutable.\nI want to do a complicated sort: can you do a Schwartzian Transform in Python?\u00b6\nThe technique, attributed to Randal Schwartz of the Perl community, sorts the\nelements of a list by a metric which maps each element to its \u201csort value\u201d. 
In\nPython, use the key\nargument for the list.sort()\nmethod:\nIsorted = L[:]\nIsorted.sort(key=lambda s: int(s[10:15]))\nHow can I sort one list by values from another list?\u00b6\nMerge them into an iterator of tuples, sort the resulting list, and then pick out the element you want.\n>>> list1 = [\"what\", \"I'm\", \"sorting\", \"by\"]\n>>> list2 = [\"something\", \"else\", \"to\", \"sort\"]\n>>> pairs = zip(list1, list2)\n>>> pairs = sorted(pairs)\n>>> pairs\n[(\"I'm\", 'else'), ('by', 'sort'), ('sorting', 'to'), ('what', 'something')]\n>>> result = [x[1] for x in pairs]\n>>> result\n['else', 'sort', 'to', 'something']\nObjects\u00b6\nWhat is a class?\u00b6\nA class is the particular object type created by executing a class statement. Class objects are used as templates to create instance objects, which embody both the data (attributes) and code (methods) specific to a datatype.\nA class can be based on one or more other classes, called its base class(es). It\nthen inherits the attributes and methods of its base classes. This allows an\nobject model to be successively refined by inheritance. You might have a\ngeneric Mailbox\nclass that provides basic accessor methods for a mailbox,\nand subclasses such as MboxMailbox\n, MaildirMailbox\n, OutlookMailbox\nthat handle various specific mailbox formats.\nWhat is a method?\u00b6\nA method is a function on some object x\nthat you normally call as\nx.name(arguments...)\n. Methods are defined as functions inside the class\ndefinition:\nclass C:\ndef meth(self, arg):\nreturn arg * 2 + self.attribute\nWhat is self?\u00b6\nSelf is merely a conventional name for the first argument of a method. 
A method defined as meth(self, a, b, c) should be called as x.meth(a, b, c) for some instance x of the class in which the definition occurs; the called method will think it is called as meth(x, a, b, c).

See also Why must 'self' be used explicitly in method definitions and calls?.

How do I check if an object is an instance of a given class or of a subclass of it?¶
Use the built-in function isinstance(obj, cls). You can check if an object is an instance of any of a number of classes by providing a tuple instead of a single class, e.g. isinstance(obj, (class1, class2, ...)), and can also check whether an object is one of Python's built-in types, e.g. isinstance(obj, str) or isinstance(obj, (int, float, complex)).

Note that isinstance() also checks for virtual inheritance from an abstract base class. So, the test will return True for a registered class even if it hasn't directly or indirectly inherited from it. To test for "true inheritance", scan the MRO of the class:

from collections.abc import Mapping

class P:
    pass

class C(P):
    pass

Mapping.register(P)

>>> c = C()
>>> isinstance(c, C)        # direct
True
>>> isinstance(c, P)        # indirect
True
>>> isinstance(c, Mapping)  # virtual
True

# Actual inheritance chain
>>> type(c).__mro__
(<class 'C'>, <class 'P'>, <class 'object'>)

# Test for "true inheritance"
>>> Mapping in type(c).__mro__
False

Note that most programs do not use isinstance() on user-defined classes very often. If you are developing the classes yourself, a more proper object-oriented style is to define methods on the classes that encapsulate a particular behaviour, instead of checking the object's class and doing a different thing based on what class it is. For example, if you have a function that does something:

def search(obj):
    if isinstance(obj, Mailbox):
        ...  # code to search a mailbox
    elif isinstance(obj, Document):
        ...  # code to search a document
    elif ...

A better approach is to define a search() method on all the classes and just call it:

class Mailbox:
    def search(self):
        ...  # code to search a mailbox

class Document:
    def search(self):
        ...  # code to search a document

obj.search()

What is delegation?¶
Delegation is an object oriented technique (also called a design pattern). Let's say you have an object x and want to change the behaviour of just one of its methods. You can create a new class that provides a new implementation of the method you're interested in changing and delegates all other methods to the corresponding method of x.

Python programmers can easily implement delegation. For example, the following class implements a class that behaves like a file but converts all written data to uppercase:

class UpperOut:
    def __init__(self, outfile):
        self._outfile = outfile

    def write(self, s):
        self._outfile.write(s.upper())

    def __getattr__(self, name):
        return getattr(self._outfile, name)

Here the UpperOut class redefines the write() method to convert the argument string to uppercase before calling the underlying self._outfile.write() method. All other methods are delegated to the underlying self._outfile object. The delegation is accomplished via the __getattr__() method; consult the language reference for more information about controlling attribute access.

Note that for more general cases delegation can get trickier. When attributes must be set as well as retrieved, the class must define a __setattr__() method too, and it must do so carefully.
The basic implementation of\n__setattr__()\nis roughly equivalent to the following:\nclass X:\n...\ndef __setattr__(self, name, value):\nself.__dict__[name] = value\n...\nMany __setattr__()\nimplementations call object.__setattr__()\nto set\nan attribute on self without causing infinite recursion:\nclass X:\ndef __setattr__(self, name, value):\n# Custom logic here...\nobject.__setattr__(self, name, value)\nAlternatively, it is possible to set attributes by inserting\nentries into self.__dict__\ndirectly.\nHow do I call a method defined in a base class from a derived class that extends it?\u00b6\nUse the built-in super()\nfunction:\nclass Derived(Base):\ndef meth(self):\nsuper().meth() # calls Base.meth\nIn the example, super()\nwill automatically determine the instance from\nwhich it was called (the self\nvalue), look up the method resolution\norder (MRO) with type(self).__mro__\n, and return the next in line after\nDerived\nin the MRO: Base\n.\nHow can I organize my code to make it easier to change the base class?\u00b6\nYou could assign the base class to an alias and derive from the alias. Then all you have to change is the value assigned to the alias. Incidentally, this trick is also handy if you want to decide dynamically (e.g. depending on availability of resources) which base class to use. Example:\nclass Base:\n...\nBaseAlias = Base\nclass Derived(BaseAlias):\n...\nHow do I create static class data and static class methods?\u00b6\nBoth static data and static methods (in the sense of C++ or Java) are supported in Python.\nFor static data, simply define a class attribute. 
To assign a new value to the attribute, you have to explicitly use the class name in the assignment:\nclass C:\ncount = 0 # number of times C.__init__ called\ndef __init__(self):\nC.count = C.count + 1\ndef getcount(self):\nreturn C.count # or return self.count\nc.count\nalso refers to C.count\nfor any c\nsuch that isinstance(c,\nC)\nholds, unless overridden by c\nitself or by some class on the base-class\nsearch path from c.__class__\nback to C\n.\nCaution: within a method of C, an assignment like self.count = 42\ncreates a\nnew and unrelated instance named \u201ccount\u201d in self\n\u2019s own dict. Rebinding of a\nclass-static data name must always specify the class whether inside a method or\nnot:\nC.count = 314\nStatic methods are possible:\nclass C:\n@staticmethod\ndef static(arg1, arg2, arg3):\n# No 'self' parameter!\n...\nHowever, a far more straightforward way to get the effect of a static method is via a simple module-level function:\ndef getcount():\nreturn C.count\nIf your code is structured so as to define one class (or tightly related class hierarchy) per module, this supplies the desired encapsulation.\nHow can I overload constructors (or methods) in Python?\u00b6\nThis answer actually applies to all methods, but the question usually comes up first in the context of constructors.\nIn C++ you\u2019d write\nclass C {\nC() { cout << \"No arguments\\n\"; }\nC(int i) { cout << \"Argument is \" << i << \"\\n\"; }\n}\nIn Python you have to write a single constructor that catches all cases using default arguments. 
For example:\nclass C:\ndef __init__(self, i=None):\nif i is None:\nprint(\"No arguments\")\nelse:\nprint(\"Argument is\", i)\nThis is not entirely equivalent, but close enough in practice.\nYou could also try a variable-length argument list, e.g.\ndef __init__(self, *args):\n...\nThe same approach works for all method definitions.\nI try to use __spam and I get an error about _SomeClassName__spam.\u00b6\nVariable names with double leading underscores are \u201cmangled\u201d to provide a simple\nbut effective way to define class private variables. Any identifier of the form\n__spam\n(at least two leading underscores, at most one trailing underscore)\nis textually replaced with _classname__spam\n, where classname\nis the\ncurrent class name with any leading underscores stripped.\nThe identifier can be used unchanged within the class, but to access it outside the class, the mangled name must be used:\nclass A:\ndef __one(self):\nreturn 1\ndef two(self):\nreturn 2 * self.__one()\nclass B(A):\ndef three(self):\nreturn 3 * self._A__one()\nfour = 4 * A()._A__one()\nIn particular, this does not guarantee privacy since an outside user can still deliberately access the private attribute; many Python programmers never bother to use private variable names at all.\nSee also\nThe private name mangling specifications for details and special cases.\nMy class defines __del__ but it is not called when I delete the object.\u00b6\nThere are several possible reasons for this.\nThe del\nstatement does not necessarily call __del__()\n\u2013 it simply\ndecrements the object\u2019s reference count, and if this reaches zero\n__del__()\nis called.\nIf your data structures contain circular links (e.g. a tree where each child has\na parent reference and each parent has a list of children) the reference counts\nwill never go back to zero. 
Once in a while Python runs an algorithm to detect\nsuch cycles, but the garbage collector might run some time after the last\nreference to your data structure vanishes, so your __del__()\nmethod may be\ncalled at an inconvenient and random time. This is inconvenient if you\u2019re trying\nto reproduce a problem. Worse, the order in which object\u2019s __del__()\nmethods are executed is arbitrary. You can run gc.collect()\nto force a\ncollection, but there are pathological cases where objects will never be\ncollected.\nDespite the cycle collector, it\u2019s still a good idea to define an explicit\nclose()\nmethod on objects to be called whenever you\u2019re done with them. The\nclose()\nmethod can then remove attributes that refer to subobjects. Don\u2019t\ncall __del__()\ndirectly \u2013 __del__()\nshould call close()\nand\nclose()\nshould make sure that it can be called more than once for the same\nobject.\nAnother way to avoid cyclical references is to use the weakref\nmodule,\nwhich allows you to point to objects without incrementing their reference count.\nTree data structures, for instance, should use weak references for their parent\nand sibling references (if they need them!).\nFinally, if your __del__()\nmethod raises an exception, a warning message\nis printed to sys.stderr\n.\nHow do I get a list of all instances of a given class?\u00b6\nPython does not keep track of all instances of a class (or of a built-in type). You can program the class\u2019s constructor to keep track of all instances by keeping a list of weak references to each instance.\nWhy does the result of id()\nappear to be not unique?\u00b6\nThe id()\nbuiltin returns an integer that is guaranteed to be unique during\nthe lifetime of the object. Since in CPython, this is the object\u2019s memory\naddress, it happens frequently that after an object is deleted from memory, the\nnext freshly created object is allocated at the same position in memory. 
This\nis illustrated by this example:\n>>> id(1000)\n13901272\n>>> id(2000)\n13901272\nThe two ids belong to different integer objects that are created before, and\ndeleted immediately after execution of the id()\ncall. To be sure that\nobjects whose id you want to examine are still alive, create another reference\nto the object:\n>>> a = 1000; b = 2000\n>>> id(a)\n13901272\n>>> id(b)\n13891296\nWhen can I rely on identity tests with the is operator?\u00b6\nThe is\noperator tests for object identity. The test a is b\nis\nequivalent to id(a) == id(b)\n.\nThe most important property of an identity test is that an object is always\nidentical to itself, a is a\nalways returns True\n. Identity tests are\nusually faster than equality tests. And unlike equality tests, identity tests\nare guaranteed to return a boolean True\nor False\n.\nHowever, identity tests can only be substituted for equality tests when object identity is assured. Generally, there are three circumstances where identity is guaranteed:\nAssignments create new names but do not change object identity. After the assignment\nnew = old\n, it is guaranteed thatnew is old\n.Putting an object in a container that stores object references does not change object identity. After the list assignment\ns[0] = x\n, it is guaranteed thats[0] is x\n.If an object is a singleton, it means that only one instance of that object can exist. After the assignments\na = None\nandb = None\n, it is guaranteed thata is b\nbecauseNone\nis a singleton.\nIn most other circumstances, identity tests are inadvisable and equality tests\nare preferred. 
In particular, identity tests should not be used to check\nconstants such as int\nand str\nwhich aren\u2019t guaranteed to be\nsingletons:\n>>> a = 1000\n>>> b = 500\n>>> c = b + 500\n>>> a is c\nFalse\n>>> a = 'Python'\n>>> b = 'Py'\n>>> c = b + 'thon'\n>>> a is c\nFalse\nLikewise, new instances of mutable containers are never identical:\n>>> a = []\n>>> b = []\n>>> a is b\nFalse\nIn the standard library code, you will see several common patterns for correctly using identity tests:\nAs recommended by PEP 8, an identity test is the preferred way to check for\nNone\n. This reads like plain English in code and avoids confusion with other objects that may have boolean values that evaluate to false.\nDetecting optional arguments can be tricky when\nNone\nis a valid input value. In those situations, you can create a singleton sentinel object guaranteed to be distinct from other objects. For example, here is how to implement a method that behaves like\ndict.pop()\n:\n_sentinel = object()\ndef pop(self, key, default=_sentinel):\nif key in self:\nvalue = self[key]\ndel self[key]\nreturn value\nif default is _sentinel:\nraise KeyError(key)\nreturn default\nContainer implementations sometimes need to augment equality tests with identity tests. This prevents the code from being confused by objects such as\nfloat('NaN')\nthat are not equal to themselves.\nFor example, here is the implementation of\ncollections.abc.Sequence.__contains__()\n:\ndef __contains__(self, value):\nfor v in self:\nif v is value or v == value:\nreturn True\nreturn False\nHow can a subclass control what data is stored in an immutable instance?\u00b6\nWhen subclassing an immutable type, override the __new__()\nmethod\ninstead of the __init__()\nmethod. 
The latter only runs after an\ninstance is created, which is too late to alter data in an immutable\ninstance.\nAll of these immutable classes have a different signature than their parent class:\nfrom datetime import date\nclass FirstOfMonthDate(date):\n\"Always choose the first day of the month\"\ndef __new__(cls, year, month, day):\nreturn super().__new__(cls, year, month, 1)\nclass NamedInt(int):\n\"Allow text names for some numbers\"\nxlat = {'zero': 0, 'one': 1, 'ten': 10}\ndef __new__(cls, value):\nvalue = cls.xlat.get(value, value)\nreturn super().__new__(cls, value)\nclass TitleStr(str):\n\"Convert str to name suitable for a URL path\"\ndef __new__(cls, s):\ns = s.lower().replace(' ', '-')\ns = ''.join([c for c in s if c.isalnum() or c == '-'])\nreturn super().__new__(cls, s)\nThe classes can be used like this:\n>>> FirstOfMonthDate(2012, 2, 14)\nFirstOfMonthDate(2012, 2, 1)\n>>> NamedInt('ten')\n10\n>>> NamedInt(20)\n20\n>>> TitleStr('Blog: Why Python Rocks')\n'blog-why-python-rocks'\nHow do I cache method calls?\u00b6\nThe two principal tools for caching methods are\nfunctools.cached_property()\nand functools.lru_cache()\n. The\nformer stores results at the instance level and the latter at the class\nlevel.\nThe cached_property approach only works with methods that do not take any arguments. It does not create a reference to the instance. The cached method result will be kept only as long as the instance is alive.\nThe advantage is that when an instance is no longer used, the cached method result will be released right away. The disadvantage is that if instances accumulate, so too will the accumulated method results. They can grow without bound.\nThe lru_cache approach works with methods that have hashable arguments. It creates a reference to the instance unless special efforts are made to pass in weak references.\nThe advantage of the least recently used algorithm is that the cache is bounded by the specified maxsize. 
The disadvantage is that instances are kept alive until they age out of the cache or until the cache is cleared.\nThis example shows the various techniques:\nclass Weather:\n\"Lookup weather information on a government website\"\ndef __init__(self, station_id):\nself._station_id = station_id\n# The _station_id is private and immutable\ndef current_temperature(self):\n\"Latest hourly observation\"\n# Do not cache this because old results\n# can be out of date.\n@cached_property\ndef location(self):\n\"Return the longitude/latitude coordinates of the station\"\n# Result only depends on the station_id\n@lru_cache(maxsize=20)\ndef historic_rainfall(self, date, units='mm'):\n\"Rainfall on a given date\"\n# Depends on the station_id, date, and units.\nThe above example assumes that the station_id never changes. If the relevant instance attributes are mutable, the cached_property approach can\u2019t be made to work because it cannot detect changes to the attributes.\nTo make the lru_cache approach work when the station_id is mutable,\nthe class needs to define the __eq__()\nand __hash__()\nmethods so that the cache can detect relevant attribute updates:\nclass Weather:\n\"Example with a mutable station identifier\"\ndef __init__(self, station_id):\nself.station_id = station_id\ndef change_station(self, station_id):\nself.station_id = station_id\ndef __eq__(self, other):\nreturn self.station_id == other.station_id\ndef __hash__(self):\nreturn hash(self.station_id)\n@lru_cache(maxsize=20)\ndef historic_rainfall(self, date, units='cm'):\n'Rainfall on a given date'\n# Depends on the station_id, date, and units.\nModules\u00b6\nHow do I create a .pyc file?\u00b6\nWhen a module is imported for the first time (or when the source file has\nchanged since the current compiled file was created) a .pyc\nfile containing\nthe compiled code should be created in a __pycache__\nsubdirectory of the\ndirectory containing the .py\nfile. 
The .pyc\nfile will have a\nfilename that starts with the same name as the .py\nfile, and ends with\n.pyc\n, with a middle component that depends on the particular python\nbinary that created it. (See PEP 3147 for details.)\nOne reason that a .pyc\nfile may not be created is a permissions problem\nwith the directory containing the source file, meaning that the __pycache__\nsubdirectory cannot be created. This can happen, for example, if you develop as\none user but run as another, such as if you are testing with a web server.\nUnless the PYTHONDONTWRITEBYTECODE\nenvironment variable is set,\ncreation of a .pyc file is automatic if you\u2019re importing a module and Python\nhas the ability (permissions, free space, etc\u2026) to create a __pycache__\nsubdirectory and write the compiled module to that subdirectory.\nRunning Python on a top level script is not considered an import and no\n.pyc\nwill be created. For example, if you have a top-level module\nfoo.py\nthat imports another module xyz.py\n, when you run foo\n(by\ntyping python foo.py\nas a shell command), a .pyc\nwill be created for\nxyz\nbecause xyz\nis imported, but no .pyc\nfile will be created for\nfoo\nsince foo.py\nisn\u2019t being imported.\nIf you need to create a .pyc\nfile for foo\n\u2013 that is, to create a\n.pyc\nfile for a module that is not imported \u2013 you can, using the\npy_compile\nand compileall\nmodules.\nThe py_compile\nmodule can manually compile any module. One way is to use\nthe compile()\nfunction in that module interactively:\n>>> import py_compile\n>>> py_compile.compile('foo.py')\nThis will write the .pyc\nto a __pycache__\nsubdirectory in the same\nlocation as foo.py\n(or you can override that with the optional parameter\ncfile\n).\nYou can also automatically compile all files in a directory or directories using\nthe compileall\nmodule. 
You can do it from the shell prompt by running\ncompileall.py\nand providing the path of a directory containing Python files\nto compile:\npython -m compileall .\nHow do I find the current module name?\u00b6\nA module can find out its own module name by looking at the predefined global\nvariable __name__\n. If this has the value '__main__'\n, the program is\nrunning as a script. Many modules that are usually used by importing them also\nprovide a command-line interface or a self-test, and only execute this code\nafter checking __name__\n:\ndef main():\nprint('Running test...')\n...\nif __name__ == '__main__':\nmain()\nHow can I have modules that mutually import each other?\u00b6\nSuppose you have the following modules:\nfoo.py\n:\nfrom bar import bar_var\nfoo_var = 1\nbar.py\n:\nfrom foo import foo_var\nbar_var = 2\nThe problem is that the interpreter will perform the following steps:\nmain imports\nfoo\nEmpty globals for\nfoo\nare created\nfoo\nis compiled and starts executing\nfoo\nimports\nbar\nEmpty globals for\nbar\nare created\nbar\nis compiled and starts executing\nbar\nimports\nfoo\n(which is a no-op since there already is a module named\nfoo\n)\nThe import mechanism tries to read\nfoo_var\nfrom\nfoo\nglobals, to set\nbar.foo_var = foo.foo_var\nThe last step fails, because Python isn\u2019t done with interpreting foo\nyet and\nthe global symbol dictionary for foo\nis still empty.\nThe same thing happens when you use import foo\n, and then try to access\nfoo.foo_var\nin global code.\nThere are (at least) three possible workarounds for this problem.\nGuido van Rossum recommends avoiding all uses of from <module> import ...\n,\nand placing all code inside functions. Initializations of global variables and\nclass variables should use constants or built-in functions only. 
This means\neverything from an imported module is referenced as <module>.<name>.\nJim Roskind suggests performing steps in the following order in each module:\nexports (globals, functions, and classes that don\u2019t need imported base classes)\nimport\nstatements\nactive code (including globals that are initialized from imported values).\nVan Rossum doesn\u2019t like this approach much because the imports appear in a strange place, but it does work.\nMatthias Urlichs recommends restructuring your code so that the recursive import is not necessary in the first place.\nThese solutions are not mutually exclusive.\n__import__(\u2018x.y.z\u2019) returns <module 'x'>; how do I get z?\u00b6\nConsider using the convenience function import_module()\nfrom\nimportlib\ninstead:\nz = importlib.import_module('x.y.z')\nWhen I edit an imported module and reimport it, the changes don\u2019t show up. Why does this happen?\u00b6\nFor reasons of efficiency as well as consistency, Python only reads the module file on the first time a module is imported. If it didn\u2019t, in a program consisting of many modules where each one imports the same basic module, the basic module would be parsed and re-parsed many times. To force re-reading of a changed module, do this:\nimport importlib\nimport modname\nimportlib.reload(modname)\nWarning: this technique is not 100% fool-proof. In particular, modules containing statements like\nfrom modname import some_objects\nwill continue to work with the old version of the imported objects. If the module contains class definitions, existing class instances will not be updated to use the new class definition. 
This can result in the following paradoxical behaviour:\n>>> import importlib\n>>> import cls\n>>> c = cls.C() # Create an instance of C\n>>> importlib.reload(cls)\n<module 'cls' from 'cls.py'>\n>>> isinstance(c, cls.C) # isinstance is false?!?\nFalse\nThe nature of the problem is made clear if you print out the \u201cidentity\u201d of the class objects:\n>>> hex(id(c.__class__))\n'0x7352a0'\n>>> hex(id(cls.C))\n'0x4198d0'", "code_snippets": [
"\n ", "\n ", "\n ", "\n\n ", "\n ", "\n ", "\n ", "\n\n ", "\n ", " ", " ", "\n ", "\n ", "\n", "\n ", "\n\n ", " ", "\n ", " ", " ", "\n\n ", " ", "\n ", " ", " ", "\n\n ", " ", "\n ", " ", " ", " ", "\n\n ", "\n ", " ", "\n\n ", "\n ", " ", " ", "\n ", "\n ", "\n", "\n", "\n", " ", " ", " ", "\n", "\n ", "\n ", "\n\n", " ", " ", " ", "\n ", "\n", " ", "\n", " ", " ", "\n", " ", "\n", " ", " ", "\n", " ", " ", "\n", "\n", "\n", "\n", " ", "\n", "\n", "\n", " ", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 16050} +{"url": "https://docs.python.org/3/c-api/concrete.html", "title": "Concrete Objects Layer", "content": "Concrete Objects Layer\u00b6\nThe functions in this chapter are specific to certain Python object types.\nPassing them an object of the wrong type is not a good idea; if you receive an\nobject from a Python program and you are not sure that it has the right type,\nyou must perform a type check first; for example, to check that an object is a\ndictionary, use PyDict_Check()\n. The chapter is structured like the\n\u201cfamily tree\u201d of Python object types.\nWarning\nWhile the functions described in this chapter carefully check the type of the\nobjects which are passed in, many of them do not check for NULL\nbeing passed\ninstead of a valid object. 
Allowing NULL\nto be passed in can cause memory\naccess violations and immediate termination of the interpreter.\nFundamental Objects\u00b6\nThis section describes Python type objects and the singleton object None\n.\nNumeric Objects\u00b6\nSequence Objects\u00b6\nGeneric operations on sequence objects were discussed in the previous chapter; this section deals with the specific kinds of sequence objects that are intrinsic to the Python language.\nContainer Objects\u00b6\nFunction Objects\u00b6\nOther Objects\u00b6\n- File Objects\n- Module Objects\n- Module definitions\n- Creating extension modules dynamically\n- Support functions\n- Iterator Objects\n- Descriptor Objects\n- Slice Objects\n- MemoryView objects\n- Pickle buffer objects\n- Weak Reference Objects\n- Capsules\n- Frame Objects\n- Generator Objects\n- Coroutine Objects\n- Context Variables Objects\n- Objects for Type Hinting", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 368} +{"url": "https://docs.python.org/3/whatsnew/2.1.html", "title": "What\u2019s New in Python 2.1", "content": "What\u2019s New in Python 2.1\u00b6\n- Author:\nA.M. Kuchling\nIntroduction\u00b6\nThis article explains the new features in Python 2.1. While there aren\u2019t as many changes in 2.1 as there were in Python 2.0, there are still some pleasant surprises in store. 2.1 is the first release to be steered through the use of Python Enhancement Proposals, or PEPs, so most of the sizable changes have accompanying PEPs that provide more complete documentation and a design rationale for the change. This article doesn\u2019t attempt to document the new features completely, but simply provides an overview of the new features for Python programmers. 
Refer to the Python 2.1 documentation, or to the specific PEP, for more details about any new feature that particularly interests you.\nOne recent goal of the Python development team has been to accelerate the pace of new releases, with a new release coming every 6 to 9 months. 2.1 is the first release to come out at this faster pace, with the first alpha appearing in January, 3 months after the final version of 2.0 was released.\nThe final release of Python 2.1 was made on April 17, 2001.\nPEP 227: Nested Scopes\u00b6\nThe largest change in Python 2.1 is to Python\u2019s scoping rules. In Python 2.0, at any given time there are at most three namespaces used to look up variable names: local, module-level, and the built-in namespace. This often surprised people because it didn\u2019t match their intuitive expectations. For example, a nested recursive function definition doesn\u2019t work:\ndef f():\n...\ndef g(value):\n...\nreturn g(value-1) + 1\n...\nThe function g()\nwill always raise a NameError\nexception, because\nthe binding of the name g\nisn\u2019t in either its local namespace or in the\nmodule-level namespace. This isn\u2019t much of a problem in practice (how often do\nyou recursively define interior functions like this?), but this also made using\nthe lambda\nexpression clumsier, and this was a problem in practice.\nIn code which uses lambda\nyou can often find local variables being\ncopied by passing them as the default values of arguments.\ndef find(self, name):\n\"Return list of any entries equal to 'name'\"\nL = filter(lambda x, name=name: x == name,\nself.list_attribute)\nreturn L\nThe readability of Python code written in a strongly functional style suffers greatly as a result.\nThe most significant change to Python 2.1 is that static scoping has been added\nto the language to fix this problem. As a first effect, the name=name\ndefault argument is now unnecessary in the above example. 
Put simply, when a\ngiven variable name is not assigned a value within a function (by an assignment,\nor the def\n, class\n, or import\nstatements),\nreferences to the variable will be looked up in the local namespace of the\nenclosing scope. A more detailed explanation of the rules, and a dissection of\nthe implementation, can be found in the PEP.\nThis change may cause some compatibility problems for code where the same variable name is used both at the module level and as a local variable within a function that contains further function definitions. This seems rather unlikely though, since such code would have been pretty confusing to read in the first place.\nOne side effect of the change is that the from module import *\nand\nexec\nstatements have been made illegal inside a function scope under\ncertain conditions. The Python reference manual has said all along that from\nmodule import *\nis only legal at the top level of a module, but the CPython\ninterpreter has never enforced this before. As part of the implementation of\nnested scopes, the compiler which turns Python source into bytecodes has to\ngenerate different code to access variables in a containing scope. from\nmodule import *\nand exec\nmake it impossible for the compiler to\nfigure this out, because they add names to the local namespace that are\nunknowable at compile time. 
Therefore, if a function contains function\ndefinitions or lambda\nexpressions with free variables, the compiler\nwill flag this by raising a SyntaxError\nexception.\nTo make the preceding explanation a bit clearer, here\u2019s an example:\nx = 1\ndef f():\n# The next line is a syntax error\nexec 'x=2'\ndef g():\nreturn x\nLine 4 containing the exec\nstatement is a syntax error, since\nexec\nwould define a new local variable named x\nwhose value should\nbe accessed by g()\n.\nThis shouldn\u2019t be much of a limitation, since exec\nis rarely used in\nmost Python code (and when it is used, it\u2019s often a sign of a poor design\nanyway).\nCompatibility concerns have led to nested scopes being introduced gradually; in Python 2.1, they aren\u2019t enabled by default, but can be turned on within a module by using a future statement as described in PEP 236. (See the following section for further discussion of PEP 236.) In Python 2.2, nested scopes will become the default and there will be no way to turn them off, but users will have had all of 2.1\u2019s lifetime to fix any breakage resulting from their introduction.\nSee also\n- PEP 227 - Statically Nested Scopes\nWritten and implemented by Jeremy Hylton.\nPEP 236: __future__ Directives\u00b6\nThe reaction to nested scopes was widespread concern about the dangers of breaking code with the 2.1 release, and it was strong enough to make the Pythoneers take a more conservative approach. This approach consists of introducing a convention for enabling optional functionality in release N that will become compulsory in release N+1.\nThe syntax uses a from...import\nstatement using the reserved module name\n__future__\n. Nested scopes can be enabled by the following statement:\nfrom __future__ import nested_scopes\nWhile it looks like a normal import\nstatement, it\u2019s not; there are\nstrict rules on where such a future statement can be put. 
They can only be at\nthe top of a module, and must precede any Python code or regular\nimport\nstatements. This is because such statements can affect how\nthe Python bytecode compiler parses code and generates bytecode, so they must\nprecede any statement that will result in bytecodes being produced.\nSee also\n- PEP 236 - Back to the\n__future__\nWritten by Tim Peters, and primarily implemented by Jeremy Hylton.\nPEP 207: Rich Comparisons\u00b6\nIn earlier versions, Python\u2019s support for implementing comparisons on user-defined\nclasses and extension types was quite simple. Classes could implement a\n__cmp__()\nmethod that was given two instances of a class, and could only\nreturn 0 if they were equal or +1 or -1 if they weren\u2019t; the method couldn\u2019t\nraise an exception or return anything other than a Boolean value. Users of\nNumeric Python often found this model too weak and restrictive, because in the\nnumber-crunching programs that numeric Python is used for, it would be more\nuseful to be able to perform elementwise comparisons of two matrices, returning\na matrix containing the results of a given comparison for each element. If the\ntwo matrices are of different sizes, then the compare has to be able to raise an\nexception to signal the error.\nIn Python 2.1, rich comparisons were added in order to support this need.\nPython classes can now individually overload each of the <\n, <=\n, >\n,\n>=\n, ==\n, and !=\noperations. The new magic method names are:\nOperation |\nMethod name |\n|---|---|\n| < | __lt__() |\n| <= | __le__() |\n| > | __gt__() |\n| >= | __ge__() |\n| == | __eq__() |\n| != | __ne__() |\n(The magic methods are named after the corresponding Fortran operators .LT.\n,\n.LE.\n, &c. Numeric programmers are almost certainly quite familiar with\nthese names and will find them easy to remember.)\nEach of these magic methods is of the form method(self, other)\n, where\nself\nwill be the object on the left-hand side of the operator, while\nother\nwill be the object on the right-hand side. 
For example, the\nexpression A < B\nwill cause A.__lt__(B)\nto be called.\nEach of these magic methods can return anything at all: a Boolean, a matrix, a list, or any other Python object. Alternatively they can raise an exception if the comparison is impossible, inconsistent, or otherwise meaningless.\nThe built-in cmp(A,B)\nfunction can use the rich comparison machinery,\nand now accepts an optional argument specifying which comparison operation to\nuse; this is given as one of the strings \"<\"\n, \"<=\"\n, \">\"\n, \">=\"\n,\n\"==\"\n, or \"!=\"\n. If called without the optional third argument,\ncmp()\nwill only return -1, 0, or +1 as in previous versions of Python;\notherwise it will call the appropriate method and can return any Python object.\nThere are also corresponding changes of interest to C programmers; there\u2019s a new\nslot tp_richcmp\nin type objects and an API for performing a given rich\ncomparison. I won\u2019t cover the C API here, but will refer you to PEP 207, or to\n2.1\u2019s C API documentation, for the full list of related functions.\nSee also\n- PEP 207 - Rich Comparisons\nWritten by Guido van Rossum, heavily based on earlier work by David Ascher, and implemented by Guido van Rossum.\nPEP 230: Warning Framework\u00b6\nOver its 10 years of existence, Python has accumulated a certain number of obsolete modules and features along the way. It\u2019s difficult to know when a feature is safe to remove, since there\u2019s no way of knowing how much code uses it \u2014 perhaps no programs depend on the feature, or perhaps many do. To enable removing old features in a more structured way, a warning framework was added. When the Python developers want to get rid of a feature, it will first trigger a warning in the next version of Python. The following Python version can then drop the feature, and users will have had a full release cycle to remove uses of the old feature.\nPython 2.1 adds the warning framework to be used in this scheme. 
It adds a\nwarnings\nmodule that provide functions to issue warnings, and to filter\nout warnings that you don\u2019t want to be displayed. Third-party modules can also\nuse this framework to deprecate old features that they no longer wish to\nsupport.\nFor example, in Python 2.1 the regex\nmodule is deprecated, so importing\nit causes a warning to be printed:\n>>> import regex\n__main__:1: DeprecationWarning: the regex module\nis deprecated; please use the re module\n>>>\nWarnings can be issued by calling the warnings.warn()\nfunction:\nwarnings.warn(\"feature X no longer supported\")\nThe first parameter is the warning message; an additional optional parameters can be used to specify a particular warning category.\nFilters can be added to disable certain warnings; a regular expression pattern\ncan be applied to the message or to the module name in order to suppress a\nwarning. For example, you may have a program that uses the regex\nmodule\nand not want to spare the time to convert it to use the re\nmodule right\nnow. The warning can be suppressed by calling\nimport warnings\nwarnings.filterwarnings(action = 'ignore',\nmessage='.*regex module is deprecated',\ncategory=DeprecationWarning,\nmodule = '__main__')\nThis adds a filter that will apply only to warnings of the class\nDeprecationWarning\ntriggered in the __main__\nmodule, and applies\na regular expression to only match the message about the regex\nmodule\nbeing deprecated, and will cause such warnings to be ignored. 
Warnings can also\nbe printed only once, printed every time the offending code is executed, or\nturned into exceptions that will cause the program to stop (unless the\nexceptions are caught in the usual way, of course).\nFunctions were also added to Python\u2019s C API for issuing warnings; refer to PEP 230 or to Python\u2019s API documentation for the details.\nSee also\n- PEP 5 - Guidelines for Language Evolution\nWritten by Paul Prescod, to specify procedures to be followed when removing old features from Python. The policy described in this PEP hasn\u2019t been officially adopted, but the eventual policy probably won\u2019t be too different from Prescod\u2019s proposal.\n- PEP 230 - Warning Framework\nWritten and implemented by Guido van Rossum.\nPEP 229: New Build System\u00b6\nWhen compiling Python, the user had to go in and edit the Modules/Setup\nfile in order to enable various additional modules; the default set is\nrelatively small and limited to modules that compile on most Unix platforms.\nThis means that on Unix platforms with many more features, most notably Linux,\nPython installations often don\u2019t contain all useful modules they could.\nPython 2.0 added the Distutils, a set of modules for distributing and installing extensions. In Python 2.1, the Distutils are used to compile much of the standard library of extension modules, autodetecting which ones are supported on the current machine. It\u2019s hoped that this will make Python installations easier and more featureful.\nInstead of having to edit the Modules/Setup\nfile in order to enable\nmodules, a setup.py\nscript in the top directory of the Python source\ndistribution is run at build time, and attempts to discover which modules can be\nenabled by examining the modules and header files on the system. If a module is\nconfigured in Modules/Setup\n, the setup.py\nscript won\u2019t attempt\nto compile that module and will defer to the Modules/Setup\nfile\u2019s\ncontents. 
This provides a way to specify any strange command-line flags or\nlibraries that are required for a specific platform.\nIn another far-reaching change to the build mechanism, Neil Schemenauer\nrestructured things so Python now uses a single makefile that isn\u2019t recursive,\ninstead of makefiles in the top directory and in each of the Python/\n,\nParser/\n, Objects/\n, and Modules/\nsubdirectories. This\nmakes building Python faster and also makes hacking the Makefiles clearer and\nsimpler.\nSee also\n- PEP 229 - Using Distutils to Build Python\nWritten and implemented by A.M. Kuchling.\nPEP 205: Weak References\u00b6\nWeak references, available through the weakref\nmodule, are a minor but\nuseful new data type in the Python programmer\u2019s toolbox.\nStoring a reference to an object (say, in a dictionary or a list) has the side effect of keeping that object alive forever. There are a few specific cases where this behaviour is undesirable, object caches being the most common one, and another being circular references in data structures such as trees.\nFor example, consider a memoizing function that caches the results of another\nfunction f(x)\nby storing the function\u2019s argument and its result in a\ndictionary:\n_cache = {}\ndef memoize(x):\nif _cache.has_key(x):\nreturn _cache[x]\nretval = f(x)\n# Cache the returned object\n_cache[x] = retval\nreturn retval\nThis version works for simple things such as integers, but it has a side effect;\nthe _cache\ndictionary holds a reference to the return values, so they\u2019ll\nnever be deallocated until the Python process exits and cleans up. This isn\u2019t\nvery noticeable for integers, but if f()\nreturns an object, or a data\nstructure that takes up a lot of memory, this can be a problem.\nWeak references provide a way to implement a cache that won\u2019t keep objects alive\nbeyond their time. 
If an object is only accessible through weak references, the\nobject will be deallocated and the weak references will now indicate that the\nobject it referred to no longer exists. A weak reference to an object obj is\ncreated by calling wr = weakref.ref(obj)\n. The object being referred to is\nreturned by calling the weak reference as if it were a function: wr()\n. It\nwill return the referenced object, or None\nif the object no longer exists.\nThis makes it possible to write a memoize()\nfunction whose cache doesn\u2019t\nkeep objects alive, by storing weak references in the cache.\n_cache = {}\ndef memoize(x):\nif _cache.has_key(x):\nobj = _cache[x]()\n# If weak reference object still exists,\n# return it\nif obj is not None: return obj\nretval = f(x)\n# Cache a weak reference\n_cache[x] = weakref.ref(retval)\nreturn retval\nThe weakref\nmodule also allows creating proxy objects which behave like\nweak references \u2014 an object referenced only by proxy objects is deallocated \u2013\nbut instead of requiring an explicit call to retrieve the object, the proxy\ntransparently forwards all operations to the object as long as the object still\nexists. If the object is deallocated, attempting to use a proxy will cause a\nweakref.ReferenceError\nexception to be raised.\nproxy = weakref.proxy(obj)\nproxy.attr # Equivalent to obj.attr\nproxy.meth() # Equivalent to obj.meth()\ndel obj\nproxy.attr # raises weakref.ReferenceError\nSee also\n- PEP 205 - Weak References\nWritten and implemented by Fred L. Drake, Jr.\nPEP 232: Function Attributes\u00b6\nIn Python 2.1, functions can now have arbitrary information attached to them.\nPeople were often using docstrings to hold information about functions and\nmethods, because the __doc__\nattribute was the only way of\nattaching any\ninformation to a function. 
For example, in the Zope web application server,\nfunctions are marked as safe for public access by having a docstring, and in\nJohn Aycock\u2019s SPARK parsing framework, docstrings hold parts of the BNF grammar\nto be parsed. This overloading is unfortunate, since docstrings are really\nintended to hold a function\u2019s documentation; for example, it means you can\u2019t\nproperly document functions intended for private use in Zope.\nArbitrary attributes can now be set and retrieved on functions using the regular Python syntax:\ndef f(): pass\nf.publish = 1\nf.secure = 1\nf.grammar = \"A ::= B (C D)*\"\nThe dictionary containing attributes can be accessed as the function\u2019s\n__dict__\n. Unlike the __dict__\nattribute of class instances, in\nfunctions you can actually assign a new dictionary to __dict__\n, though\nthe new value is restricted to a regular Python dictionary; you can\u2019t be\ntricky and set it to a UserDict\ninstance, or any other random object\nthat behaves like a mapping.\nSee also\n- PEP 232 - Function Attributes\nWritten and implemented by Barry Warsaw.\nPEP 235: Importing Modules on Case-Insensitive Platforms\u00b6\nSome operating systems have filesystems that are case-insensitive, MacOS and\nWindows being the primary examples; on these systems, it\u2019s impossible to\ndistinguish the filenames FILE.PY\nand file.py\n, even though they do store\nthe file\u2019s name in its original case (they\u2019re case-preserving, too).\nIn Python 2.1, the import\nstatement will work to simulate case-sensitivity\non case-insensitive platforms. 
Python will now search for the first\ncase-sensitive match by default, raising an ImportError\nif no such file\nis found, so import file\nwill not import a module named FILE.PY\n.\nCase-insensitive matching can be requested by setting the PYTHONCASEOK\nenvironment variable before starting the Python interpreter.\nPEP 217: Interactive Display Hook\u00b6\nWhen using the Python interpreter interactively, the output of commands is\ndisplayed using the built-in repr()\nfunction. In Python 2.1, the variable\nsys.displayhook()\ncan be set to a callable object which will be called\ninstead of repr()\n. For example, you can set it to a special\npretty-printing function:\n>>> # Create a recursive data structure\n... L = [1,2,3]\n>>> L.append(L)\n>>> L # Show Python's default output\n[1, 2, 3, [...]]\n>>> # Use pprint.pprint() as the display function\n... import sys, pprint\n>>> sys.displayhook = pprint.pprint\n>>> L\n[1, 2, 3, ]\n>>>\nSee also\n- PEP 217 - Display Hook for Interactive Use\nWritten and implemented by Moshe Zadka.\nPEP 208: New Coercion Model\u00b6\nHow numeric coercion is done at the C level was significantly modified. 
This will only affect the authors of C extensions to Python, allowing them more flexibility in writing extension types that support numeric operations.\nExtension types can now set the type flag Py_TPFLAGS_CHECKTYPES\nin their\nPyTypeObject\nstructure to indicate that they support the new coercion model.\nIn such extension types, the numeric slot functions can no longer assume that\nthey\u2019ll be passed two arguments of the same type; instead they may be passed two\narguments of differing types, and can then perform their own internal coercion.\nIf the slot function is passed a type it can\u2019t handle, it can indicate the\nfailure by returning a reference to the Py_NotImplemented\nsingleton value.\nThe numeric functions of the other type will then be tried, and perhaps they can\nhandle the operation; if the other type also returns Py_NotImplemented\n, then\na TypeError\nwill be raised. Numeric methods written in Python can also\nreturn Py_NotImplemented\n, causing the interpreter to act as if the method\ndid not exist (perhaps raising a TypeError\n, perhaps trying another\nobject\u2019s numeric methods).\nSee also\n- PEP 208 - Reworking the Coercion Model\nWritten and implemented by Neil Schemenauer, heavily based upon earlier work by Marc-Andr\u00e9 Lemburg. Read this to understand the fine points of how numeric operations will now be processed at the C level.\nPEP 241: Metadata in Python Packages\u00b6\nA common complaint from Python users is that there\u2019s no single catalog of all\nthe Python modules in existence. T. 
Middleton\u2019s Vaults of Parnassus at\nwww.vex.net/parnassus/\n(retired in February 2009, available in the\nInternet Archive Wayback Machine)\nwas the largest catalog of Python modules, but\nregistering software at the Vaults is optional, and many people did not bother.\nAs a first small step toward fixing the problem, Python software packaged using\nthe Distutils sdist command will include a file named\nPKG-INFO\ncontaining information about the package such as its name,\nversion, and author (metadata, in cataloguing terminology). PEP 241 contains\nthe full list of fields that can be present in the PKG-INFO\nfile. As\npeople began to package their software using Python 2.1, more and more packages\nwill include metadata, making it possible to build automated cataloguing systems\nand experiment with them. With the resulting experience, perhaps it\u2019ll be possible\nto design a really good catalog and then build support for it into Python 2.2.\nFor example, the Distutils sdist and bdist_* commands\ncould support an upload\noption that would automatically upload your\npackage to a catalog server.\nYou can start creating packages containing PKG-INFO\neven if you\u2019re not\nusing Python 2.1, since a new release of the Distutils will be made for users of\nearlier Python versions. Version 1.0.2 of the Distutils includes the changes\ndescribed in PEP 241, as well as various bugfixes and enhancements. It will be\navailable from the Distutils SIG at https://www.python.org/community/sigs/current/distutils-sig/.\nNew and Improved Modules\u00b6\nKa-Ping Yee contributed two new modules:\ninspect.py\n, a module for getting information about live Python code, and pydoc.py\n, a module for interactively converting docstrings to HTML or text. As a bonus, Tools/scripts/pydoc\n, which is now automatically installed, uses pydoc.py\nto display documentation given a Python module, package, or class name. 
For example,pydoc xml.dom\ndisplays the following:Python Library Documentation: package xml.dom in xml NAME xml.dom - W3C Document Object Model implementation for Python. FILE /usr/local/lib/python2.1/xml/dom/__init__.pyc DESCRIPTION The Python mapping of the Document Object Model is documented in the Python Library Reference in the section on the xml.dom package. This package contains the following modules: ...\npydoc\nalso includes a Tk-based interactive help browser.pydoc\nquickly becomes addictive; try it out!Two different modules for unit testing were added to the standard library. The\ndoctest\nmodule, contributed by Tim Peters, provides a testing framework based on running embedded examples in docstrings and comparing the results against the expected output. PyUnit, contributed by Steve Purcell, is a unit testing framework inspired by JUnit, which was in turn an adaptation of Kent Beck\u2019s Smalltalk testing framework. See https://pyunit.sourceforge.net/ for more information about PyUnit.The\ndifflib\nmodule contains a class,SequenceMatcher\n, which compares two sequences and computes the changes required to transform one sequence into the other. For example, this module can be used to write a tool similar to the Unix diff program, and in fact the sample programTools/scripts/ndiff.py\ndemonstrates how to write such a script.curses.panel\n, a wrapper for the panel library, part of ncurses and of SYSV curses, was contributed by Thomas Gellekum. The panel library provides windows with the additional feature of depth. Windows can be moved higher or lower in the depth ordering, and the panel library figures out where panels overlap and which sections are visible.The PyXML package has gone through a few releases since Python 2.0, and Python 2.1 includes an updated version of the\nxml\npackage. 
Some of the noteworthy changes include support for Expat 1.2 and later versions, the ability for Expat parsers to handle files in any encoding supported by Python, and various bugfixes for SAX, DOM, and the minidom\nmodule. Ping also contributed another hook for handling uncaught exceptions.\nsys.excepthook()\ncan be set to a callable object. When an exception isn\u2019t caught by any try\n\u2026except\nblocks, the exception will be passed to sys.excepthook()\n, which can then do whatever it likes. At the Ninth Python Conference, Ping demonstrated an application for this hook: printing an extended traceback that not only lists the stack frames, but also lists the function arguments and the local variables for each frame. Various functions in the\ntime\nmodule, such as asctime()\nand localtime()\n, require a floating-point argument containing the time in seconds since the epoch. The most common use of these functions is to work with the current time, so the floating-point argument has been made optional; when a value isn\u2019t provided, the current time will be used. For example, log file entries usually need a string containing the current time; in Python 2.1, time.asctime()\ncan be used, instead of the lengthier time.asctime(time.localtime(time.time()))\nthat was previously required. This change was proposed and implemented by Thomas Wouters.\nThe\nftplib\nmodule now defaults to retrieving files in passive mode, because passive mode is more likely to work from behind a firewall. This request came from the Debian bug tracking system, since other Debian packages use ftplib\nto retrieve files and then don\u2019t work from behind a firewall. 
It\u2019s deemed unlikely that this will cause problems for anyone, because Netscape defaults to passive mode and few people complain, but if passive mode is unsuitable for your application or network setup, callset_pasv(0)\non FTP objects to disable passive mode.Support for raw socket access has been added to the\nsocket\nmodule, contributed by Grant Edwards.The\npstats\nmodule now contains a simple interactive statistics browser for displaying timing profiles for Python programs, invoked when the module is run as a script. Contributed by Eric S. Raymond.A new implementation-dependent function,\nsys._getframe([depth])\n, has been added to return a given frame object from the current call stack.sys._getframe()\nreturns the frame at the top of the call stack; if the optional integer argument depth is supplied, the function returns the frame that is depth calls below the top of the stack. For example,sys._getframe(1)\nreturns the caller\u2019s frame object.This function is only present in CPython, not in Jython or the .NET implementation. Use it for debugging, and resist the temptation to put it into production code.\nOther Changes and Fixes\u00b6\nThere were relatively few smaller changes made in Python 2.1 due to the shorter release cycle. A search through the CVS change logs turns up 117 patches applied, and 136 bugs fixed; both figures are likely to be underestimates. Some of the more notable changes are:\nA specialized object allocator is now optionally available, that should be faster than the system\nmalloc()\nand have less memory overhead. The allocator uses C\u2019smalloc()\nfunction to get large pools of memory, and then fulfills smaller memory requests from these pools. 
It can be enabled by providing the --with-pymalloc\noption to the configure script; see Objects/obmalloc.c\nfor the implementation details. Authors of C extension modules should test their code with the object allocator enabled, because some incorrect code may break, causing core dumps at runtime. There are a bunch of memory allocation functions in Python\u2019s C API that have previously been just aliases for the C library\u2019s\nmalloc()\nand free()\n, meaning that if you accidentally called mismatched functions, the error wouldn\u2019t be noticeable. When the object allocator is enabled, these functions aren\u2019t aliases of malloc()\nand free()\nany more, and calling the wrong function to free memory will get you a core dump. For example, if memory was allocated using PyMem_New\n, it has to be freed using PyMem_Del()\n, not free()\n. A few modules included with Python fell afoul of this and had to be fixed; doubtless there are more third-party modules that will have the same problem. The object allocator was contributed by Vladimir Marangozov.\nThe speed of line-oriented file I/O has been improved because people often complain about its lack of speed, and because it\u2019s often been used as a na\u00efve benchmark. The\nreadline()\nmethod of file objects has therefore been rewritten to be much faster. The exact amount of the speedup will vary from platform to platform depending on how slow the C library\u2019s\ngetc()\nwas, but is around 66%, and potentially much faster on some particular operating systems. Tim Peters did much of the benchmarking and coding for this change, motivated by a discussion in comp.lang.python. A new module and method for file objects was also added, contributed by Jeff Epler. 
The new method,\nxreadlines()\n, is similar to the existingxrange()\nbuilt-in.xreadlines()\nreturns an opaque sequence object that only supports being iterated over, reading a line on every iteration but not reading the entire file into memory as the existingreadlines()\nmethod does. You\u2019d use it like this:for line in sys.stdin.xreadlines(): # ... do something for each line ... ...\nFor a fuller discussion of the line I/O changes, see the python-dev summary for January 1\u201315, 2001 at https://mail.python.org/pipermail/python-dev/2001-January/.\nA new method,\npopitem()\n, was added to dictionaries to enable destructively iterating through the contents of a dictionary; this can be faster for large dictionaries because there\u2019s no need to construct a list containing all the keys or values.D.popitem()\nremoves a random(key, value)\npair from the dictionaryD\nand returns it as a 2-tuple. This was implemented mostly by Tim Peters and Guido van Rossum, after a suggestion and preliminary patch by Moshe Zadka.Modules can now control which names are imported when\nfrom module import *\nis used, by defining an__all__\nattribute containing a list of names that will be imported. One common complaint is that if the module imports other modules such assys\norstring\n,from module import *\nwill add them to the importing module\u2019s namespace. To fix this, simply list the public names in__all__\n:# List public names __all__ = ['Database', 'open']\nA stricter version of this patch was first suggested and implemented by Ben Wolfson, but after some python-dev discussion, a weaker final version was checked in.\nApplying\nrepr()\nto strings previously used octal escapes for non-printable characters; for example, a newline was'\\012'\n. This was a vestigial trace of Python\u2019s C ancestry, but today octal is of very little practical use. 
Ka-Ping Yee suggested using hex escapes instead of octal ones, and using the\\n\n,\\t\n,\\r\nescapes for the appropriate characters, and implemented this new formatting.Syntax errors detected at compile-time can now raise exceptions containing the filename and line number of the error, a pleasant side effect of the compiler reorganization done by Jeremy Hylton.\nC extensions which import other modules have been changed to use\nPyImport_ImportModule()\n, which means that they will use any import hooks that have been installed. This is also encouraged for third-party extensions that need to import some other module from C code.The size of the Unicode character database was shrunk by another 340K thanks to Fredrik Lundh.\nSome new ports were contributed: MacOS X (by Steven Majewski), Cygwin (by Jason Tishler); RISCOS (by Dietmar Schwertberger); Unixware 7 (by Billy G. Allie).\nAnd there\u2019s the usual list of minor bugfixes, minor memory leaks, docstring edits, and other tweaks, too lengthy to be worth itemizing; see the CVS logs for the full details if you want them.\nAcknowledgements\u00b6\nThe author would like to thank the following people for offering suggestions on various drafts of this article: Graeme Cross, David Goodger, Jay Graves, Michael Hudson, Marc-Andr\u00e9 Lemburg, Fredrik Lundh, Neil Schemenauer, Thomas Wouters.", "code_snippets": ["\n ", "\n ", "\n ", "\n ", " ", " ", " ", "\n ", "\n", " ", "\n ", "\n ", " ", " ", " ", " ", " ", " ", " ", "\n ", "\n ", " ", "\n", " ", " ", "\n", "\n ", "\n ", " ", "\n ", "\n ", " ", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n ", "\n ", "\n ", " ", " ", "\n", " ", " ", "\n", "\n ", " ", "\n ", " ", "\n\n ", " ", " ", "\n\n ", "\n ", " ", " ", "\n\n ", " ", "\n", " ", " ", "\n", "\n ", " ", "\n ", " ", " ", "\n ", "\n ", "\n ", " ", " ", " ", " ", " ", " ", "\n\n ", " ", " ", "\n\n ", "\n ", " ", " ", "\n\n ", " ", "\n", " ", " ", "\n", " ", "\n", " ", "\n", " ", "\n", " ", "\n", " ", "\n\n", 
" ", " ", "\n", " ", " ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", " ", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", " ", " ", " ", " ", " ", " ", "\n\n", "\n ", " ", " ", " ", " ", " ", " ", " ", " ", "\n\n", "\n ", "\n\n", "\n ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n\n ", " ", " ", " ", " ", " ", "\n ", "\n", " ", " ", " ", "\n ", "\n ", "\n", "\n", " ", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 8079} +{"url": "https://docs.python.org/3/library/msilib.html", "title": " \u2014 Read and write Microsoft Installer files", "content": "msilib\n\u2014 Read and write Microsoft Installer files\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nThe last version of Python that provided the msilib\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 87} +{"url": "https://docs.python.org/3/faq/index.html", "title": "Python Frequently Asked Questions", "content": "Theme\nAuto\nLight\nDark\nPrevious topic\nRemote debugging attachment protocol\nNext topic\nGeneral Python FAQ\nThis page\nReport a bug\nImprove this page\nShow source\nNavigation\nindex\nmodules\n|\nnext\n|\nprevious\n|\nPython\n\u00bb\n3.14.3 Documentation\n\u00bb\nPython Frequently Asked Questions\n|\nTheme\nAuto\nLight\nDark\n|\nPython Frequently Asked Questions\n\u00b6\nGeneral Python FAQ\nProgramming FAQ\nDesign and History FAQ\nLibrary and Extension FAQ\nExtending/Embedding FAQ\nPython on Windows FAQ\nGraphic User Interface FAQ\n\u201cWhy is Python Installed on my Computer?\u201d FAQ\nPrevious topic\nRemote debugging attachment protocol\nNext topic\nGeneral Python FAQ\nThis page\nReport a bug\nImprove this page\nShow 
", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 201}
+{"url": "https://docs.python.org/3/howto/timerfd.html", "title": "timer file descriptor HOWTO", "content": "timer file descriptor HOWTO\u00b6\n- Release:\n1.13\nThis HOWTO discusses Python\u2019s support for the linux timer file descriptor.\nExamples\u00b6\nThe following example shows how to use a timer file descriptor to execute a function twice a second:\n# Practical scripts should really use a non-blocking timer,\n# we use a blocking timer here for simplicity.\nimport os, time\n# Create the timer file descriptor\nfd = os.timerfd_create(time.CLOCK_REALTIME)\n# Start the timer in 1 second, with an interval of half a second\nos.timerfd_settime(fd, initial=1, interval=0.5)\ntry:\n# Process timer events four times.\nfor _ in range(4):\n# read() will block until the timer expires\n_ = os.read(fd, 8)\nprint(\"Timer expired\")\nfinally:\n# Remember to close the timer file descriptor!\nos.close(fd)\nTo avoid the precision loss caused by the float\ntype,\ntimer file descriptors allow specifying initial expiration and interval\nin integer nanoseconds with _ns\nvariants of the functions.\nThis example shows how epoll()\ncan be used with timer file\ndescriptors to wait until the file descriptor is ready for reading:\nimport os, time, select, socket, sys\n# Create an epoll object\nep = select.epoll()\n# In this example, use loopback address to send \"stop\" command to the server.\n#\n# $ telnet 127.0.0.1 1234\n# Trying 127.0.0.1...\n# Connected to 127.0.0.1.\n# Escape character is '^]'.\n# stop\n# Connection closed by foreign host.\n#\nsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\nsock.bind((\"127.0.0.1\", 1234))\nsock.setblocking(False)\nsock.listen(1)\nep.register(sock, select.EPOLLIN)\n# 
Create timer file descriptors in non-blocking mode.\nnum = 3\nfds = []\nfor _ in range(num):\nfd = os.timerfd_create(time.CLOCK_REALTIME, flags=os.TFD_NONBLOCK)\nfds.append(fd)\n# Register the timer file descriptor for read events\nep.register(fd, select.EPOLLIN)\n# Start the timer with os.timerfd_settime_ns() in nanoseconds.\n# Timer 1 fires every 0.25 seconds; timer 2 every 0.5 seconds; etc\nfor i, fd in enumerate(fds, start=1):\none_sec_in_nsec = 10**9\ni = i * one_sec_in_nsec\nos.timerfd_settime_ns(fd, initial=i//4, interval=i//4)\ntimeout = 3\ntry:\nconn = None\nis_active = True\nwhile is_active:\n# Wait for the timer to expire for 3 seconds.\n# epoll.poll() returns a list of (fd, event) pairs.\n# fd is a file descriptor.\n# sock and conn[=returned value of socket.accept()] are socket objects, not file descriptors.\n# So use sock.fileno() and conn.fileno() to get the file descriptors.\nevents = ep.poll(timeout)\n# If more than one timer file descriptors are ready for reading at once,\n# epoll.poll() returns a list of (fd, event) pairs.\n#\n# In this example settings,\n# 1st timer fires every 0.25 seconds in 0.25 seconds. (0.25, 0.5, 0.75, 1.0, ...)\n# 2nd timer every 0.5 seconds in 0.5 seconds. (0.5, 1.0, 1.5, 2.0, ...)\n# 3rd timer every 0.75 seconds in 0.75 seconds. 
(0.75, 1.5, 2.25, 3.0, ...)\n#\n# In 0.25 seconds, only 1st timer fires.\n# In 0.5 seconds, 1st timer and 2nd timer fires at once.\n# In 0.75 seconds, 1st timer and 3rd timer fires at once.\n# In 1.5 seconds, 1st timer, 2nd timer and 3rd timer fires at once.\n#\n# If a timer file descriptor is signaled more than once since\n# the last os.read() call, os.read() returns the number of signaled\n# as host order of class bytes.\nprint(f\"Signaled events={events}\")\nfor fd, event in events:\nif event & select.EPOLLIN:\nif fd == sock.fileno():\n# Check if there is a connection request.\nprint(f\"Accepting connection {fd}\")\nconn, addr = sock.accept()\nconn.setblocking(False)\nprint(f\"Accepted connection {conn} from {addr}\")\nep.register(conn, select.EPOLLIN)\nelif conn and fd == conn.fileno():\n# Check if there is data to read.\nprint(f\"Reading data {fd}\")\ndata = conn.recv(1024)\nif data:\n# You should catch UnicodeDecodeError exception for safety.\ncmd = data.decode()\nif cmd.startswith(\"stop\"):\nprint(f\"Stopping server\")\nis_active = False\nelse:\nprint(f\"Unknown command: {cmd}\")\nelse:\n# No more data, close connection\nprint(f\"Closing connection {fd}\")\nep.unregister(conn)\nconn.close()\nconn = None\nelif fd in fds:\nprint(f\"Reading timer {fd}\")\ncount = int.from_bytes(os.read(fd, 8), byteorder=sys.byteorder)\nprint(f\"Timer {fds.index(fd) + 1} expired {count} times\")\nelse:\nprint(f\"Unknown file descriptor {fd}\")\nfinally:\nfor fd in fds:\nep.unregister(fd)\nos.close(fd)\nep.close()\nThis example shows how select()\ncan be used with timer file\ndescriptors to wait until the file descriptor is ready for reading:\nimport os, time, select, socket, sys\n# In this example, use loopback address to send \"stop\" command to the server.\n#\n# $ telnet 127.0.0.1 1234\n# Trying 127.0.0.1...\n# Connected to 127.0.0.1.\n# Escape character is '^]'.\n# stop\n# Connection closed by foreign host.\n#\nsock = socket.socket(socket.AF_INET, 
socket.SOCK_STREAM)\nsock.bind((\"127.0.0.1\", 1234))\nsock.setblocking(False)\nsock.listen(1)\n# Create timer file descriptors in non-blocking mode.\nnum = 3\nfds = [os.timerfd_create(time.CLOCK_REALTIME, flags=os.TFD_NONBLOCK)\nfor _ in range(num)]\nselect_fds = fds + [sock]\n# Start the timers with os.timerfd_settime() in seconds.\n# Timer 1 fires every 0.25 seconds; timer 2 every 0.5 seconds; etc\nfor i, fd in enumerate(fds, start=1):\nos.timerfd_settime(fd, initial=i/4, interval=i/4)\ntimeout = 3\ntry:\nconn = None\nis_active = True\nwhile is_active:\n# Wait for the timer to expire for 3 seconds.\n# select.select() returns a list of file descriptors or objects.\nrfd, wfd, xfd = select.select(select_fds, select_fds, select_fds, timeout)\nfor fd in rfd:\nif fd == sock:\n# Check if there is a connection request.\nprint(f\"Accepting connection {fd}\")\nconn, addr = sock.accept()\nconn.setblocking(False)\nprint(f\"Accepted connection {conn} from {addr}\")\nselect_fds.append(conn)\nelif conn and fd == conn:\n# Check if there is data to read.\nprint(f\"Reading data {fd}\")\ndata = conn.recv(1024)\nif data:\n# You should catch UnicodeDecodeError exception for safety.\ncmd = data.decode()\nif cmd.startswith(\"stop\"):\nprint(f\"Stopping server\")\nis_active = False\nelse:\nprint(f\"Unknown command: {cmd}\")\nelse:\n# No more data, close connection\nprint(f\"Closing connection {fd}\")\nselect_fds.remove(conn)\nconn.close()\nconn = None\nelif fd in fds:\nprint(f\"Reading timer {fd}\")\ncount = int.from_bytes(os.read(fd, 8), byteorder=sys.byteorder)\nprint(f\"Timer {fds.index(fd) + 1} expired {count} times\")\nelse:\nprint(f\"Unknown file descriptor {fd}\")\nfinally:\nfor fd in fds:\nos.close(fd)\nsock.close()\nsock = None", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1591} +{"url": "https://docs.python.org/3/c-api/unicode.html", "title": "Unicode Objects and Codecs", "content": "Unicode Objects and Codecs\u00b6\nUnicode 
Objects\u00b6\nSince the implementation of PEP 393 in Python 3.3, Unicode objects internally use a variety of representations, in order to allow handling the complete range of Unicode characters while staying memory efficient. There are special cases for strings where all code points are below 128, 256, or 65536; otherwise, code points must be below 1114112 (which is the full Unicode range).\nUTF-8 representation is created on demand and cached in the Unicode object.\nNote\nThe Py_UNICODE\nrepresentation has been removed since Python 3.12\nwith deprecated APIs.\nSee PEP 623 for more information.\nUnicode Type\u00b6\nThese are the basic Unicode object types used for the Unicode implementation in Python:\n-\nPyTypeObject PyUnicode_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python Unicode type. It is exposed to Python code asstr\n.\n-\nPyTypeObject PyUnicodeIter_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python Unicode iterator type. It is used to iterate over Unicode string objects.\n-\ntype Py_UCS4\u00b6\n-\ntype Py_UCS2\u00b6\n-\ntype Py_UCS1\u00b6\n- Part of the Stable ABI.\nThese types are typedefs for unsigned integer types wide enough to contain characters of 32 bits, 16 bits and 8 bits, respectively. When dealing with single Unicode characters, use\nPy_UCS4\n.Added in version 3.3.\n-\ntype PyASCIIObject\u00b6\n-\ntype PyCompactUnicodeObject\u00b6\n-\ntype PyUnicodeObject\u00b6\nThese subtypes of\nPyObject\nrepresent a Python Unicode object. In almost all cases, they shouldn\u2019t be used directly, since all API functions that deal with Unicode objects take and returnPyObject\npointers.Added in version 3.3.\nThe structure of a particular object can be determined using the following macros. 
The macros cannot fail; their behavior is undefined if their argument is not a Python Unicode object.\n-\nPyUnicode_IS_COMPACT(o)\u00b6\nTrue if o uses the\nPyCompactUnicodeObject\nstructure.Added in version 3.3.\n-\nPyUnicode_IS_COMPACT_ASCII(o)\u00b6\nTrue if o uses the\nPyASCIIObject\nstructure.Added in version 3.3.\nThe following APIs are C macros and static inlined functions for fast checks and access to internal read-only data of Unicode objects:\n-\nint PyUnicode_Check(PyObject *obj)\u00b6\nReturn true if the object obj is a Unicode object or an instance of a Unicode subtype. This function always succeeds.\n-\nint PyUnicode_CheckExact(PyObject *obj)\u00b6\nReturn true if the object obj is a Unicode object, but not an instance of a subtype. This function always succeeds.\n-\nPy_ssize_t PyUnicode_GET_LENGTH(PyObject *unicode)\u00b6\nReturn the length of the Unicode string, in code points. unicode has to be a Unicode object in the \u201ccanonical\u201d representation (not checked).\nAdded in version 3.3.\n-\nPy_UCS1 *PyUnicode_1BYTE_DATA(PyObject *unicode)\u00b6\n-\nPy_UCS2 *PyUnicode_2BYTE_DATA(PyObject *unicode)\u00b6\n-\nPy_UCS4 *PyUnicode_4BYTE_DATA(PyObject *unicode)\u00b6\nReturn a pointer to the canonical representation cast to UCS1, UCS2 or UCS4 integer types for direct character access. No checks are performed if the canonical representation has the correct character size; use\nPyUnicode_KIND()\nto select the right function.Added in version 3.3.\n-\nPyUnicode_1BYTE_KIND\u00b6\n-\nPyUnicode_2BYTE_KIND\u00b6\n-\nPyUnicode_4BYTE_KIND\u00b6\nReturn values of the\nPyUnicode_KIND()\nmacro.Added in version 3.3.\nChanged in version 3.12:\nPyUnicode_WCHAR_KIND\nhas been removed.\n-\nint PyUnicode_KIND(PyObject *unicode)\u00b6\nReturn one of the PyUnicode kind constants (see above) that indicate how many bytes per character this Unicode object uses to store its data. 
unicode has to be a Unicode object in the \u201ccanonical\u201d representation (not checked).\nAdded in version 3.3.\n-\nvoid *PyUnicode_DATA(PyObject *unicode)\u00b6\nReturn a void pointer to the raw Unicode buffer. unicode has to be a Unicode object in the \u201ccanonical\u201d representation (not checked).\nAdded in version 3.3.\n-\nvoid PyUnicode_WRITE(int kind, void *data, Py_ssize_t index, Py_UCS4 value)\u00b6\nWrite the code point value to the given zero-based index in a string.\nThe kind value and data pointer must have been obtained from a string using\nPyUnicode_KIND()\nandPyUnicode_DATA()\nrespectively. You must hold a reference to that string while callingPyUnicode_WRITE()\n. All requirements ofPyUnicode_WriteChar()\nalso apply.The function performs no checks for any of its requirements, and is intended for usage in loops.\nAdded in version 3.3.\n-\nPy_UCS4 PyUnicode_READ(int kind, void *data, Py_ssize_t index)\u00b6\nRead a code point from a canonical representation data (as obtained with\nPyUnicode_DATA()\n). No checks or ready calls are performed.Added in version 3.3.\n-\nPy_UCS4 PyUnicode_READ_CHAR(PyObject *unicode, Py_ssize_t index)\u00b6\nRead a character from a Unicode object unicode, which must be in the \u201ccanonical\u201d representation. This is less efficient than\nPyUnicode_READ()\nif you do multiple consecutive reads.Added in version 3.3.\n-\nPy_UCS4 PyUnicode_MAX_CHAR_VALUE(PyObject *unicode)\u00b6\nReturn the maximum code point that is suitable for creating another string based on unicode, which must be in the \u201ccanonical\u201d representation. This is always an approximation but more efficient than iterating over the string.\nAdded in version 3.3.\n-\nint PyUnicode_IsIdentifier(PyObject *unicode)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nif the string is a valid identifier according to the language definition, section Names (identifiers and keywords). 
Return0\notherwise.Changed in version 3.9: The function does not call\nPy_FatalError()\nanymore if the string is not ready.\n-\nunsigned int PyUnicode_IS_ASCII(PyObject *unicode)\u00b6\nReturn true if the string only contains ASCII characters. Equivalent to\nstr.isascii()\n.Added in version 3.2.\nUnicode Character Properties\u00b6\nUnicode provides many different character properties. The most often needed ones are available through these macros which are mapped to C functions depending on the Python configuration.\n-\nint Py_UNICODE_ISSPACE(Py_UCS4 ch)\u00b6\nReturn\n1\nor0\ndepending on whether ch is a whitespace character.\n-\nint Py_UNICODE_ISUPPER(Py_UCS4 ch)\u00b6\nReturn\n1\nor0\ndepending on whether ch is an uppercase character.\n-\nint Py_UNICODE_ISLINEBREAK(Py_UCS4 ch)\u00b6\nReturn\n1\nor0\ndepending on whether ch is a linebreak character.\n-\nint Py_UNICODE_ISALPHA(Py_UCS4 ch)\u00b6\nReturn\n1\nor0\ndepending on whether ch is an alphabetic character.\n-\nint Py_UNICODE_ISALNUM(Py_UCS4 ch)\u00b6\nReturn\n1\nor0\ndepending on whether ch is an alphanumeric character.\n-\nint Py_UNICODE_ISPRINTABLE(Py_UCS4 ch)\u00b6\nReturn\n1\nor0\ndepending on whether ch is a printable character, in the sense ofstr.isprintable()\n.\nThese APIs can be used for fast direct character conversions:\n-\nint Py_UNICODE_TODECIMAL(Py_UCS4 ch)\u00b6\nReturn the character ch converted to a decimal positive integer. Return\n-1\nif this is not possible. This function does not raise exceptions.\n-\nint Py_UNICODE_TODIGIT(Py_UCS4 ch)\u00b6\nReturn the character ch converted to a single digit integer. Return\n-1\nif this is not possible. This function does not raise exceptions.\n-\ndouble Py_UNICODE_TONUMERIC(Py_UCS4 ch)\u00b6\nReturn the character ch converted to a double. Return\n-1.0\nif this is not possible. 
This function does not raise exceptions.\nThese APIs can be used to work with surrogates:\n-\nint Py_UNICODE_IS_HIGH_SURROGATE(Py_UCS4 ch)\u00b6\nCheck if ch is a high surrogate (\n0xD800 <= ch <= 0xDBFF\n).\n-\nint Py_UNICODE_IS_LOW_SURROGATE(Py_UCS4 ch)\u00b6\nCheck if ch is a low surrogate (\n0xDC00 <= ch <= 0xDFFF\n).\n-\nPy_UCS4 Py_UNICODE_HIGH_SURROGATE(Py_UCS4 ch)\u00b6\nReturn the high UTF-16 surrogate (\n0xD800\nto0xDBFF\n) for a Unicode code point in the range[0x10000; 0x10FFFF]\n.\n-\nPy_UCS4 Py_UNICODE_LOW_SURROGATE(Py_UCS4 ch)\u00b6\nReturn the low UTF-16 surrogate (\n0xDC00\nto0xDFFF\n) for a Unicode code point in the range[0x10000; 0x10FFFF]\n.\n-\nPy_UCS4 Py_UNICODE_JOIN_SURROGATES(Py_UCS4 high, Py_UCS4 low)\u00b6\nJoin two surrogate code points and return a single\nPy_UCS4\nvalue. high and low are respectively the leading and trailing surrogates in a surrogate pair. high must be in the range[0xD800; 0xDBFF]\nand low must be in the range[0xDC00; 0xDFFF]\n.\nCreating and accessing Unicode strings\u00b6\nTo create Unicode objects and access their basic sequence properties, use these APIs:\n-\nPyObject *PyUnicode_New(Py_ssize_t size, Py_UCS4 maxchar)\u00b6\n- Return value: New reference.\nCreate a new Unicode object. maxchar should be the true maximum code point to be placed in the string. As an approximation, it can be rounded up to the nearest value in the sequence 127, 255, 65535, 1114111.\nOn error, set an exception and return\nNULL\n.After creation, the string can be filled by\nPyUnicode_WriteChar()\n,PyUnicode_CopyCharacters()\n,PyUnicode_Fill()\n,PyUnicode_WRITE()\nor similar. Since strings are supposed to be immutable, take care to not \u201cuse\u201d the result while it is being modified. 
In particular, before it\u2019s filled with its final contents, a string:must not be hashed,\nmust not be\nconverted to UTF-8\n, or another non-\u201ccanonical\u201d representation,must not have its reference count changed,\nmust not be shared with code that might do one of the above.\nThis list is not exhaustive. Avoiding these uses is your responsibility; Python does not always check these requirements.\nTo avoid accidentally exposing a partially-written string object, prefer using the\nPyUnicodeWriter\nAPI, or one of thePyUnicode_From*\nfunctions below.Added in version 3.3.\n-\nPyObject *PyUnicode_FromKindAndData(int kind, const void *buffer, Py_ssize_t size)\u00b6\n- Return value: New reference.\nCreate a new Unicode object with the given kind (possible values are\nPyUnicode_1BYTE_KIND\netc., as returned byPyUnicode_KIND()\n). The buffer must point to an array of size units of 1, 2 or 4 bytes per character, as given by the kind.If necessary, the input buffer is copied and transformed into the canonical representation. For example, if the buffer is a UCS4 string (\nPyUnicode_4BYTE_KIND\n) and it consists only of codepoints in the UCS1 range, it will be transformed into UCS1 (PyUnicode_1BYTE_KIND\n).Added in version 3.3.\n-\nPyObject *PyUnicode_FromStringAndSize(const char *str, Py_ssize_t size)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a Unicode object from the char buffer str. The bytes will be interpreted as being UTF-8 encoded. The buffer is copied into the new object. The return value might be a shared object, i.e. modification of the data is not allowed.\nThis function raises\nSystemError\nwhen:size < 0,\nstr is\nNULL\nand size > 0\nChanged in version 3.12: str ==\nNULL\nwith size > 0 is not allowed anymore.\n-\nPyObject *PyUnicode_FromString(const char *str)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nCreate a Unicode object from a UTF-8 encoded null-terminated char buffer str.\n-\nPyObject *PyUnicode_FromFormat(const char *format, ...)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nTake a C\nprintf()\n-style format string and a variable number of arguments, calculate the size of the resulting Python Unicode string and return a string with the values formatted into it. The variable arguments must be C types and must correspond exactly to the format characters in the format ASCII-encoded string.A conversion specifier contains two or more characters and has the following components, which must occur in this order:\nThe\n'%'\ncharacter, which marks the start of the specifier.Conversion flags (optional), which affect the result of some conversion types.\nMinimum field width (optional). If specified as an\n'*'\n(asterisk), the actual width is given in the next argument, which must be of type int, and the object to convert comes after the minimum field width and optional precision.Precision (optional), given as a\n'.'\n(dot) followed by the precision. 
If specified as'*'\n(an asterisk), the actual precision is given in the next argument, which must be of type int, and the value to convert comes after the precision.Length modifier (optional).\nConversion type.\nThe conversion flag characters are:\nFlag\nMeaning\n0\nThe conversion will be zero padded for numeric values.\n-\nThe converted value is left adjusted (overrides the\n0\nflag if both are given).The length modifiers for following integer conversions (\nd\n,i\n,o\n,u\n,x\n, orX\n) specify the type of the argument (int by default):Modifier\nTypes\nl\nlong or unsigned long\nll\nlong long or unsigned long long\nj\nintmax_t\noruintmax_t\nz\nsize_t\norssize_t\nt\nptrdiff_t\nThe length modifier\nl\nfor following conversionss\norV\nspecify that the type of the argument is const wchar_t*.The conversion specifiers are:\nConversion Specifier\nType\nComment\n%\nn/a\nThe literal\n%\ncharacter.d\n,i\nSpecified by the length modifier\nThe decimal representation of a signed C integer.\nu\nSpecified by the length modifier\nThe decimal representation of an unsigned C integer.\no\nSpecified by the length modifier\nThe octal representation of an unsigned C integer.\nx\nSpecified by the length modifier\nThe hexadecimal representation of an unsigned C integer (lowercase).\nX\nSpecified by the length modifier\nThe hexadecimal representation of an unsigned C integer (uppercase).\nc\nint\nA single character.\ns\nconst char* or const wchar_t*\nA null-terminated C character array.\np\nconst void*\nThe hex representation of a C pointer. 
Mostly equivalent to\nprintf(\"%p\")\nexcept that it is guaranteed to start with the literal0x\nregardless of what the platform\u2019sprintf\nyields.A\nThe result of calling\nascii()\n.U\nA Unicode object.\nV\nPyObject*, const char* or const wchar_t*\nA Unicode object (which may be\nNULL\n) and a null-terminated C character array as a second parameter (which will be used, if the first parameter isNULL\n).S\nThe result of calling\nPyObject_Str()\n.R\nThe result of calling\nPyObject_Repr()\n.T\nGet the fully qualified name of an object type; call\nPyType_GetFullyQualifiedName()\n.#T\nSimilar to\nT\nformat, but use a colon (:\n) as separator between the module name and the qualified name.N\nGet the fully qualified name of a type; call\nPyType_GetFullyQualifiedName()\n.#N\nSimilar to\nN\nformat, but use a colon (:\n) as separator between the module name and the qualified name.Note\nThe width formatter unit is number of characters rather than bytes. The precision formatter unit is number of bytes or\nwchar_t\nitems (if the length modifierl\nis used) for\"%s\"\nand\"%V\"\n(if thePyObject*\nargument isNULL\n), and a number of characters for\"%A\"\n,\"%U\"\n,\"%S\"\n,\"%R\"\nand\"%V\"\n(if thePyObject*\nargument is notNULL\n).Note\nUnlike to C\nprintf()\nthe0\nflag has effect even when a precision is given for integer conversions (d\n,i\n,u\n,o\n,x\n, orX\n).Changed in version 3.2: Support for\n\"%lld\"\nand\"%llu\"\nadded.Changed in version 3.3: Support for\n\"%li\"\n,\"%lli\"\nand\"%zi\"\nadded.Changed in version 3.4: Support width and precision formatter for\n\"%s\"\n,\"%A\"\n,\"%U\"\n,\"%V\"\n,\"%S\"\n,\"%R\"\nadded.Changed in version 3.12: Support for conversion specifiers\no\nandX\n. Support for length modifiersj\nandt\n. Length modifiers are now applied to all integer conversions. Length modifierl\nis now applied to conversion specifierss\nandV\n. Support for variable width and precision*\n. 
Support for flag-\n.An unrecognized format character now sets a\nSystemError\n. In previous versions it caused all the rest of the format string to be copied as-is to the result string, and any extra arguments discarded.Changed in version 3.13: Support for\n%T\n,%#T\n,%N\nand%#N\nformats added.\n-\nPyObject *PyUnicode_FromFormatV(const char *format, va_list vargs)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nIdentical to\nPyUnicode_FromFormat()\nexcept that it takes exactly two arguments.\n-\nPyObject *PyUnicode_FromObject(PyObject *obj)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCopy an instance of a Unicode subtype to a new true Unicode object if necessary. If obj is already a true Unicode object (not a subtype), return a new strong reference to the object.\nObjects other than Unicode or its subtypes will cause a\nTypeError\n.\n-\nPyObject *PyUnicode_FromOrdinal(int ordinal)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a Unicode Object from the given Unicode code point ordinal.\nThe ordinal must be in\nrange(0x110000)\n. AValueError\nis raised in the case it is not.\n-\nPyObject *PyUnicode_FromEncodedObject(PyObject *obj, const char *encoding, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nDecode an encoded object obj to a Unicode object.\nbytes\n,bytearray\nand other bytes-like objects are decoded according to the given encoding and using the error handling defined by errors. Both can beNULL\nto have the interface use the default values (see Built-in Codecs for details).All other objects, including Unicode objects, cause a\nTypeError\nto be set.The API returns\nNULL\nif there was an error. The caller is responsible for decref\u2019ing the returned objects.\n-\nvoid PyUnicode_Append(PyObject **p_left, PyObject *right)\u00b6\n- Part of the Stable ABI.\nAppend the string right to the end of p_left. 
p_left must point to a strong reference to a Unicode object;\nPyUnicode_Append()\nreleases (\u201csteals\u201d) this reference.On error, set *p_left to\nNULL\nand set an exception.On success, set *p_left to a new strong reference to the result.\n-\nvoid PyUnicode_AppendAndDel(PyObject **p_left, PyObject *right)\u00b6\n- Part of the Stable ABI.\nThe function is similar to\nPyUnicode_Append()\n, with the only difference being that it decrements the reference count of right by one.\n-\nPyObject *PyUnicode_BuildEncodingMap(PyObject *string)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a mapping suitable for decoding a custom single-byte encoding. Given a Unicode string string of up to 256 characters representing an encoding table, returns either a compact internal mapping object or a dictionary mapping character ordinals to byte values. Raises a\nTypeError\nand returns\nNULL\non invalid input.Added in version 3.2.\n-\nconst char *PyUnicode_GetDefaultEncoding(void)\u00b6\n- Part of the Stable ABI.\nReturn the name of the default string encoding,\n\"utf-8\"\n. Seesys.getdefaultencoding()\n.The returned string does not need to be freed, and is valid until interpreter shutdown.\n-\nPy_ssize_t PyUnicode_GetLength(PyObject *unicode)\u00b6\n- Part of the Stable ABI since version 3.7.\nReturn the length of the Unicode object, in code points.\nOn error, set an exception and return\n-1\n.Added in version 3.3.\n-\nPy_ssize_t PyUnicode_CopyCharacters(PyObject *to, Py_ssize_t to_start, PyObject *from, Py_ssize_t from_start, Py_ssize_t how_many)\u00b6\nCopy characters from one Unicode object into another. This function performs character conversion when necessary and falls back to\nmemcpy()\nif possible. Returns-1\nand sets an exception on error, otherwise returns the number of copied characters.The string must not have been \u201cused\u201d yet. 
See\nPyUnicode_New()\nfor details.Added in version 3.3.\n-\nint PyUnicode_Resize(PyObject **unicode, Py_ssize_t length)\u00b6\n- Part of the Stable ABI.\nResize a Unicode object *unicode to the new length in code points.\nTry to resize the string in place (which is usually faster than allocating a new string and copying characters), or create a new string.\n*unicode is modified to point to the new (resized) object and\n0\nis returned on success. Otherwise,-1\nis returned and an exception is set, and *unicode is left untouched.\nThe function doesn\u2019t check string content, the result may not be a string in canonical representation.\n-\nPy_ssize_t PyUnicode_Fill(PyObject *unicode, Py_ssize_t start, Py_ssize_t length, Py_UCS4 fill_char)\u00b6\nFill a string with a character: write fill_char into\nunicode[start:start+length]\n.Fail if fill_char is bigger than the string maximum character, or if the string has more than 1 reference.\nThe string must not have been \u201cused\u201d yet. See\nPyUnicode_New()\nfor details.Return the number of written characters, or return\n-1\nand raise an exception on error.Added in version 3.3.\n-\nint PyUnicode_WriteChar(PyObject *unicode, Py_ssize_t index, Py_UCS4 character)\u00b6\n- Part of the Stable ABI since version 3.7.\nWrite a character to the string unicode at the zero-based index. Return\n0\non success,-1\non error with an exception set.This function checks that unicode is a Unicode object, that the index is not out of bounds, and that the object\u2019s reference count is one. See\nPyUnicode_WRITE()\nfor a version that skips these checks, making them your responsibility.The string must not have been \u201cused\u201d yet. See\nPyUnicode_New()\nfor details.Added in version 3.3.\n-\nPy_UCS4 PyUnicode_ReadChar(PyObject *unicode, Py_ssize_t index)\u00b6\n- Part of the Stable ABI since version 3.7.\nRead a character from a string. 
This function checks that unicode is a Unicode object and the index is not out of bounds, in contrast to\nPyUnicode_READ_CHAR()\n, which performs no error checking.Return character on success,\n-1\non error with an exception set.Added in version 3.3.\n-\nPyObject *PyUnicode_Substring(PyObject *unicode, Py_ssize_t start, Py_ssize_t end)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nReturn a substring of unicode, from character index start (included) to character index end (excluded). Negative indices are not supported. On error, set an exception and return\nNULL\n.Added in version 3.3.\n-\nPy_UCS4 *PyUnicode_AsUCS4(PyObject *unicode, Py_UCS4 *buffer, Py_ssize_t buflen, int copy_null)\u00b6\n- Part of the Stable ABI since version 3.7.\nCopy the string unicode into a UCS4 buffer, including a null character, if copy_null is set. Returns\nNULL\nand sets an exception on error (in particular, aSystemError\nif buflen is smaller than the length of unicode). buffer is returned on success.Added in version 3.3.\n-\nPy_UCS4 *PyUnicode_AsUCS4Copy(PyObject *unicode)\u00b6\n- Part of the Stable ABI since version 3.7.\nCopy the string unicode into a new UCS4 buffer that is allocated using\nPyMem_Malloc()\n. If this fails,NULL\nis returned with aMemoryError\nset. The returned buffer always has an extra null code point appended.Added in version 3.3.\nLocale Encoding\u00b6\nThe current locale encoding can be used to decode text from the operating system.\n-\nPyObject *PyUnicode_DecodeLocaleAndSize(const char *str, Py_ssize_t length, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nDecode a string from UTF-8 on Android and VxWorks, or from the current locale encoding on other platforms. The supported error handlers are\n\"strict\"\nand\"surrogateescape\"\n(PEP 383). The decoder uses\"strict\"\nerror handler if errors isNULL\n. 
str must end with a null character but cannot contain embedded null characters.Use\nPyUnicode_DecodeFSDefaultAndSize()\nto decode a string from the filesystem encoding and error handler.This function ignores the Python UTF-8 Mode.\nSee also\nThe\nPy_DecodeLocale()\nfunction.Added in version 3.3.\nChanged in version 3.7: The function now also uses the current locale encoding for the\nsurrogateescape\nerror handler, except on Android. Previously,Py_DecodeLocale()\nwas used for thesurrogateescape\n, and the current locale encoding was used forstrict\n.\n-\nPyObject *PyUnicode_DecodeLocale(const char *str, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nSimilar to\nPyUnicode_DecodeLocaleAndSize()\n, but compute the string length usingstrlen()\n.Added in version 3.3.\n-\nPyObject *PyUnicode_EncodeLocale(PyObject *unicode, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nEncode a Unicode object to UTF-8 on Android and VxWorks, or to the current locale encoding on other platforms. The supported error handlers are\n\"strict\"\nand\"surrogateescape\"\n(PEP 383). The encoder uses\"strict\"\nerror handler if errors isNULL\n. Return abytes\nobject. unicode cannot contain embedded null characters.Use\nPyUnicode_EncodeFSDefault()\nto encode a string to the filesystem encoding and error handler.This function ignores the Python UTF-8 Mode.\nSee also\nThe\nPy_EncodeLocale()\nfunction.Added in version 3.3.\nChanged in version 3.7: The function now also uses the current locale encoding for the\nsurrogateescape\nerror handler, except on Android. 
Previously, Py_EncodeLocale() was used for surrogateescape, and the current locale encoding was used for strict.

File System Encoding
Functions encoding to and decoding from the filesystem encoding and error handler (PEP 383 and PEP 529).
To encode file names to bytes during argument parsing, the "O&" converter should be used, passing PyUnicode_FSConverter() as the conversion function:

- int PyUnicode_FSConverter(PyObject *obj, void *result)
  Part of the Stable ABI.
  PyArg_Parse* converter: encode str objects – obtained directly or through the os.PathLike interface – to bytes using PyUnicode_EncodeFSDefault(); bytes objects are output as-is. result must be the address of a C variable of type PyObject* (or PyBytesObject*). On success, set the variable to a new strong reference to a bytes object, which must be released when it is no longer used, and return a non-zero value (Py_CLEANUP_SUPPORTED). Embedded null bytes are not allowed in the result. On failure, return 0 with an exception set. If obj is NULL, the function releases a strong reference stored in the variable referred to by result and returns 1. Added in version 3.1. Changed in version 3.6: Accepts a path-like object.

To decode file names to str during argument parsing, the "O&" converter should be used, passing PyUnicode_FSDecoder() as the conversion function:

- int PyUnicode_FSDecoder(PyObject *obj, void *result)
  Part of the Stable ABI.
  PyArg_Parse* converter: decode bytes objects – obtained either directly or indirectly through the os.PathLike interface – to str using PyUnicode_DecodeFSDefaultAndSize(); str objects are output as-is. result must be the address of a C variable of type PyObject* (or PyUnicodeObject*). On success, set the variable to a new strong reference to a Unicode object, which must be released when it is no longer used, and return a non-zero value (Py_CLEANUP_SUPPORTED).
Embedded null characters are not allowed in the result. On failure, return 0 with an exception set. If obj is NULL, release the strong reference to the object referred to by result and return 1. Added in version 3.2. Changed in version 3.6: Accepts a path-like object.

- PyObject *PyUnicode_DecodeFSDefaultAndSize(const char *str, Py_ssize_t size)
  Return value: New reference. Part of the Stable ABI.
  Decode a string from the filesystem encoding and error handler. If you need to decode a string from the current locale encoding, use PyUnicode_DecodeLocaleAndSize().
  See also the Py_DecodeLocale() function. Changed in version 3.6: The filesystem error handler is now used.

- PyObject *PyUnicode_DecodeFSDefault(const char *str)
  Return value: New reference. Part of the Stable ABI.
  Decode a null-terminated string from the filesystem encoding and error handler. If the string length is known, use PyUnicode_DecodeFSDefaultAndSize(). Changed in version 3.6: The filesystem error handler is now used.

- PyObject *PyUnicode_EncodeFSDefault(PyObject *unicode)
  Return value: New reference. Part of the Stable ABI.
  Encode a Unicode object to the filesystem encoding and error handler, and return bytes. Note that the resulting bytes object can contain null bytes. If you need to encode a string to the current locale encoding, use PyUnicode_EncodeLocale().
  See also the Py_EncodeLocale() function. Added in version 3.2. Changed in version 3.6: The filesystem error handler is now used.

wchar_t Support
wchar_t support for platforms which support it:

- PyObject *PyUnicode_FromWideChar(const wchar_t *wstr, Py_ssize_t size)
  Return value: New reference. Part of the Stable ABI.
  Create a Unicode object from the wchar_t buffer wstr of the given size. Passing -1 as the size indicates that the function must itself compute the length, using wcslen().
Return NULL on failure.

- Py_ssize_t PyUnicode_AsWideChar(PyObject *unicode, wchar_t *wstr, Py_ssize_t size)
  Part of the Stable ABI.
  Copy the Unicode object contents into the wchar_t buffer wstr. At most size wchar_t characters are copied (excluding a possibly trailing null termination character). Return the number of wchar_t characters copied, or -1 in case of an error. When wstr is NULL, instead return the size that would be required to store all of unicode, including a terminating null. Note that the resulting wchar_t* string may or may not be null-terminated. It is the responsibility of the caller to make sure that the wchar_t* string is null-terminated in case this is required by the application. Also note that the wchar_t* string might contain null characters, which would cause the string to be truncated when used with most C functions.

- wchar_t *PyUnicode_AsWideCharString(PyObject *unicode, Py_ssize_t *size)
  Part of the Stable ABI since version 3.7.
  Convert the Unicode object to a wide character string. The output string always ends with a null character. If size is not NULL, write the number of wide characters (excluding the trailing null termination character) into *size. Note that the resulting wchar_t string might contain null characters, which would cause the string to be truncated when used with most C functions. If size is NULL and the wchar_t* string contains null characters, a ValueError is raised. Returns a buffer allocated by PyMem_New (use PyMem_Free() to free it) on success. On error, returns NULL and *size is undefined. Raises a MemoryError if memory allocation fails. Added in version 3.2. Changed in version 3.7: Raises a ValueError if size is NULL and the wchar_t* string contains null characters.

Built-in Codecs
Python provides a set of built-in codecs which are written in C for speed.
All of these codecs are directly usable via the following functions.
Many of the following APIs take two arguments, encoding and errors, and they have the same semantics as the ones of the built-in str() string object constructor.
Setting encoding to NULL causes the default encoding to be used, which is UTF-8. The file system calls should use PyUnicode_FSConverter() for encoding file names. This uses the filesystem encoding and error handler internally.
Error handling is set by errors, which may also be set to NULL, meaning to use the default handling defined for the codec. Default error handling for all built-in codecs is "strict" (ValueError is raised).
The codecs all use a similar interface. Only deviations from the following generic ones are documented for simplicity.

Generic Codecs
The following macro is provided:

- Py_UNICODE_REPLACEMENT_CHARACTER
  The Unicode code point U+FFFD (replacement character). This Unicode character is used as the replacement character during decoding if the errors argument is set to "replace".

These are the generic codec APIs:

- PyObject *PyUnicode_Decode(const char *str, Py_ssize_t size, const char *encoding, const char *errors)
  Return value: New reference. Part of the Stable ABI.
  Create a Unicode object by decoding size bytes of the encoded string str. encoding and errors have the same meaning as the parameters of the same name in the str() built-in function. The codec to be used is looked up using the Python codec registry. Return NULL if an exception was raised by the codec.

- PyObject *PyUnicode_AsEncodedString(PyObject *unicode, const char *encoding, const char *errors)
  Return value: New reference. Part of the Stable ABI.
  Encode a Unicode object and return the result as a Python bytes object. encoding and errors have the same meaning as the parameters of the same name in the Unicode encode() method.
The codec to be used is looked up using the Python codec registry. Return NULL if an exception was raised by the codec.

UTF-8 Codecs
These are the UTF-8 codec APIs:

- PyObject *PyUnicode_DecodeUTF8(const char *str, Py_ssize_t size, const char *errors)
  Return value: New reference. Part of the Stable ABI.
  Create a Unicode object by decoding size bytes of the UTF-8 encoded string str. Return NULL if an exception was raised by the codec.

- PyObject *PyUnicode_DecodeUTF8Stateful(const char *str, Py_ssize_t size, const char *errors, Py_ssize_t *consumed)
  Return value: New reference. Part of the Stable ABI.
  If consumed is NULL, behave like PyUnicode_DecodeUTF8(). If consumed is not NULL, trailing incomplete UTF-8 byte sequences will not be treated as an error. Those bytes will not be decoded and the number of bytes that have been decoded will be stored in consumed.

- PyObject *PyUnicode_AsUTF8String(PyObject *unicode)
  Return value: New reference. Part of the Stable ABI.
  Encode a Unicode object using UTF-8 and return the result as a Python bytes object. Error handling is "strict". Return NULL if an exception was raised by the codec. The function fails if the string contains surrogate code points (U+D800 - U+DFFF).

- const char *PyUnicode_AsUTF8AndSize(PyObject *unicode, Py_ssize_t *size)
  Part of the Stable ABI since version 3.10.
  Return a pointer to the UTF-8 encoding of the Unicode object, and store the size of the encoded representation (in bytes) in size. The size argument can be NULL; in this case no size will be stored.
The returned buffer always has an extra null byte appended (not included in size), regardless of whether there are any other null code points. On error, set an exception, set size to -1 (if it's not NULL) and return NULL. The function fails if the string contains surrogate code points (U+D800 - U+DFFF). This caches the UTF-8 representation of the string in the Unicode object, and subsequent calls will return a pointer to the same buffer. The caller is not responsible for deallocating the buffer. The buffer is deallocated and pointers to it become invalid when the Unicode object is garbage collected.
  Added in version 3.3. Changed in version 3.7: The return type is now const char * rather than char *. Changed in version 3.10: This function is a part of the limited API.

- const char *PyUnicode_AsUTF8(PyObject *unicode)
  As PyUnicode_AsUTF8AndSize(), but does not store the size.
  Warning: This function does not have any special behavior for null characters embedded within unicode. As a result, strings containing null characters will remain in the returned string, which some C functions might interpret as the end of the string, leading to truncation. If truncation is an issue, it is recommended to use PyUnicode_AsUTF8AndSize() instead.
  Added in version 3.3. Changed in version 3.7: The return type is now const char * rather than char *.

UTF-32 Codecs
These are the UTF-32 codec APIs:

- PyObject *PyUnicode_DecodeUTF32(const char *str, Py_ssize_t size, const char *errors, int *byteorder)
  Return value: New reference. Part of the Stable ABI.
  Decode size bytes from a UTF-32 encoded buffer string and return the corresponding Unicode object. errors (if non-NULL) defines the error handling.
It defaults to "strict". If byteorder is non-NULL, the decoder starts decoding using the given byte order:
    *byteorder == -1: little endian
    *byteorder == 0:  native order
    *byteorder == 1:  big endian
  If *byteorder is zero, and the first four bytes of the input data are a byte order mark (BOM), the decoder switches to this byte order and the BOM is not copied into the resulting Unicode string. If *byteorder is -1 or 1, any byte order mark is copied to the output. After completion, *byteorder is set to the current byte order at the end of input data. If byteorder is NULL, the codec starts in native order mode. Return NULL if an exception was raised by the codec.

- PyObject *PyUnicode_DecodeUTF32Stateful(const char *str, Py_ssize_t size, const char *errors, int *byteorder, Py_ssize_t *consumed)
  Return value: New reference. Part of the Stable ABI.
  If consumed is NULL, behave like PyUnicode_DecodeUTF32(). If consumed is not NULL, PyUnicode_DecodeUTF32Stateful() will not treat trailing incomplete UTF-32 byte sequences (such as a number of bytes not divisible by four) as an error. Those bytes will not be decoded and the number of bytes that have been decoded will be stored in consumed.

- PyObject *PyUnicode_AsUTF32String(PyObject *unicode)
  Return value: New reference. Part of the Stable ABI.
  Return a Python byte string using the UTF-32 encoding in native byte order. The string always starts with a BOM mark. Error handling is "strict". Return NULL if an exception was raised by the codec.

UTF-16 Codecs
These are the UTF-16 codec APIs:

- PyObject *PyUnicode_DecodeUTF16(const char *str, Py_ssize_t size, const char *errors, int *byteorder)
  Return value: New reference. Part of the Stable ABI.
  Decode size bytes from a UTF-16 encoded buffer string and return the corresponding Unicode object. errors (if non-NULL) defines the error handling.
It defaults to "strict". If byteorder is non-NULL, the decoder starts decoding using the given byte order:
    *byteorder == -1: little endian
    *byteorder == 0:  native order
    *byteorder == 1:  big endian
  If *byteorder is zero, and the first two bytes of the input data are a byte order mark (BOM), the decoder switches to this byte order and the BOM is not copied into the resulting Unicode string. If *byteorder is -1 or 1, any byte order mark is copied to the output (where it will result in either a \ufeff or a \ufffe character). After completion, *byteorder is set to the current byte order at the end of input data. If byteorder is NULL, the codec starts in native order mode. Return NULL if an exception was raised by the codec.

- PyObject *PyUnicode_DecodeUTF16Stateful(const char *str, Py_ssize_t size, const char *errors, int *byteorder, Py_ssize_t *consumed)
  Return value: New reference. Part of the Stable ABI.
  If consumed is NULL, behave like PyUnicode_DecodeUTF16(). If consumed is not NULL, PyUnicode_DecodeUTF16Stateful() will not treat trailing incomplete UTF-16 byte sequences (such as an odd number of bytes or a split surrogate pair) as an error. Those bytes will not be decoded and the number of bytes that have been decoded will be stored in consumed.

- PyObject *PyUnicode_AsUTF16String(PyObject *unicode)
  Return value: New reference. Part of the Stable ABI.
  Return a Python byte string using the UTF-16 encoding in native byte order. The string always starts with a BOM mark. Error handling is "strict". Return NULL if an exception was raised by the codec.

UTF-7 Codecs
These are the UTF-7 codec APIs:

- PyObject *PyUnicode_DecodeUTF7(const char *str, Py_ssize_t size, const char *errors)
  Return value: New reference. Part of the Stable ABI.
  Create a Unicode object by decoding size bytes of the UTF-7 encoded string str.
Return NULL if an exception was raised by the codec.

- PyObject *PyUnicode_DecodeUTF7Stateful(const char *str, Py_ssize_t size, const char *errors, Py_ssize_t *consumed)
  Return value: New reference. Part of the Stable ABI.
  If consumed is NULL, behave like PyUnicode_DecodeUTF7(). If consumed is not NULL, trailing incomplete UTF-7 base-64 sections will not be treated as an error. Those bytes will not be decoded and the number of bytes that have been decoded will be stored in consumed.

Unicode-Escape Codecs
These are the "Unicode Escape" codec APIs:

- PyObject *PyUnicode_DecodeUnicodeEscape(const char *str, Py_ssize_t size, const char *errors)
  Return value: New reference. Part of the Stable ABI.
  Create a Unicode object by decoding size bytes of the Unicode-Escape encoded string str. Return NULL if an exception was raised by the codec.

- PyObject *PyUnicode_AsUnicodeEscapeString(PyObject *unicode)
  Return value: New reference. Part of the Stable ABI.
  Encode a Unicode object using Unicode-Escape and return the result as a bytes object. Error handling is "strict". Return NULL if an exception was raised by the codec.

Raw-Unicode-Escape Codecs
These are the "Raw Unicode Escape" codec APIs:

- PyObject *PyUnicode_DecodeRawUnicodeEscape(const char *str, Py_ssize_t size, const char *errors)
  Return value: New reference. Part of the Stable ABI.
  Create a Unicode object by decoding size bytes of the Raw-Unicode-Escape encoded string str. Return NULL if an exception was raised by the codec.

- PyObject *PyUnicode_AsRawUnicodeEscapeString(PyObject *unicode)
  Return value: New reference. Part of the Stable ABI.
  Encode a Unicode object using Raw-Unicode-Escape and return the result as a bytes object. Error handling is "strict".
Return NULL if an exception was raised by the codec.

Latin-1 Codecs
These are the Latin-1 codec APIs: Latin-1 corresponds to the first 256 Unicode ordinals and only these are accepted by the codecs during encoding.

- PyObject *PyUnicode_DecodeLatin1(const char *str, Py_ssize_t size, const char *errors)
  Return value: New reference. Part of the Stable ABI.
  Create a Unicode object by decoding size bytes of the Latin-1 encoded string str. Return NULL if an exception was raised by the codec.

- PyObject *PyUnicode_AsLatin1String(PyObject *unicode)
  Return value: New reference. Part of the Stable ABI.
  Encode a Unicode object using Latin-1 and return the result as a Python bytes object. Error handling is "strict". Return NULL if an exception was raised by the codec.

ASCII Codecs
These are the ASCII codec APIs. Only 7-bit ASCII data is accepted. All other codes generate errors.

- PyObject *PyUnicode_DecodeASCII(const char *str, Py_ssize_t size, const char *errors)
  Return value: New reference. Part of the Stable ABI.
  Create a Unicode object by decoding size bytes of the ASCII encoded string str. Return NULL if an exception was raised by the codec.

- PyObject *PyUnicode_AsASCIIString(PyObject *unicode)
  Return value: New reference. Part of the Stable ABI.
  Encode a Unicode object using ASCII and return the result as a Python bytes object. Error handling is "strict". Return NULL if an exception was raised by the codec.

Character Map Codecs
This codec is special in that it can be used to implement many different codecs (and this is in fact what was done to obtain most of the standard codecs included in the encodings package). The codec uses mappings to encode and decode characters.
The mapping objects provided must support the __getitem__() mapping interface; dictionaries and sequences work well.
These are the mapping codec APIs:

- PyObject *PyUnicode_DecodeCharmap(const char *str, Py_ssize_t length, PyObject *mapping, const char *errors)
  Return value: New reference. Part of the Stable ABI.
  Create a Unicode object by decoding length bytes of the encoded string str using the given mapping object. Return NULL if an exception was raised by the codec. If mapping is NULL, Latin-1 decoding will be applied. Otherwise, mapping must map byte ordinals (integers in the range from 0 to 255) to Unicode strings, integers (which are then interpreted as Unicode ordinals) or None. Unmapped data bytes – ones which cause a LookupError, as well as ones which get mapped to None, 0xFFFE or '\ufffe' – are treated as undefined mappings and cause an error.

- PyObject *PyUnicode_AsCharmapString(PyObject *unicode, PyObject *mapping)
  Return value: New reference. Part of the Stable ABI.
  Encode a Unicode object using the given mapping object and return the result as a bytes object. Error handling is "strict". Return NULL if an exception was raised by the codec. The mapping object must map Unicode ordinal integers to bytes objects, integers in the range from 0 to 255, or None. Unmapped character ordinals (ones which cause a LookupError), as well as ones mapped to None, are treated as "undefined mapping" and cause an error.

The following codec API is special in that it maps Unicode to Unicode.

- PyObject *PyUnicode_Translate(PyObject *unicode, PyObject *table, const char *errors)
  Return value: New reference. Part of the Stable ABI.
  Translate a string by applying a character mapping table to it and return the resulting Unicode object.
Return NULL if an exception was raised by the codec. The mapping table must map Unicode ordinal integers to Unicode ordinal integers or None (causing deletion of the character). Mapping tables need only provide the __getitem__() interface; dictionaries and sequences work well. Unmapped character ordinals (ones which cause a LookupError) are left untouched and are copied as-is. errors has the usual meaning for codecs. It may be NULL, which indicates to use the default error handling.

MBCS codecs for Windows
These are the MBCS codec APIs. They are currently only available on Windows and use the Win32 MBCS converters to implement the conversions. Note that MBCS (or DBCS) is a class of encodings, not just one. The target encoding is defined by the user settings on the machine running the codec.

- PyObject *PyUnicode_DecodeMBCS(const char *str, Py_ssize_t size, const char *errors)
  Return value: New reference. Part of the Stable ABI on Windows since version 3.7.
  Create a Unicode object by decoding size bytes of the MBCS encoded string str. Return NULL if an exception was raised by the codec.

- PyObject *PyUnicode_DecodeMBCSStateful(const char *str, Py_ssize_t size, const char *errors, Py_ssize_t *consumed)
  Return value: New reference. Part of the Stable ABI on Windows since version 3.7.
  If consumed is NULL, behave like PyUnicode_DecodeMBCS(). If consumed is not NULL, PyUnicode_DecodeMBCSStateful() will not decode a trailing lead byte, and the number of bytes that have been decoded will be stored in consumed.

- PyObject *PyUnicode_DecodeCodePageStateful(int code_page, const char *str, Py_ssize_t size, const char *errors, Py_ssize_t *consumed)
  Return value: New reference. Part of the Stable ABI on Windows since version 3.7.
  Similar to PyUnicode_DecodeMBCSStateful(), except it uses the code page specified by code_page.

- PyObject *PyUnicode_AsMBCSString(PyObject *unicode)
  Return value: New reference.
Part of the Stable ABI on Windows since version 3.7.
  Encode a Unicode object using MBCS and return the result as a Python bytes object. Error handling is "strict". Return NULL if an exception was raised by the codec.

- PyObject *PyUnicode_EncodeCodePage(int code_page, PyObject *unicode, const char *errors)
  Return value: New reference. Part of the Stable ABI on Windows since version 3.7.
  Encode the Unicode object using the specified code page and return a Python bytes object. Return NULL if an exception was raised by the codec. Use the CP_ACP code page to get the MBCS encoder. Added in version 3.3.

Methods and Slot Functions
The following APIs are capable of handling Unicode objects and strings on input (we refer to them as strings in the descriptions) and return Unicode objects or integers as appropriate. They all return NULL or -1 if an exception occurs.

- PyObject *PyUnicode_Concat(PyObject *left, PyObject *right)
  Return value: New reference. Part of the Stable ABI.
  Concatenate two strings, giving a new Unicode string.

- PyObject *PyUnicode_Split(PyObject *unicode, PyObject *sep, Py_ssize_t maxsplit)
  Return value: New reference. Part of the Stable ABI.
  Split a string, giving a list of Unicode strings. If sep is NULL, splitting will be done at all whitespace substrings. Otherwise, splits occur at the given separator. At most maxsplit splits will be done. If maxsplit is negative, no limit is set. Separators are not included in the resulting list. On error, return NULL with an exception set. Equivalent to str.split().

- PyObject *PyUnicode_RSplit(PyObject *unicode, PyObject *sep, Py_ssize_t maxsplit)
  Return value: New reference.
Part of the Stable ABI.
  Similar to PyUnicode_Split(), but splitting will be done beginning at the end of the string. On error, return NULL with an exception set. Equivalent to str.rsplit().

- PyObject *PyUnicode_Splitlines(PyObject *unicode, int keepends)
  Return value: New reference. Part of the Stable ABI.
  Split a Unicode string at line breaks, returning a list of Unicode strings. CRLF is considered to be one line break. If keepends is 0, the line break characters are not included in the resulting strings.

- PyObject *PyUnicode_Partition(PyObject *unicode, PyObject *sep)
  Return value: New reference. Part of the Stable ABI.
  Split a Unicode string at the first occurrence of sep, and return a 3-tuple containing the part before the separator, the separator itself, and the part after the separator. If the separator is not found, return a 3-tuple containing the string itself, followed by two empty strings. sep must not be empty. On error, return NULL with an exception set. Equivalent to str.partition().

- PyObject *PyUnicode_RPartition(PyObject *unicode, PyObject *sep)
  Return value: New reference. Part of the Stable ABI.
  Similar to PyUnicode_Partition(), but split a Unicode string at the last occurrence of sep. If the separator is not found, return a 3-tuple containing two empty strings, followed by the string itself. sep must not be empty. On error, return NULL with an exception set. Equivalent to str.rpartition().

- PyObject *PyUnicode_Join(PyObject *separator, PyObject *seq)
  Return value: New reference.
Part of the Stable ABI.
  Join a sequence of strings using the given separator and return the resulting Unicode string.

- Py_ssize_t PyUnicode_Tailmatch(PyObject *unicode, PyObject *substr, Py_ssize_t start, Py_ssize_t end, int direction)
  Part of the Stable ABI.
  Return 1 if substr matches unicode[start:end] at the given tail end (direction == -1 means to do a prefix match, direction == 1 a suffix match), 0 otherwise. Return -1 if an error occurred.

- Py_ssize_t PyUnicode_Find(PyObject *unicode, PyObject *substr, Py_ssize_t start, Py_ssize_t end, int direction)
  Part of the Stable ABI.
  Return the first position of substr in unicode[start:end] using the given direction (direction == 1 means to do a forward search, direction == -1 a backward search). The return value is the index of the first match; a value of -1 indicates that no match was found, and -2 indicates that an error occurred and an exception has been set.

- Py_ssize_t PyUnicode_FindChar(PyObject *unicode, Py_UCS4 ch, Py_ssize_t start, Py_ssize_t end, int direction)
  Part of the Stable ABI since version 3.7.
  Return the first position of the character ch in unicode[start:end] using the given direction (direction == 1 means to do a forward search, direction == -1 a backward search). The return value is the index of the first match; a value of -1 indicates that no match was found, and -2 indicates that an error occurred and an exception has been set. Added in version 3.3. Changed in version 3.7: start and end are now adjusted to behave like unicode[start:end].

- Py_ssize_t PyUnicode_Count(PyObject *unicode, PyObject *substr, Py_ssize_t start, Py_ssize_t end)
  Part of the Stable ABI.
  Return the number of non-overlapping occurrences of substr in unicode[start:end]. Return -1 if an error occurred.

- PyObject *PyUnicode_Replace(PyObject *unicode, PyObject *substr, PyObject *replstr, Py_ssize_t maxcount)
  Return value: New reference.
Part of the Stable ABI.
  Replace at most maxcount occurrences of substr in unicode with replstr and return the resulting Unicode object. maxcount == -1 means replace all occurrences.

- int PyUnicode_Compare(PyObject *left, PyObject *right)
  Part of the Stable ABI.
  Compare two strings and return -1, 0, or 1 for less than, equal, and greater than, respectively. This function returns -1 upon failure, so one should call PyErr_Occurred() to check for errors.
  See also the PyUnicode_Equal() function.

- int PyUnicode_Equal(PyObject *a, PyObject *b)
  Part of the Stable ABI since version 3.14.
  Test if two strings are equal: return 1 if a is equal to b; return 0 if a is not equal to b; set a TypeError exception and return -1 if a or b is not a str object. The function always succeeds if a and b are str objects. The function works for str subclasses, but does not honor a custom __eq__() method.
  See also the PyUnicode_Compare() function. Added in version 3.14.

- int PyUnicode_EqualToUTF8AndSize(PyObject *unicode, const char *string, Py_ssize_t size)
  Part of the Stable ABI since version 3.13.
  Compare a Unicode object with a char buffer which is interpreted as being UTF-8 or ASCII encoded, and return true (1) if they are equal, or false (0) otherwise. If the Unicode object contains surrogate code points (U+D800 - U+DFFF) or the C string is not valid UTF-8, false (0) is returned. This function does not raise exceptions. Added in version 3.13.

- int PyUnicode_EqualToUTF8(PyObject *unicode, const char *string)
  Part of the Stable ABI since version 3.13.
  Similar to PyUnicode_EqualToUTF8AndSize(), but compute the string length using strlen().
If the Unicode object contains null characters, false (0) is returned. Added in version 3.13.

- int PyUnicode_CompareWithASCIIString(PyObject *unicode, const char *string)
  Part of the Stable ABI.
  Compare a Unicode object, unicode, with string and return -1, 0, or 1 for less than, equal, and greater than, respectively. It is best to pass only ASCII-encoded strings, but the function interprets the input string as ISO-8859-1 if it contains non-ASCII characters. This function does not raise exceptions.

- PyObject *PyUnicode_RichCompare(PyObject *left, PyObject *right, int op)
  Return value: New reference. Part of the Stable ABI.
  Rich compare two Unicode strings and return one of the following: NULL in case an exception was raised; Py_NotImplemented in case the type combination is unknown. Possible values for op are Py_GT, Py_GE, Py_EQ, Py_NE, Py_LT, and Py_LE.

- PyObject *PyUnicode_Format(PyObject *format, PyObject *args)
  Return value: New reference. Part of the Stable ABI.
  Return a new string object from format and args; this is analogous to format % args.

- int PyUnicode_Contains(PyObject *unicode, PyObject *substr)
  Part of the Stable ABI.
  Check whether substr is contained in unicode and return true or false accordingly. substr has to coerce to a one-element Unicode string. -1 is returned if there was an error.

- void PyUnicode_InternInPlace(PyObject **p_unicode)
  Part of the Stable ABI.
  Intern the argument *p_unicode in place. The argument must be the address of a pointer variable pointing to a Python Unicode string object.
If there is an existing interned string that is the same as *p_unicode, it sets *p_unicode to it (releasing the reference to the old string object and creating a new strong reference to the interned string object); otherwise it leaves *p_unicode alone and interns it.
  (Clarification: even though there is a lot of talk about references, think of this function as reference-neutral. You must own the object you pass in; after the call you no longer own the passed-in reference, but you newly own the result.)
  This function never raises an exception. On error, it leaves its argument unchanged without interning it. Instances of subclasses of str may not be interned; that is, PyUnicode_CheckExact(*p_unicode) must be true. If it is not, then – as with any other error – the argument is left unchanged. Note that interned strings are not "immortal". You must keep a reference to the result to benefit from interning.

- PyObject *PyUnicode_InternFromString(const char *str)
  Return value: New reference. Part of the Stable ABI.
  A combination of PyUnicode_FromString() and PyUnicode_InternInPlace(), meant for statically allocated strings. Return a new ("owned") reference to either a new Unicode string object that has been interned, or an earlier interned string object with the same value. Python may keep a reference to the result, or make it immortal, preventing it from being garbage-collected promptly. For interning an unbounded number of different strings, such as ones coming from user input, prefer calling PyUnicode_FromString() and PyUnicode_InternInPlace() directly.

- unsigned int PyUnicode_CHECK_INTERNED(PyObject *str)
  Return a non-zero value if str is interned, zero if not. The str argument must be a string; this is not checked. This function always succeeds.
  CPython implementation detail: A non-zero return value may carry additional information about how the string is interned.
The meaning of such non-zero values, as well as each specific string\u2019s intern-related details, may change between CPython versions.\nPyUnicodeWriter\u00b6\nThe PyUnicodeWriter\nAPI can be used to create a Python str\nobject.\nAdded in version 3.14.\n-\ntype PyUnicodeWriter\u00b6\nA Unicode writer instance.\nThe instance must be destroyed by\nPyUnicodeWriter_Finish()\non success, orPyUnicodeWriter_Discard()\non error.\n-\nPyUnicodeWriter *PyUnicodeWriter_Create(Py_ssize_t length)\u00b6\nCreate a Unicode writer instance.\nlength must be greater than or equal to\n0\n.If length is greater than\n0\n, preallocate an internal buffer of length characters.Set an exception and return\nNULL\non error.\n-\nPyObject *PyUnicodeWriter_Finish(PyUnicodeWriter *writer)\u00b6\nReturn the final Python\nstr\nobject and destroy the writer instance.Set an exception and return\nNULL\non error.The writer instance is invalid after this call.\n-\nvoid PyUnicodeWriter_Discard(PyUnicodeWriter *writer)\u00b6\nDiscard the internal Unicode buffer and destroy the writer instance.\nIf writer is\nNULL\n, no operation is performed.The writer instance is invalid after this call.\n-\nint PyUnicodeWriter_WriteChar(PyUnicodeWriter *writer, Py_UCS4 ch)\u00b6\nWrite the single Unicode character ch into writer.\nOn success, return\n0\n. On error, set an exception, leave the writer unchanged, and return-1\n.\n-\nint PyUnicodeWriter_WriteUTF8(PyUnicodeWriter *writer, const char *str, Py_ssize_t size)\u00b6\nDecode the string str from UTF-8 in strict mode and write the output into writer.\nsize is the string length in bytes. If size is equal to\n-1\n, callstrlen(str)\nto get the string length.On success, return\n0\n. 
On error, set an exception, leave the writer unchanged, and return-1\n.See also\nPyUnicodeWriter_DecodeUTF8Stateful()\n.\n-\nint PyUnicodeWriter_WriteASCII(PyUnicodeWriter *writer, const char *str, Py_ssize_t size)\u00b6\nWrite the ASCII string str into writer.\nsize is the string length in bytes. If size is equal to\n-1\n, callstrlen(str)\nto get the string length.str must only contain ASCII characters. The behavior is undefined if str contains non-ASCII characters.\nOn success, return\n0\n. On error, set an exception, leave the writer unchanged, and return-1\n.Added in version 3.14.\n-\nint PyUnicodeWriter_WriteWideChar(PyUnicodeWriter *writer, const wchar_t *str, Py_ssize_t size)\u00b6\nWrite the wide string str into writer.\nsize is a number of wide characters. If size is equal to\n-1\n, callwcslen(str)\nto get the string length.On success, return\n0\n. On error, set an exception, leave the writer unchanged, and return-1\n.\n-\nint PyUnicodeWriter_WriteUCS4(PyUnicodeWriter *writer, Py_UCS4 *str, Py_ssize_t size)\u00b6\nWrite the UCS4 string str into writer.\nsize is the number of UCS4 characters.\nOn success, return\n0\n. On error, set an exception, leave the writer unchanged, and return-1\n.\n-\nint PyUnicodeWriter_WriteStr(PyUnicodeWriter *writer, PyObject *obj)\u00b6\nCall\nPyObject_Str()\non obj and write the output into writer.On success, return\n0\n. On error, set an exception, leave the writer unchanged, and return-1\n.\n-\nint PyUnicodeWriter_WriteRepr(PyUnicodeWriter *writer, PyObject *obj)\u00b6\nCall\nPyObject_Repr()\non obj and write the output into writer.On success, return\n0\n. On error, set an exception, leave the writer unchanged, and return-1\n.\n-\nint PyUnicodeWriter_WriteSubstring(PyUnicodeWriter *writer, PyObject *str, Py_ssize_t start, Py_ssize_t end)\u00b6\nWrite the substring\nstr[start:end]\ninto writer.str must be a Python\nstr\nobject. start must be greater than or equal to 0, and less than or equal to end. 
end must be less than or equal to str length.On success, return\n0\n. On error, set an exception, leave the writer unchanged, and return-1\n.\n-\nint PyUnicodeWriter_Format(PyUnicodeWriter *writer, const char *format, ...)\u00b6\nSimilar to\nPyUnicode_FromFormat()\n, but write the output directly into writer.On success, return\n0\n. On error, set an exception, leave the writer unchanged, and return-1\n.\n-\nint PyUnicodeWriter_DecodeUTF8Stateful(PyUnicodeWriter *writer, const char *string, Py_ssize_t length, const char *errors, Py_ssize_t *consumed)\u00b6\nDecode the string string from UTF-8 with the errors error handler and write the output into writer.\nlength is the string length in bytes. If length is equal to\n-1\n, callstrlen(string)\nto get the string length.errors is an error handler name, such as\n\"replace\"\n. If errors isNULL\n, use the strict error handler.If consumed is not\nNULL\n, set *consumed to the number of decoded bytes on success. If consumed isNULL\n, treat trailing incomplete UTF-8 byte sequences as an error.On success, return\n0\n. On error, set an exception, leave the writer unchanged, and return-1\n.See also\nPyUnicodeWriter_WriteUTF8()\n.\nDeprecated API\u00b6\nThe following API is deprecated.\n-\ntype Py_UNICODE\u00b6\nThis is a typedef of\nwchar_t\n, which is a 16-bit type or 32-bit type depending on the platform. Please usewchar_t\ndirectly instead.Changed in version 3.3: In previous versions, this was a 16-bit type or a 32-bit type depending on whether you selected a \u201cnarrow\u201d or \u201cwide\u201d Unicode version of Python at build time.\nDeprecated since version 3.13, will be removed in version 3.15.\n-\nint PyUnicode_READY(PyObject *unicode)\u00b6\nDo nothing and return\n0\n. This API is kept only for backward compatibility, but there are no plans to remove it.Added in version 3.3.\nDeprecated since version 3.10: This API does nothing since Python 3.12. 
Previously, this needed to be called for each string created using the old API (\nPyUnicode_FromUnicode()\nor similar).\n-\nunsigned int PyUnicode_IS_READY(PyObject *unicode)\u00b6\nDo nothing and return\n1\n. This API is kept only for backward compatibility, but there are no plans to remove it.Added in version 3.3.\nDeprecated since version 3.14: This API does nothing since Python 3.12. Previously, this could be called to check if\nPyUnicode_READY()\nis necessary.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 15326} +{"url": "https://docs.python.org/3/extending/newtypes.html", "title": "Defining Extension Types: Assorted Topics", "content": "3. Defining Extension Types: Assorted Topics\u00b6\nThis section aims to give a quick fly-by on the various type methods you can implement and what they do.\nHere is the definition of PyTypeObject\n, with some fields only used in\ndebug builds omitted:\ntypedef struct _typeobject {\nPyObject_VAR_HEAD\nconst char *tp_name; /* For printing, in format \".\" */\nPy_ssize_t tp_basicsize, tp_itemsize; /* For allocation */\n/* Methods to implement standard operations */\ndestructor tp_dealloc;\nPy_ssize_t tp_vectorcall_offset;\ngetattrfunc tp_getattr;\nsetattrfunc tp_setattr;\nPyAsyncMethods *tp_as_async; /* formerly known as tp_compare (Python 2)\nor tp_reserved (Python 3) */\nreprfunc tp_repr;\n/* Method suites for standard classes */\nPyNumberMethods *tp_as_number;\nPySequenceMethods *tp_as_sequence;\nPyMappingMethods *tp_as_mapping;\n/* More standard operations (here for binary compatibility) */\nhashfunc tp_hash;\nternaryfunc tp_call;\nreprfunc tp_str;\ngetattrofunc tp_getattro;\nsetattrofunc tp_setattro;\n/* Functions to access object as input/output buffer */\nPyBufferProcs *tp_as_buffer;\n/* Flags to define presence of optional/expanded features */\nunsigned long tp_flags;\nconst char *tp_doc; /* Documentation string */\n/* Assigned meaning in release 2.0 */\n/* call function for all 
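The `PyUnicodeWriter` workflow described above (create, write pieces, finish) can be sketched at the Python level with `io.StringIO`. This is only a rough analogue for illustration; the mapping in the comments to specific C functions is approximate:

```python
from io import StringIO

# Each write appends to an internal buffer; "finishing" extracts the str.
writer = StringIO()            # roughly PyUnicodeWriter_Create(0)
writer.write("x")              # roughly PyUnicodeWriter_WriteChar
writer.write("abcdef"[1:3])    # roughly PyUnicodeWriter_WriteSubstring
writer.write(repr(3.5))        # roughly PyUnicodeWriter_WriteRepr
result = writer.getvalue()     # roughly PyUnicodeWriter_Finish
writer.close()                 # roughly PyUnicodeWriter_Discard
assert result == "xbc3.5"
```

Unlike `StringIO`, the C writer is single-use: after `PyUnicodeWriter_Finish()` or `PyUnicodeWriter_Discard()` the instance is invalid.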
accessible objects */\ntraverseproc tp_traverse;\n/* delete references to contained objects */\ninquiry tp_clear;\n/* Assigned meaning in release 2.1 */\n/* rich comparisons */\nrichcmpfunc tp_richcompare;\n/* weak reference enabler */\nPy_ssize_t tp_weaklistoffset;\n/* Iterators */\ngetiterfunc tp_iter;\niternextfunc tp_iternext;\n/* Attribute descriptor and subclassing stuff */\nPyMethodDef *tp_methods;\nPyMemberDef *tp_members;\nPyGetSetDef *tp_getset;\n// Strong reference on a heap type, borrowed reference on a static type\nPyTypeObject *tp_base;\nPyObject *tp_dict;\ndescrgetfunc tp_descr_get;\ndescrsetfunc tp_descr_set;\nPy_ssize_t tp_dictoffset;\ninitproc tp_init;\nallocfunc tp_alloc;\nnewfunc tp_new;\nfreefunc tp_free; /* Low-level free-memory routine */\ninquiry tp_is_gc; /* For PyObject_IS_GC */\nPyObject *tp_bases;\nPyObject *tp_mro; /* method resolution order */\nPyObject *tp_cache; /* no longer used */\nvoid *tp_subclasses; /* for static builtin types this is an index */\nPyObject *tp_weaklist; /* not used for static builtin types */\ndestructor tp_del;\n/* Type attribute cache version tag. Added in version 2.6.\n* If zero, the cache is invalid and must be initialized.\n*/\nunsigned int tp_version_tag;\ndestructor tp_finalize;\nvectorcallfunc tp_vectorcall;\n/* bitset of which type-watchers care about this type */\nunsigned char tp_watched;\n/* Number of tp_version_tag values used.\n* Set to _Py_ATTR_CACHE_UNUSED if the attribute cache is\n* disabled for this type (e.g. due to custom MRO entries).\n* Otherwise, limited to MAX_VERSIONS_PER_CLASS (defined elsewhere).\n*/\nuint16_t tp_versions_used;\n} PyTypeObject;\nNow that\u2019s a lot of methods. Don\u2019t worry too much though \u2013 if you have a type you want to define, the chances are very good that you will only implement a handful of these.\nAs you probably expect by now, we\u2019re going to go over this and give more information about the various handlers. 
We won\u2019t go in the order they are defined in the structure, because there is a lot of historical baggage that impacts the ordering of the fields. It\u2019s often easiest to find an example that includes the fields you need and then change the values to suit your new type.\nconst char *tp_name; /* For printing */\nThe name of the type \u2013 as mentioned in the previous chapter, this will appear in various places, almost entirely for diagnostic purposes. Try to choose something that will be helpful in such a situation!\nPy_ssize_t tp_basicsize, tp_itemsize; /* For allocation */\nThese fields tell the runtime how much memory to allocate when new objects of\nthis type are created. Python has some built-in support for variable length\nstructures (think: strings, tuples) which is where the tp_itemsize\nfield\ncomes in. This will be dealt with later.\nconst char *tp_doc;\nHere you can put a string (or its address) that you want returned when the\nPython script references obj.__doc__\nto retrieve the doc string.\nNow we come to the basic type methods \u2013 the ones most extension types will implement.\n3.1. Finalization and De-allocation\u00b6\ndestructor tp_dealloc;\nThis function is called when the reference count of the instance of your type is reduced to zero and the Python interpreter wants to reclaim it. If your type has memory to free or other clean-up to perform, you can put it here. The object itself needs to be freed here as well. 
Here is an example of this function:\nstatic void\nnewdatatype_dealloc(PyObject *op)\n{\nnewdatatypeobject *self = (newdatatypeobject *) op;\nfree(self->obj_UnderlyingDatatypePtr);\nPy_TYPE(self)->tp_free(self);\n}\nIf your type supports garbage collection, the destructor should call\nPyObject_GC_UnTrack()\nbefore clearing any member fields:\nstatic void\nnewdatatype_dealloc(PyObject *op)\n{\nnewdatatypeobject *self = (newdatatypeobject *) op;\nPyObject_GC_UnTrack(op);\nPy_CLEAR(self->other_obj);\n...\nPy_TYPE(self)->tp_free(self);\n}\nOne important requirement of the deallocator function is that it leaves any\npending exceptions alone. This is important since deallocators are frequently\ncalled as the interpreter unwinds the Python stack; when the stack is unwound\ndue to an exception (rather than normal returns), nothing is done to protect the\ndeallocators from seeing that an exception has already been set. Any actions\nwhich a deallocator performs which may cause additional Python code to be\nexecuted may detect that an exception has been set. This can lead to misleading\nerrors from the interpreter. The proper way to protect against this is to save\na pending exception before performing the unsafe action, and restoring it when\ndone. 
This can be done using the PyErr_Fetch()\nand\nPyErr_Restore()\nfunctions:\nstatic void\nmy_dealloc(PyObject *obj)\n{\nMyObject *self = (MyObject *) obj;\nPyObject *cbresult;\nif (self->my_callback != NULL) {\nPyObject *err_type, *err_value, *err_traceback;\n/* This saves the current exception state */\nPyErr_Fetch(&err_type, &err_value, &err_traceback);\ncbresult = PyObject_CallNoArgs(self->my_callback);\nif (cbresult == NULL) {\nPyErr_WriteUnraisable(self->my_callback);\n}\nelse {\nPy_DECREF(cbresult);\n}\n/* This restores the saved exception state */\nPyErr_Restore(err_type, err_value, err_traceback);\nPy_DECREF(self->my_callback);\n}\nPy_TYPE(self)->tp_free(self);\n}\nNote\nThere are limitations to what you can safely do in a deallocator function.\nFirst, if your type supports garbage collection (using tp_traverse\nand/or tp_clear\n), some of the object\u2019s members can have been\ncleared or finalized by the time tp_dealloc\nis called. Second, in\ntp_dealloc\n, your object is in an unstable state: its reference\ncount is equal to zero. Any call to a non-trivial object or API (as in the\nexample above) might end up calling tp_dealloc\nagain, causing a\ndouble free and a crash.\nStarting with Python 3.4, it is recommended not to put any complex\nfinalization code in tp_dealloc\n, and instead use the new\ntp_finalize\ntype method.\nSee also\nPEP 442 explains the new finalization scheme.\n3.2. Object Presentation\u00b6\nIn Python, there are two ways to generate a textual representation of an object:\nthe repr()\nfunction, and the str()\nfunction. (The print()\nfunction just calls str()\n.) These handlers are both optional.\nreprfunc tp_repr;\nreprfunc tp_str;\nThe tp_repr\nhandler should return a string object containing a\nrepresentation of the instance for which it is called. 
Here is a simple\nexample:\nstatic PyObject *\nnewdatatype_repr(PyObject *op)\n{\nnewdatatypeobject *self = (newdatatypeobject *) op;\nreturn PyUnicode_FromFormat(\"Repr-ified_newdatatype{{size:%d}}\",\nself->obj_UnderlyingDatatypePtr->size);\n}\nIf no tp_repr\nhandler is specified, the interpreter will supply a\nrepresentation that uses the type\u2019s tp_name\nand a uniquely identifying\nvalue for the object.\nThe tp_str\nhandler is to str()\nwhat the tp_repr\nhandler\ndescribed above is to repr()\n; that is, it is called when Python code calls\nstr()\non an instance of your object. Its implementation is very similar\nto the tp_repr\nfunction, but the resulting string is intended for human\nconsumption. If tp_str\nis not specified, the tp_repr\nhandler is\nused instead.\nHere is a simple example:\nstatic PyObject *\nnewdatatype_str(PyObject *op)\n{\nnewdatatypeobject *self = (newdatatypeobject *) op;\nreturn PyUnicode_FromFormat(\"Stringified_newdatatype{{size:%d}}\",\nself->obj_UnderlyingDatatypePtr->size);\n}\n3.3. Attribute Management\u00b6\nFor every object which can support attributes, the corresponding type must\nprovide the functions that control how the attributes are resolved. There needs\nto be a function which can retrieve attributes (if any are defined), and another\nto set attributes (if setting attributes is allowed). Removing an attribute is\na special case, for which the new value passed to the handler is NULL\n.\nPython supports two pairs of attribute handlers; a type that supports attributes only needs to implement the functions for one pair. The difference is that one pair takes the name of the attribute as a char*, while the other accepts a PyObject*. Each type can use whichever pair makes more sense for the implementation\u2019s convenience.\ngetattrfunc tp_getattr; /* char * version */\nsetattrfunc tp_setattr;\n/* ... 
*/\ngetattrofunc tp_getattro; /* PyObject * version */\nsetattrofunc tp_setattro;\nIf accessing attributes of an object is always a simple operation (this will be explained shortly), there are generic implementations which can be used to provide the PyObject* version of the attribute management functions. The actual need for type-specific attribute handlers almost completely disappeared starting with Python 2.2, though there are many examples which have not been updated to use some of the new generic mechanism that is available.\n3.3.1. Generic Attribute Management\u00b6\nMost extension types only use simple attributes. So, what makes the attributes simple? There are only a couple of conditions that must be met:\nThe name of the attributes must be known when\nPyType_Ready()\nis called.No special processing is needed to record that an attribute was looked up or set, nor do actions need to be taken based on the value.\nNote that this list does not place any restrictions on the values of the attributes, when the values are computed, or how relevant data is stored.\nWhen PyType_Ready()\nis called, it uses three tables referenced by the\ntype object to create descriptors which are placed in the dictionary of the\ntype object. Each descriptor controls access to one attribute of the instance\nobject. Each of the tables is optional; if all three are NULL\n, instances of\nthe type will only have attributes that are inherited from their base type, and\nshould leave the tp_getattro\nand tp_setattro\nfields NULL\nas\nwell, allowing the base type to handle attributes.\nThe tables are declared as three fields of the type object:\nstruct PyMethodDef *tp_methods;\nstruct PyMemberDef *tp_members;\nstruct PyGetSetDef *tp_getset;\nIf tp_methods\nis not NULL\n, it must refer to an array of\nPyMethodDef\nstructures. 
Each entry in the table is an instance of this\nstructure:\ntypedef struct PyMethodDef {\nconst char *ml_name; /* method name */\nPyCFunction ml_meth; /* implementation function */\nint ml_flags; /* flags */\nconst char *ml_doc; /* docstring */\n} PyMethodDef;\nOne entry should be defined for each method provided by the type; no entries are\nneeded for methods inherited from a base type. One additional entry is needed\nat the end; it is a sentinel that marks the end of the array. The\nml_name\nfield of the sentinel must be NULL\n.\nThe second table is used to define attributes which map directly to data stored in the instance. A variety of primitive C types are supported, and access may be read-only or read-write. The structures in the table are defined as:\ntypedef struct PyMemberDef {\nconst char *name;\nint type;\nint offset;\nint flags;\nconst char *doc;\n} PyMemberDef;\nFor each entry in the table, a descriptor will be constructed and added to the\ntype which will be able to extract a value from the instance structure. The\ntype\nfield should contain a type code like Py_T_INT\nor\nPy_T_DOUBLE\n; the value will be used to determine how to\nconvert Python values to and from C values. The flags\nfield is used to\nstore flags which control how the attribute can be accessed: you can set it to\nPy_READONLY\nto prevent Python code from setting it.\nAn interesting advantage of using the tp_members\ntable to build\ndescriptors that are used at runtime is that any attribute defined this way can\nhave an associated doc string simply by providing the text in the table. An\napplication can use the introspection API to retrieve the descriptor from the\nclass object, and get the doc string using its __doc__\nattribute.\nAs with the tp_methods\ntable, a sentinel entry with a ml_name\nvalue\nof NULL\nis required.\n3.3.2. 
Type-specific Attribute Management\u00b6\nFor simplicity, only the char* version will be demonstrated here; the type of the name parameter is the only difference between the char* and PyObject* flavors of the interface. This example effectively does the same thing as the generic example above, but does not use the generic support added in Python 2.2. It explains how the handler functions are called, so that if you do need to extend their functionality, you\u2019ll understand what needs to be done.\nThe tp_getattr\nhandler is called when the object requires an attribute\nlook-up. It is called in the same situations where the __getattr__()\nmethod of a class would be called.\nHere is an example:\nstatic PyObject *\nnewdatatype_getattr(PyObject *op, char *name)\n{\nnewdatatypeobject *self = (newdatatypeobject *) op;\nif (strcmp(name, \"data\") == 0) {\nreturn PyLong_FromLong(self->data);\n}\nPyErr_Format(PyExc_AttributeError,\n\"'%.100s' object has no attribute '%.400s'\",\nPy_TYPE(self)->tp_name, name);\nreturn NULL;\n}\nThe tp_setattr\nhandler is called when the __setattr__()\nor\n__delattr__()\nmethod of a class instance would be called. When an\nattribute should be deleted, the third parameter will be NULL\n. Here is an\nexample that simply raises an exception; if this were really all you wanted, the\ntp_setattr\nhandler should be set to NULL\n.\nstatic int\nnewdatatype_setattr(PyObject *op, char *name, PyObject *v)\n{\nPyErr_Format(PyExc_RuntimeError, \"Read-only attribute: %s\", name);\nreturn -1;\n}\n3.4. Object Comparison\u00b6\nrichcmpfunc tp_richcompare;\nThe tp_richcompare\nhandler is called when comparisons are needed. It is\nanalogous to the rich comparison methods, like\n__lt__()\n, and also called by PyObject_RichCompare()\nand\nPyObject_RichCompareBool()\n.\nThis function is called with two Python objects and the operator as arguments,\nwhere the operator is one of Py_EQ\n, Py_NE\n, Py_LE\n, Py_GE\n,\nPy_LT\nor Py_GT\n. 
It should compare the two objects with respect to the\nspecified operator and return Py_True\nor Py_False\nif the comparison is\nsuccessful, Py_NotImplemented\nto indicate that comparison is not\nimplemented and the other object\u2019s comparison method should be tried, or NULL\nif an exception was set.\nHere is a sample implementation, for a datatype that is considered equal if the size of an internal pointer is equal:\nstatic PyObject *\nnewdatatype_richcmp(PyObject *lhs, PyObject *rhs, int op)\n{\nnewdatatypeobject *obj1 = (newdatatypeobject *) lhs;\nnewdatatypeobject *obj2 = (newdatatypeobject *) rhs;\nPyObject *result;\nint c, size1, size2;\n/* code to make sure that both arguments are of type\nnewdatatype omitted */\nsize1 = obj1->obj_UnderlyingDatatypePtr->size;\nsize2 = obj2->obj_UnderlyingDatatypePtr->size;\nswitch (op) {\ncase Py_LT: c = size1 < size2; break;\ncase Py_LE: c = size1 <= size2; break;\ncase Py_EQ: c = size1 == size2; break;\ncase Py_NE: c = size1 != size2; break;\ncase Py_GT: c = size1 > size2; break;\ncase Py_GE: c = size1 >= size2; break;\n}\nresult = c ? Py_True : Py_False;\nreturn Py_NewRef(result);\n}\n3.5. Abstract Protocol Support\u00b6\nPython supports a variety of abstract \u2018protocols;\u2019 the specific interfaces provided to use these interfaces are documented in Abstract Objects Layer.\nA number of these abstract interfaces were defined early in the development of\nthe Python implementation. In particular, the number, mapping, and sequence\nprotocols have been part of Python since the beginning. Other protocols have\nbeen added over time. For protocols which depend on several handler routines\nfrom the type implementation, the older protocols have been defined as optional\nblocks of handlers referenced by the type object. For newer protocols there are\nadditional slots in the main type object, with a flag bit being set to indicate\nthat the slots are present and should be checked by the interpreter. 
(The flag\nbit does not indicate that the slot values are non-NULL\n. The flag may be set\nto indicate the presence of a slot, but a slot may still be unfilled.)\nPyNumberMethods *tp_as_number;\nPySequenceMethods *tp_as_sequence;\nPyMappingMethods *tp_as_mapping;\nIf you wish your object to be able to act like a number, a sequence, or a\nmapping object, then you place the address of a structure that implements the C\ntype PyNumberMethods\n, PySequenceMethods\n, or\nPyMappingMethods\n, respectively. It is up to you to fill in this\nstructure with appropriate values. You can find examples of the use of each of\nthese in the Objects\ndirectory of the Python source distribution.\nhashfunc tp_hash;\nThis function, if you choose to provide it, should return a hash number for an instance of your data type. Here is a simple example:\nstatic Py_hash_t\nnewdatatype_hash(PyObject *op)\n{\nnewdatatypeobject *self = (newdatatypeobject *) op;\nPy_hash_t result;\nresult = self->some_size + 32767 * self->some_number;\nif (result == -1) {\nresult = -2;\n}\nreturn result;\n}\nPy_hash_t\nis a signed integer type with a platform-varying width.\nReturning -1\nfrom tp_hash\nindicates an error,\nwhich is why you should be careful to avoid returning it when hash computation\nis successful, as seen above.\nternaryfunc tp_call;\nThis function is called when an instance of your data type is \u201ccalled\u201d, for\nexample, if obj1\nis an instance of your data type and the Python script\ncontains obj1('hello')\n, the tp_call\nhandler is invoked.\nThis function takes three arguments:\nself is the instance of the data type which is the subject of the call. If the call is\nobj1('hello')\n, then self isobj1\n.args is a tuple containing the arguments to the call. You can use\nPyArg_ParseTuple()\nto extract the arguments.kwds is a dictionary of keyword arguments that were passed. If this is non-\nNULL\nand you support keyword arguments, usePyArg_ParseTupleAndKeywords()\nto extract the arguments. 
If you do not want to support keyword arguments and this is non-NULL\n, raise aTypeError\nwith a message saying that keyword arguments are not supported.\nHere is a toy tp_call\nimplementation:\nstatic PyObject *\nnewdatatype_call(PyObject *op, PyObject *args, PyObject *kwds)\n{\nnewdatatypeobject *self = (newdatatypeobject *) op;\nPyObject *result;\nconst char *arg1;\nconst char *arg2;\nconst char *arg3;\nif (!PyArg_ParseTuple(args, \"sss:call\", &arg1, &arg2, &arg3)) {\nreturn NULL;\n}\nresult = PyUnicode_FromFormat(\n\"Returning -- value: [%d] arg1: [%s] arg2: [%s] arg3: [%s]\\n\",\nself->obj_UnderlyingDatatypePtr->size,\narg1, arg2, arg3);\nreturn result;\n}\n/* Iterators */\ngetiterfunc tp_iter;\niternextfunc tp_iternext;\nThese functions provide support for the iterator protocol. Both handlers\ntake exactly one parameter, the instance for which they are being called,\nand return a new reference. In the case of an error, they should set an\nexception and return NULL\n. tp_iter\ncorresponds\nto the Python __iter__()\nmethod, while tp_iternext\ncorresponds to the Python __next__()\nmethod.\nAny iterable object must implement the tp_iter\nhandler, which must return an iterator object. Here the same guidelines\napply as for Python classes:\nFor collections (such as lists and tuples) which can support multiple independent iterators, a new iterator should be created and returned by each call to\ntp_iter\n.Objects which can only be iterated over once (usually due to side effects of iteration, such as file objects) can implement\ntp_iter\nby returning a new reference to themselves \u2013 and should also therefore implement thetp_iternext\nhandler.\nAny iterator object should implement both tp_iter\nand tp_iternext\n. An iterator\u2019s\ntp_iter\nhandler should return a new reference\nto the iterator. 
Its tp_iternext\nhandler should\nreturn a new reference to the next object in the iteration, if there is one.\nIf the iteration has reached the end, tp_iternext\nmay return NULL\nwithout setting an exception, or it may set\nStopIteration\nin addition to returning NULL\n; avoiding\nthe exception can yield slightly better performance. If an actual error\noccurs, tp_iternext\nshould always set an exception\nand return NULL\n.\n3.6. Weak Reference Support\u00b6\nOne of the goals of Python\u2019s weak reference implementation is to allow any type to participate in the weak reference mechanism without incurring the overhead on performance-critical objects (such as numbers).\nSee also\nDocumentation for the weakref\nmodule.\nFor an object to be weakly referenceable, the extension type must set the\nPy_TPFLAGS_MANAGED_WEAKREF\nbit of the tp_flags\nfield. The legacy tp_weaklistoffset\nfield should\nbe left as zero.\nConcretely, here is how the statically declared type object would look:\nstatic PyTypeObject TrivialType = {\nPyVarObject_HEAD_INIT(NULL, 0)\n/* ... other members omitted for brevity ... */\n.tp_flags = Py_TPFLAGS_MANAGED_WEAKREF | ...,\n};\nThe only further addition is that tp_dealloc\nneeds to clear any weak\nreferences (by calling PyObject_ClearWeakRefs()\n):\nstatic void\nTrivial_dealloc(PyObject *op)\n{\n/* Clear weakrefs first before calling any destructors */\nPyObject_ClearWeakRefs(op);\n/* ... remainder of destruction code omitted for brevity ... */\nPy_TYPE(op)->tp_free(op);\n}\n3.7. More Suggestions\u00b6\nIn order to learn how to implement any specific method for your new data type,\nget the CPython source code. Go to the Objects\ndirectory,\nthen search the C source files for tp_\nplus the function you want\n(for example, tp_richcompare\n). You will find examples of the function\nyou want to implement.\nWhen you need to verify that an object is a concrete instance of the type you\nare implementing, use the PyObject_TypeCheck()\nfunction. 
A sample of\nits use might be something like the following:\nif (!PyObject_TypeCheck(some_object, &MyType)) {\nPyErr_SetString(PyExc_TypeError, \"arg #1 not a mything\");\nreturn NULL;\n}\nSee also\n- Download CPython source releases.\n- The CPython project on GitHub, where the CPython source code is developed.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 5673} +{"url": "https://docs.python.org/3/library/email.errors.html", "title": ": Exception and Defect classes", "content": "email.errors\n: Exception and Defect classes\u00b6\nSource code: Lib/email/errors.py\nThe following exception classes are defined in the email.errors\nmodule:\n- exception email.errors.MessageError\u00b6\nThis is the base class for all exceptions that the\nemail\npackage can raise. It is derived from the standardException\nclass and defines no additional methods.\n- exception email.errors.MessageParseError\u00b6\nThis is the base class for exceptions raised by the\nParser\nclass. It is derived fromMessageError\n. This class is also used internally by the parser used byheaderregistry\n.\n- exception email.errors.HeaderParseError\u00b6\nRaised under some error conditions when parsing the RFC 5322 headers of a message, this class is derived from\nMessageParseError\n. 
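The `tp_richcompare` convention sketched earlier (compare by size, hand back `Py_NotImplemented` for unknown operand types) has a direct Python-level counterpart in the rich comparison methods. A minimal sketch, with `Sized` as a hypothetical stand-in for the C `newdatatype`:

```python
class Sized:
    """Compare instances by an internal 'size', like newdatatype_richcmp."""
    def __init__(self, size):
        self.size = size

    def __eq__(self, other):
        if not isinstance(other, Sized):
            return NotImplemented  # let the interpreter try the other side
        return self.size == other.size

    def __lt__(self, other):
        if not isinstance(other, Sized):
            return NotImplemented
        return self.size < other.size

assert Sized(3) == Sized(3)
assert Sized(2) < Sized(5)
# With NotImplemented from both operands, == falls back to identity: False.
assert (Sized(1) == "x") is False
```

Returning `NotImplemented` (rather than raising) is what allows the reflected operation on the other operand to be tried, exactly as described for `Py_NotImplemented` above.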
Theset_boundary()\nmethod will raise this error if the content type is unknown when the method is called.Header\nmay raise this error for certain base64 decoding errors, and when an attempt is made to create a header that appears to contain an embedded header (that is, there is what is supposed to be a continuation line that has no leading whitespace and looks like a header).\n- exception email.errors.BoundaryError\u00b6\nDeprecated and no longer used.\n- exception email.errors.MultipartConversionError\u00b6\nRaised if the\nattach()\nmethod is called on an instance of a class derived fromMIMENonMultipart\n(e.g.MIMEImage\n).MultipartConversionError\nmultiply inherits fromMessageError\nand the built-inTypeError\n.\n- exception email.errors.HeaderWriteError\u00b6\nRaised when an error occurs when the\ngenerator\noutputs headers.\n- exception email.errors.MessageDefect\u00b6\nThis is the base class for all defects found when parsing email messages. It is derived from\nValueError\n.\n- exception email.errors.HeaderDefect\u00b6\nThis is the base class for all defects found when parsing email headers. It is derived from\nMessageDefect\n.\nHere is the list of the defects that the FeedParser\ncan find while parsing messages. 
Note that the defects are added to the message\nwhere the problem was found, so for example, if a message nested inside a\nmultipart/alternative had a malformed header, that nested message\nobject would have a defect, but the containing messages would not.\nAll defect classes are subclassed from email.errors.MessageDefect\n.\n- exception email.errors.NoBoundaryInMultipartDefect\u00b6\nA message claimed to be a multipart, but had no boundary parameter.\n- exception email.errors.StartBoundaryNotFoundDefect\u00b6\nThe start boundary claimed in the Content-Type header was never found.\n- exception email.errors.CloseBoundaryNotFoundDefect\u00b6\nA start boundary was found, but no corresponding close boundary was ever found.\nAdded in version 3.3.\n- exception email.errors.FirstHeaderLineIsContinuationDefect\u00b6\nThe message had a continuation line as its first header line.\n- exception email.errors.MisplacedEnvelopeHeaderDefect\u00b6\nA \u201cUnix From\u201d header was found in the middle of a header block.\n- exception email.errors.MissingHeaderBodySeparatorDefect\u00b6\nA line was found while parsing headers that had no leading white space but contained no \u2018:\u2019. Parsing continues assuming that the line represents the first line of the body.\nAdded in version 3.3.\n- exception email.errors.MalformedHeaderDefect\u00b6\nA header was found that was missing a colon, or was otherwise malformed.\nDeprecated since version 3.3: This defect has not been used for several Python versions.\n- exception email.errors.MultipartInvariantViolationDefect\u00b6\nA message claimed to be a multipart, but no subparts were found. Note that when a message has this defect, its\nis_multipart()\nmethod may returnFalse\neven though its content type claims to be multipart.\n- exception email.errors.InvalidBase64PaddingDefect\u00b6\nWhen decoding a block of base64 encoded bytes, the padding was not correct. 
Enough padding is added to perform the decode, but the resulting decoded bytes may be invalid.\n- exception email.errors.InvalidBase64CharactersDefect\u00b6\nWhen decoding a block of base64 encoded bytes, characters outside the base64 alphabet were encountered. The characters are ignored, but the resulting decoded bytes may be invalid.\n- exception email.errors.InvalidBase64LengthDefect\u00b6\nWhen decoding a block of base64 encoded bytes, the number of non-padding base64 characters was invalid (1 more than a multiple of 4). The encoded block was kept as-is.\n- exception email.errors.InvalidDateDefect\u00b6\nWhen decoding an invalid or unparsable date field. The original value is kept as-is.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1106} +{"url": "https://docs.python.org/3/reference/toplevel_components.html", "title": "Top-level components", "content": "9. Top-level components\u00b6\nThe Python interpreter can get its input from a number of sources: from a script passed to it as standard input or as program argument, typed in interactively, from a module source file, etc. This chapter gives the syntax used in these cases.\n9.1. Complete Python programs\u00b6\nWhile a language specification need not prescribe how the language interpreter\nis invoked, it is useful to have a notion of a complete Python program. A\ncomplete Python program is executed in a minimally initialized environment: all\nbuilt-in and standard modules are available, but none have been initialized,\nexcept for sys\n(various system services), builtins\n(built-in\nfunctions, exceptions and None\n) and __main__\n. 
The latter is used to\nprovide the local and global namespace for execution of the complete program.\nThe syntax for a complete Python program is that for file input, described in the next section.\nThe interpreter may also be invoked in interactive mode; in this case, it does\nnot read and execute a complete program but reads and executes one statement\n(possibly compound) at a time. The initial environment is identical to that of\na complete program; each statement is executed in the namespace of\n__main__\n.\nA complete program can be passed to the interpreter\nin three forms: with the -c\nstring command line option, as a file\npassed as the first command line argument, or as standard input. If the file\nor standard input is a tty device, the interpreter enters interactive mode;\notherwise, it executes the file as a complete program.\n9.2. File input\u00b6\nAll input read from non-interactive files has the same form:\nfile_input: (NEWLINE | statement\n)* ENDMARKER\nThis syntax is used in the following situations:\nwhen parsing a complete Python program (from a file or from a string);\nwhen parsing a module;\nwhen parsing a string passed to the\nexec()\nfunction;\n9.3. Interactive input\u00b6\nInput in interactive mode is parsed using the following grammar:\ninteractive_input: [stmt_list\n] NEWLINE |compound_stmt\nNEWLINE | ENDMARKER\nNote that a (top-level) compound statement must be followed by a blank line in interactive mode; this is needed to help the parser detect the end of the input.\n9.4. Expression input\u00b6\neval()\nis used for expression input. It ignores leading whitespace. 
The\nstring argument to eval()\nmust have the following form:\neval_input: expression_list\nNEWLINE* ENDMARKER", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 598}
+{"url": "https://docs.python.org/3/howto/argparse-optparse.html", "title": "Migrating optparse code to argparse", "content": "Migrating optparse\ncode to argparse\n\u00b6\nThe argparse\nmodule offers several higher level features not natively\nprovided by the optparse\nmodule, including:\nHandling positional arguments.\nSupporting subcommands.\nAllowing alternative option prefixes like\n+\nand /\n.\nHandling zero-or-more and one-or-more style arguments.\nProducing more informative usage messages.\nProviding a much simpler interface for custom\ntype\nand action\n.\nOriginally, the argparse\nmodule attempted to maintain compatibility\nwith optparse\n. However, the fundamental design differences between\nsupporting declarative command line option processing (while leaving positional\nargument processing to application code), and supporting both named options\nand positional arguments in the declarative interface mean that the\nAPI has diverged from that of optparse\nover time.\nAs described in Choosing an argument parsing library, applications that are\ncurrently using optparse\nand are happy with the way it works can\njust continue to use optparse\n.\nApplication developers that are considering migrating should also review the list of intrinsic behavioural differences described in that section before deciding whether or not migration is desirable.\nFor applications that do choose to migrate from optparse\nto argparse\n,\nthe following suggestions should be helpful:\nReplace all\noptparse.OptionParser.add_option()\ncalls with ArgumentParser.add_argument()\ncalls.\nReplace\n(options, args) = parser.parse_args()\nwith args = parser.parse_args()\nand add additional ArgumentParser.add_argument()\ncalls for the positional arguments. 
Keep in mind that what was previously called options\n, now in the argparse\ncontext is called args\n.\nReplace\noptparse.OptionParser.disable_interspersed_args()\nby using parse_intermixed_args()\ninstead of parse_args()\n.\nReplace callback actions and the\ncallback_*\nkeyword arguments with type\nor action\narguments.\nReplace string names for\ntype\nkeyword arguments with the corresponding type objects (e.g. int, float, complex, etc.).\nReplace\noptparse.Values\nwith Namespace\nand optparse.OptionError\nand optparse.OptionValueError\nwith ArgumentError\n.\nReplace strings with implicit arguments such as\n%default\nor %prog\nwith the standard Python syntax to use dictionaries to format strings, that is, %(default)s\nand %(prog)s\n.\nReplace the OptionParser constructor\nversion\nargument with a call to parser.add_argument('--version', action='version', version='<the version>')\n.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 603}
+{"url": "https://docs.python.org/3/tutorial/introduction.html", "title": "An Informal Introduction to Python", "content": "3. An Informal Introduction to Python\u00b6\nIn the following examples, input and output are distinguished by the presence or absence of prompts (>>> and \u2026): to repeat the example, you must type everything after the prompt, when the prompt appears; lines that do not begin with a prompt are output from the interpreter. Note that a secondary prompt on a line by itself in an example means you must type a blank line; this is used to end a multi-line command.\nYou can use the \u201cCopy\u201d button (it appears in the upper-right corner when hovering over or tapping a code example), which strips prompts and omits output, to copy and paste the input lines into your interpreter.\nMany of the examples in this manual, even those entered at the interactive\nprompt, include comments. Comments in Python start with the hash character,\n#\n, and extend to the end of the physical line. 
A comment may appear at the\nstart of a line or following whitespace or code, but not within a string\nliteral. A hash character within a string literal is just a hash character.\nSince comments are to clarify code and are not interpreted by Python, they may\nbe omitted when typing in examples.\nSome examples:\n# this is the first comment\nspam = 1 # and this is the second comment\n# ... and now a third!\ntext = \"# This is not a comment because it's inside quotes.\"\n3.1. Using Python as a Calculator\u00b6\nLet\u2019s try some simple Python commands. Start the interpreter and wait for the\nprimary prompt, >>>\n. (It shouldn\u2019t take long.)\n3.1.1. Numbers\u00b6\nThe interpreter acts as a simple calculator: you can type an expression into it\nand it will write the value. Expression syntax is straightforward: the\noperators +\n, -\n, *\nand /\ncan be used to perform\narithmetic; parentheses (()\n) can be used for grouping.\nFor example:\n>>> 2 + 2\n4\n>>> 50 - 5*6\n20\n>>> (50 - 5*6) / 4\n5.0\n>>> 8 / 5 # division always returns a floating-point number\n1.6\nThe integer numbers (e.g. 2\n, 4\n, 20\n) have type int\n,\nthe ones with a fractional part (e.g. 5.0\n, 1.6\n) have type\nfloat\n. We will see more about numeric types later in the tutorial.\nDivision (/\n) always returns a float. To do floor division and\nget an integer result you can use the //\noperator; to calculate\nthe remainder you can use %\n:\n>>> 17 / 3 # classic division returns a float\n5.666666666666667\n>>>\n>>> 17 // 3 # floor division discards the fractional part\n5\n>>> 17 % 3 # the % operator returns the remainder of the division\n2\n>>> 5 * 3 + 2 # floored quotient * divisor + remainder\n17\nWith Python, it is possible to use the **\noperator to calculate powers [1]:\n>>> 5 ** 2 # 5 squared\n25\n>>> 2 ** 7 # 2 to the power of 7\n128\nThe equal sign (=\n) is used to assign a value to a variable. 
Afterwards, no\nresult is displayed before the next interactive prompt:\n>>> width = 20\n>>> height = 5 * 9\n>>> width * height\n900\nIf a variable is not \u201cdefined\u201d (assigned a value), trying to use it will give you an error:\n>>> n # try to access an undefined variable\nTraceback (most recent call last):\nFile \"\", line 1, in \nNameError: name 'n' is not defined\nThere is full support for floating point; operators with mixed type operands convert the integer operand to floating point:\n>>> 4 * 3.75 - 1\n14.0\nIn interactive mode, the last printed expression is assigned to the variable\n_\n. This means that when you are using Python as a desk calculator, it is\nsomewhat easier to continue calculations, for example:\n>>> tax = 12.5 / 100\n>>> price = 100.50\n>>> price * tax\n12.5625\n>>> price + _\n113.0625\n>>> round(_, 2)\n113.06\nThis variable should be treated as read-only by the user. Don\u2019t explicitly assign a value to it \u2014 you would create an independent local variable with the same name masking the built-in variable with its magic behavior.\nIn addition to int\nand float\n, Python supports other types of\nnumbers, such as Decimal\nand Fraction\n.\nPython also has built-in support for complex numbers,\nand uses the j\nor J\nsuffix to indicate the imaginary part\n(e.g. 3+5j\n).\n3.1.2. Text\u00b6\nPython can manipulate text (represented by type str\n, so-called\n\u201cstrings\u201d) as well as numbers. This includes characters \u201c!\n\u201d, words\n\u201crabbit\n\u201d, names \u201cParis\n\u201d, sentences \u201cGot your back.\n\u201d, etc.\n\u201cYay! :)\n\u201d. They can be enclosed in single quotes ('...'\n) or double\nquotes (\"...\"\n) with the same result [2].\n>>> 'spam eggs' # single quotes\n'spam eggs'\n>>> \"Paris rabbit got your back :)! Yay!\" # double quotes\n'Paris rabbit got your back :)! 
Yay!'\n>>> '1975' # digits and numerals enclosed in quotes are also strings\n'1975'\nTo quote a quote, we need to \u201cescape\u201d it, by preceding it with \\\n.\nAlternatively, we can use the other type of quotation marks:\n>>> 'doesn\\'t' # use \\' to escape the single quote...\n\"doesn't\"\n>>> \"doesn't\" # ...or use double quotes instead\n\"doesn't\"\n>>> '\"Yes,\" they said.'\n'\"Yes,\" they said.'\n>>> \"\\\"Yes,\\\" they said.\"\n'\"Yes,\" they said.'\n>>> '\"Isn\\'t,\" they said.'\n'\"Isn\\'t,\" they said.'\nIn the Python shell, the string definition and output string can look\ndifferent. The print()\nfunction produces a more readable output, by\nomitting the enclosing quotes and by printing escaped and special characters:\n>>> s = 'First line.\\nSecond line.' # \\n means newline\n>>> s # without print(), special characters are included in the string\n'First line.\\nSecond line.'\n>>> print(s) # with print(), special characters are interpreted, so \\n produces new line\nFirst line.\nSecond line.\nIf you don\u2019t want characters prefaced by \\\nto be interpreted as\nspecial characters, you can use raw strings by adding an r\nbefore\nthe first quote:\n>>> print('C:\\some\\name') # here \\n means newline!\nC:\\some\name\n>>> print(r'C:\\some\\name') # note the r before the quote\nC:\\some\\name\nThere is one subtle aspect to raw strings: a raw string may not end in\nan odd number of \\\ncharacters; see\nthe FAQ entry for more information\nand workarounds.\nString literals can span multiple lines. One way is using triple-quotes:\n\"\"\"...\"\"\"\nor '''...'''\n. End-of-line characters are automatically\nincluded in the string, but it\u2019s possible to prevent this by adding a \\\nat\nthe end of the line. In the following example, the initial newline is not\nincluded:\n>>> print(\"\"\"\\\n... Usage: thingy [OPTIONS]\n... -h Display this usage message\n... -H hostname Hostname to connect to\n... 
\"\"\")\nUsage: thingy [OPTIONS]\n-h Display this usage message\n-H hostname Hostname to connect to\n>>>\nStrings can be concatenated (glued together) with the +\noperator, and\nrepeated with *\n:\n>>> # 3 times 'un', followed by 'ium'\n>>> 3 * 'un' + 'ium'\n'unununium'\nTwo or more string literals (i.e. the ones enclosed between quotes) next to each other are automatically concatenated.\n>>> 'Py' 'thon'\n'Python'\nThis feature is particularly useful when you want to break long strings:\n>>> text = ('Put several strings within parentheses '\n... 'to have them joined together.')\n>>> text\n'Put several strings within parentheses to have them joined together.'\nThis only works with two literals though, not with variables or expressions:\n>>> prefix = 'Py'\n>>> prefix 'thon' # can't concatenate a variable and a string literal\nFile \"\", line 1\nprefix 'thon'\n^^^^^^\nSyntaxError: invalid syntax\n>>> ('un' * 3) 'ium'\nFile \"\", line 1\n('un' * 3) 'ium'\n^^^^^\nSyntaxError: invalid syntax\nIf you want to concatenate variables or a variable and a literal, use +\n:\n>>> prefix + 'thon'\n'Python'\nStrings can be indexed (subscripted), with the first character having index 0. There is no separate character type; a character is simply a string of size one:\n>>> word = 'Python'\n>>> word[0] # character in position 0\n'P'\n>>> word[5] # character in position 5\n'n'\nIndices may also be negative numbers, to start counting from the right:\n>>> word[-1] # last character\n'n'\n>>> word[-2] # second-last character\n'o'\n>>> word[-6]\n'P'\nNote that since -0 is the same as 0, negative indices start from -1.\nIn addition to indexing, slicing is also supported. 
While indexing is used to obtain individual characters, slicing allows you to obtain a substring:\n>>> word[0:2] # characters from position 0 (included) to 2 (excluded)\n'Py'\n>>> word[2:5] # characters from position 2 (included) to 5 (excluded)\n'tho'\nSlice indices have useful defaults; an omitted first index defaults to zero, an omitted second index defaults to the size of the string being sliced.\n>>> word[:2] # character from the beginning to position 2 (excluded)\n'Py'\n>>> word[4:] # characters from position 4 (included) to the end\n'on'\n>>> word[-2:] # characters from the second-last (included) to the end\n'on'\nNote how the start is always included, and the end always excluded. This\nmakes sure that s[:i] + s[i:]\nis always equal to s\n:\n>>> word[:2] + word[2:]\n'Python'\n>>> word[:4] + word[4:]\n'Python'\nOne way to remember how slices work is to think of the indices as pointing between characters, with the left edge of the first character numbered 0. Then the right edge of the last character of a string of n characters has index n, for example:\n+---+---+---+---+---+---+\n| P | y | t | h | o | n |\n+---+---+---+---+---+---+\n0 1 2 3 4 5 6\n-6 -5 -4 -3 -2 -1\nThe first row of numbers gives the position of the indices 0\u20266 in the string; the second row gives the corresponding negative indices. The slice from i to j consists of all characters between the edges labeled i and j, respectively.\nFor non-negative indices, the length of a slice is the difference of the\nindices, if both are within bounds. For example, the length of word[1:3]\nis\n2.\nAttempting to use an index that is too large will result in an error:\n>>> word[42] # the word only has 6 characters\nTraceback (most recent call last):\nFile \"\", line 1, in \nIndexError: string index out of range\nHowever, out of range slice indexes are handled gracefully when used for slicing:\n>>> word[4:42]\n'on'\n>>> word[42:]\n''\nPython strings cannot be changed \u2014 they are immutable. 
Therefore, assigning to an indexed position in the string results in an error:\n>>> word[0] = 'J'\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: 'str' object does not support item assignment\n>>> word[2:] = 'py'\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: 'str' object does not support item assignment\nIf you need a different string, you should create a new one:\n>>> 'J' + word[1:]\n'Jython'\n>>> word[:2] + 'py'\n'Pypy'\nThe built-in function len()\nreturns the length of a string:\n>>> s = 'supercalifragilisticexpialidocious'\n>>> len(s)\n34\nSee also\n- Text Sequence Type \u2014 str\nStrings are examples of sequence types, and support the common operations supported by such types.\n- String Methods\nStrings support a large number of methods for basic transformations and searching.\n- f-strings\nString literals that have embedded expressions.\n- Format String Syntax\nInformation about string formatting with\nstr.format()\n.- printf-style String Formatting\nThe old formatting operations invoked when strings are the left operand of the\n%\noperator are described in more detail here.\n3.1.3. Lists\u00b6\nPython knows a number of compound data types, used to group together other values. The most versatile is the list, which can be written as a list of comma-separated values (items) between square brackets. Lists might contain items of different types, but usually the items all have the same type.\n>>> squares = [1, 4, 9, 16, 25]\n>>> squares\n[1, 4, 9, 16, 25]\nLike strings (and all other built-in sequence types), lists can be indexed and sliced:\n>>> squares[0] # indexing returns the item\n1\n>>> squares[-1]\n25\n>>> squares[-3:] # slicing returns a new list\n[9, 16, 25]\nLists also support operations like concatenation:\n>>> squares + [36, 49, 64, 81, 100]\n[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]\nUnlike strings, which are immutable, lists are a mutable type, i.e. 
it is possible to change their content:\n>>> cubes = [1, 8, 27, 65, 125] # something's wrong here\n>>> 4 ** 3 # the cube of 4 is 64, not 65!\n64\n>>> cubes[3] = 64 # replace the wrong value\n>>> cubes\n[1, 8, 27, 64, 125]\nYou can also add new items at the end of the list, by using\nthe list.append()\nmethod (we will see more about methods later):\n>>> cubes.append(216) # add the cube of 6\n>>> cubes.append(7 ** 3) # and the cube of 7\n>>> cubes\n[1, 8, 27, 64, 125, 216, 343]\nSimple assignment in Python never copies data. When you assign a list to a variable, the variable refers to the existing list. Any changes you make to the list through one variable will be seen through all other variables that refer to it.:\n>>> rgb = [\"Red\", \"Green\", \"Blue\"]\n>>> rgba = rgb\n>>> id(rgb) == id(rgba) # they reference the same object\nTrue\n>>> rgba.append(\"Alph\")\n>>> rgb\n[\"Red\", \"Green\", \"Blue\", \"Alph\"]\nAll slice operations return a new list containing the requested elements. This means that the following slice returns a shallow copy of the list:\n>>> correct_rgba = rgba[:]\n>>> correct_rgba[-1] = \"Alpha\"\n>>> correct_rgba\n[\"Red\", \"Green\", \"Blue\", \"Alpha\"]\n>>> rgba\n[\"Red\", \"Green\", \"Blue\", \"Alph\"]\nAssignment to slices is also possible, and this can even change the size of the list or clear it entirely:\n>>> letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g']\n>>> letters\n['a', 'b', 'c', 'd', 'e', 'f', 'g']\n>>> # replace some values\n>>> letters[2:5] = ['C', 'D', 'E']\n>>> letters\n['a', 'b', 'C', 'D', 'E', 'f', 'g']\n>>> # now remove them\n>>> letters[2:5] = []\n>>> letters\n['a', 'b', 'f', 'g']\n>>> # clear the list by replacing all the elements with an empty list\n>>> letters[:] = []\n>>> letters\n[]\nThe built-in function len()\nalso applies to lists:\n>>> letters = ['a', 'b', 'c', 'd']\n>>> len(letters)\n4\nIt is possible to nest lists (create lists containing other lists), for example:\n>>> a = ['a', 'b', 'c']\n>>> n = [1, 2, 3]\n>>> x 
= [a, n]\n>>> x\n[['a', 'b', 'c'], [1, 2, 3]]\n>>> x[0]\n['a', 'b', 'c']\n>>> x[0][1]\n'b'\n3.2. First Steps Towards Programming\u00b6\nOf course, we can use Python for more complicated tasks than adding two and two together. For instance, we can write an initial sub-sequence of the Fibonacci series as follows:\n>>> # Fibonacci series:\n>>> # the sum of two elements defines the next\n>>> a, b = 0, 1\n>>> while a < 10:\n... print(a)\n... a, b = b, a+b\n...\n0\n1\n1\n2\n3\n5\n8\nThis example introduces several new features.\nThe first line contains a multiple assignment: the variables\na\nandb\nsimultaneously get the new values 0 and 1. On the last line this is used again, demonstrating that the expressions on the right-hand side are all evaluated first before any of the assignments take place. The right-hand side expressions are evaluated from the left to the right.The\nwhile\nloop executes as long as the condition (here:a < 10\n) remains true. In Python, like in C, any non-zero integer value is true; zero is false. The condition may also be a string or list value, in fact any sequence; anything with a non-zero length is true, empty sequences are false. The test used in the example is a simple comparison. The standard comparison operators are written the same as in C:<\n(less than),>\n(greater than),==\n(equal to),<=\n(less than or equal to),>=\n(greater than or equal to) and!=\n(not equal to).The body of the loop is indented: indentation is Python\u2019s way of grouping statements. At the interactive prompt, you have to type a tab or space(s) for each indented line. In practice you will prepare more complicated input for Python with a text editor; all decent text editors have an auto-indent facility. When a compound statement is entered interactively, it must be followed by a blank line to indicate completion (since the parser cannot guess when you have typed the last line). 
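The multiple-assignment and while-loop mechanics described above can be sketched as a small function (a hypothetical `fib_upto` helper, equivalent to the interactive Fibonacci example but collecting values instead of printing them):

```python
# Fibonacci via tuple assignment: the right-hand side is evaluated
# completely before either name is rebound, so no temp variable is needed.
def fib_upto(limit):
    values = []
    a, b = 0, 1
    while a < limit:          # the loop runs while the comparison holds
        values.append(a)
        a, b = b, a + b       # simultaneous update of both variables
    return values

print(fib_upto(10))           # → [0, 1, 1, 2, 3, 5, 8]
```

The same function reproduces the `end=','` example's values: the terms below 1000 end with 610 and 987, exactly as in the interactive session.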
Note that each line within a basic block must be indented by the same amount.\nThe\nprint()\nfunction writes the value of the argument(s) it is given. It differs from just writing the expression you want to write (as we did earlier in the calculator examples) in the way it handles multiple arguments, floating-point quantities, and strings. Strings are printed without quotes, and a space is inserted between items, so you can format things nicely, like this:\n>>> i = 256*256\n>>> print('The value of i is', i)\nThe value of i is 65536\nThe keyword argument end can be used to avoid the newline after the output, or end the output with a different string:\n>>> a, b = 0, 1\n>>> while a < 1000:\n... print(a, end=',')\n... a, b = b, a+b\n...\n0,1,1,2,3,5,8,13,21,34,55,89,144,233,377,610,987,\nFootnotes", "code_snippets": [
"\n"], "language": "Python", "source": "python.org", "token_count": 4159}
+{"url": "https://docs.python.org/3/howto/gdb_helpers.html", "title": "Debugging C API extensions and CPython Internals with GDB", "content": "Debugging C API extensions and CPython Internals with GDB\u00b6\nThis document explains how the Python GDB extension, python-gdb.py\n, can\nbe used with the GDB 
debugger to debug CPython extensions and the\nCPython interpreter itself.\nWhen debugging low-level problems such as crashes or deadlocks, a low-level debugger, such as GDB, is useful to diagnose and correct the issue. By default, GDB (or any of its front-ends) doesn\u2019t support high-level information specific to the CPython interpreter.\nThe python-gdb.py\nextension adds CPython interpreter information to GDB.\nThe extension helps introspect the stack of currently executing Python functions.\nGiven a Python object represented by a PyObject* pointer,\nthe extension surfaces the type and value of the object.\nDevelopers who are working on CPython extensions or tinkering with parts\nof CPython that are written in C can use this document to learn how to use the\npython-gdb.py\nextension with GDB.\nNote\nThis document assumes that you are familiar with the basics of GDB and the CPython C API. It consolidates guidance from the devguide and the Python wiki.\nPrerequisites\u00b6\nYou need to have:\nGDB 7 or later. (For earlier versions of GDB, see\nMisc/gdbinit\nin the sources of Python 3.11 or earlier.)GDB-compatible debugging information for Python and any extension you are debugging.\nThe\npython-gdb.py\nextension.\nThe extension is built with Python, but might be distributed separately or not at all. Below, we include tips for a few common systems as examples. 
Note that even if the instructions match your system, they might be outdated.\nSetup with Python built from source\u00b6\nWhen you build CPython from source, debugging information should be available,\nand the build should add a python-gdb.py\nfile to the root directory of\nyour repository.\nTo activate support, you must add the directory containing python-gdb.py\nto GDB\u2019s \u201cauto-load-safe-path\u201d.\nIf you haven\u2019t done this, recent versions of GDB will print out a warning\nwith instructions on how to do this.\nNote\nIf you do not see instructions for your version of GDB, put this in your\nconfiguration file (~/.gdbinit\nor ~/.config/gdb/gdbinit\n):\nadd-auto-load-safe-path /path/to/cpython\nYou can also add multiple paths, separated by :\n.\nSetup for Python from a Linux distro\u00b6\nMost Linux systems provide debug information for the system Python\nin a package called python-debuginfo\n, python-dbg\nor similar.\nFor example:\nFedora:\nsudo dnf install gdb\nsudo dnf debuginfo-install python3\nUbuntu:\nsudo apt install gdb python3-dbg\nOn several recent Linux systems, GDB can download debugging symbols\nautomatically using debuginfod.\nHowever, this will not install the python-gdb.py\nextension;\nyou generally do need to install the debug info package separately.\nUsing the Debug build and Development mode\u00b6\nFor easier debugging, you might want to:\nUse a debug build of Python. (When building from source, use\nconfigure --with-pydebug\n. On Linux distros, install and run a package like python-debug\nor python-dbg\n, if available.)\nUse the runtime development mode (\n-X dev\n).\nBoth enable extra assertions and disable some optimizations. 
Sometimes this hides the bug you are trying to find, but in most cases they make the process easier.\nUsing the python-gdb\nextension\u00b6\nWhen the extension is loaded, it provides two main features: pretty printers for Python values, and additional commands.\nPretty-printers\u00b6\nThis is what a GDB backtrace looks like (truncated) when this extension is enabled:\n#0 0x000000000041a6b1 in PyObject_Malloc (nbytes=Cannot access memory at address 0x7fffff7fefe8\n) at Objects/obmalloc.c:748\n#1 0x000000000041b7c0 in _PyObject_DebugMallocApi (id=111 'o', nbytes=24) at Objects/obmalloc.c:1445\n#2 0x000000000041b717 in _PyObject_DebugMalloc (nbytes=24) at Objects/obmalloc.c:1412\n#3 0x000000000044060a in _PyUnicode_New (length=11) at Objects/unicodeobject.c:346\n#4 0x00000000004466aa in PyUnicodeUCS2_DecodeUTF8Stateful (s=0x5c2b8d \"__lltrace__\", size=11, errors=0x0, consumed=\n0x0) at Objects/unicodeobject.c:2531\n#5 0x0000000000446647 in PyUnicodeUCS2_DecodeUTF8 (s=0x5c2b8d \"__lltrace__\", size=11, errors=0x0)\nat Objects/unicodeobject.c:2495\n#6 0x0000000000440d1b in PyUnicodeUCS2_FromStringAndSize (u=0x5c2b8d \"__lltrace__\", size=11)\nat Objects/unicodeobject.c:551\n#7 0x0000000000440d94 in PyUnicodeUCS2_FromString (u=0x5c2b8d \"__lltrace__\") at Objects/unicodeobject.c:569\n#8 0x0000000000584abd in PyDict_GetItemString (v=\n{'Yuck': , '__builtins__': , '__file__': 'Lib/test/crashers/nasty_eq_vs_dict.py', '__package__': None, 'y': , 'dict': {0: 0, 1: 1, 2: 2, 3: 3}, '__cached__': None, '__name__': '__main__', 'z': , '__doc__': None}, key=\n0x5c2b8d \"__lltrace__\") at Objects/dictobject.c:2171\nNotice how the dictionary argument to PyDict_GetItemString\nis displayed\nas its repr()\n, rather than an opaque PyObject *\npointer.\nThe extension works by supplying a custom printing routine for values of type\nPyObject *\n. If you need to access lower-level details of an object, then\ncast the value to a pointer of the appropriate type. 
For example:\n(gdb) p globals\n$1 = {'__builtins__': , '__name__':\n'__main__', 'ctypes': , '__doc__': None,\n'__package__': None}\n(gdb) p *(PyDictObject*)globals\n$2 = {ob_refcnt = 3, ob_type = 0x3dbdf85820, ma_fill = 5, ma_used = 5,\nma_mask = 7, ma_table = 0x63d0f8, ma_lookup = 0x3dbdc7ea70\n, ma_smalltable = {{me_hash = 7065186196740147912,\nme_key = '__builtins__', me_value = },\n{me_hash = -368181376027291943, me_key = '__name__',\nme_value ='__main__'}, {me_hash = 0, me_key = 0x0, me_value = 0x0},\n{me_hash = 0, me_key = 0x0, me_value = 0x0},\n{me_hash = -9177857982131165996, me_key = 'ctypes',\nme_value = },\n{me_hash = -8518757509529533123, me_key = '__doc__', me_value = None},\n{me_hash = 0, me_key = 0x0, me_value = 0x0}, {\nme_hash = 6614918939584953775, me_key = '__package__', me_value = None}}}\nNote that the pretty-printers do not actually call repr()\n.\nFor basic types, they try to match its result closely.\nAn area that can be confusing is that the custom printer for some types look a\nlot like GDB\u2019s built-in printer for standard types. 
For example, the\npretty-printer for a Python int\n(PyLongObject*)\ngives a representation that is not distinguishable from that of a\nregular machine-level integer:\n(gdb) p some_machine_integer\n$3 = 42\n(gdb) p some_python_integer\n$4 = 42\nThe internal structure can be revealed with a cast to PyLongObject*:\n(gdb) p *(PyLongObject*)some_python_integer\n$5 = {ob_base = {ob_base = {ob_refcnt = 8, ob_type = 0x3dad39f5e0}, ob_size = 1},\nob_digit = {42}}\nA similar confusion can arise with the str\ntype, where the output looks a\nlot like gdb\u2019s built-in printer for char *\n:\n(gdb) p ptr_to_python_str\n$6 = '__builtins__'\nThe pretty-printer for str\ninstances defaults to using single-quotes (as\ndoes Python\u2019s repr\nfor strings) whereas the standard printer for char *\nvalues uses double-quotes and contains a hexadecimal address:\n(gdb) p ptr_to_char_star\n$7 = 0x6d72c0 \"hello world\"\nAgain, the implementation details can be revealed with a cast to PyUnicodeObject*:\n(gdb) p *(PyUnicodeObject*)$6\n$8 = {ob_base = {ob_refcnt = 33, ob_type = 0x3dad3a95a0}, length = 12,\nstr = 0x7ffff2128500, hash = 7065186196740147912, state = 1, defenc = 0x0}\npy-list\n\u00b6\nThe extension adds a\npy-list\ncommand, which lists the Python source code (if any) for the current frame in the selected thread.
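Conceptually, py-list behaves like the following pure-Python sketch built on the standard linecache module (a simplified illustration; list_source is a hypothetical helper for this document, not the extension's actual code):

```python
import linecache

def list_source(filename, current_line, context=5):
    """Return source lines around current_line, marking it with '>'."""
    out = []
    start = max(current_line - context, 1)
    for lineno in range(start, current_line + context + 1):
        text = linecache.getline(filename, lineno)
        if not text:
            break  # past the end of the file
        marker = ">" if lineno == current_line else " "
        out.append("%s%4d    %s" % (marker, lineno, text.rstrip()))
    return "\n".join(out)
```

As in the extension's output, the current line is the one prefixed with ">".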
The current line is marked with a \u201c>\u201d:(gdb) py-list 901 if options.profile: 902 options.profile = False 903 profile_me() 904 return 905 >906 u = UI() 907 if not u.quit: 908 try: 909 gtk.main() 910 except KeyboardInterrupt: 911 # properly quit on a keyboard interrupt...Use\npy-list START\nto list at a different line number within the Python source, andpy-list START,END\nto list a specific range of lines within the Python source.\npy-up\nand py-down\n\u00b6\nThe\npy-up\nandpy-down\ncommands are analogous to GDB\u2019s regularup\nanddown\ncommands, but try to move at the level of CPython frames, rather than C frames.GDB is not always able to read the relevant frame information, depending on the optimization level with which CPython was compiled. Internally, the commands look for C frames that are executing the default frame evaluation function (that is, the core bytecode interpreter loop within CPython) and look up the value of the related\nPyFrameObject *\n.They emit the frame number (at the C level) within the thread.\nFor example:\n(gdb) py-up #37 Frame 0x9420b04, for file /usr/lib/python2.6/site-packages/ gnome_sudoku/main.py, line 906, in start_game () u = UI() (gdb) py-up #40 Frame 0x948e82c, for file /usr/lib/python2.6/site-packages/ gnome_sudoku/gnome_sudoku.py, line 22, in start_game(main=) main.start_game() (gdb) py-up Unable to find an older python frameso we\u2019re at the top of the Python stack.\nThe frame numbers correspond to those displayed by GDB\u2019s standard\nbacktrace\ncommand. 
The command skips C frames which are not executing Python code.Going back down:\n(gdb) py-down #37 Frame 0x9420b04, for file /usr/lib/python2.6/site-packages/gnome_sudoku/main.py, line 906, in start_game () u = UI() (gdb) py-down #34 (unable to read python frame information) (gdb) py-down #23 (unable to read python frame information) (gdb) py-down #19 (unable to read python frame information) (gdb) py-down #14 Frame 0x99262ac, for file /usr/lib/python2.6/site-packages/gnome_sudoku/game_selector.py, line 201, in run_swallowed_dialog (self=, puzzle=None, saved_games=[{'gsd.auto_fills': 0, 'tracking': {}, 'trackers': {}, 'notes': [], 'saved_at': 1270084485, 'game': '7 8 0 0 0 0 0 5 6 0 0 9 0 8 0 1 0 0 0 4 6 0 0 0 0 7 0 6 5 0 0 0 4 7 9 2 0 0 0 9 0 1 0 0 0 3 9 7 6 0 0 0 1 8 0 6 0 0 0 0 2 8 0 0 0 5 0 4 0 6 0 0 2 1 0 0 0 0 0 4 5\\n7 8 0 0 0 0 0 5 6 0 0 9 0 8 0 1 0 0 0 4 6 0 0 0 0 7 0 6 5 1 8 3 4 7 9 2 0 0 0 9 0 1 0 0 0 3 9 7 6 0 0 0 1 8 0 6 0 0 0 0 2 8 0 0 0 5 0 4 0 6 0 0 2 1 0 0 0 0 0 4 5', 'gsd.impossible_hints': 0, 'timer.__absolute_start_time__': , 'gsd.hints': 0, 'timer.active_time': , 'timer.total_time': }], dialog=, saved_game_model=, sudoku_maker=, main_page=0) at remote 0x98fa6e4>, d=) gtk.main() (gdb) py-down #8 (unable to read python frame information) (gdb) py-down Unable to find a newer python frameand we\u2019re at the bottom of the Python stack.\nNote that in Python 3.12 and newer, the same C stack frame can be used for multiple Python stack frames. This means that\npy-up\nandpy-down\nmay move multiple Python frames at once. 
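The session shown next was produced by debugging a small recursive script; a reconstruction of that script (assumed here, inferred from the file and line references in the frame listing) looks like this:

```python
# rec.py -- reconstruction of the script debugged in the listing below
import time

def recursive_function(n):
    if n == 0:
        time.sleep(5)  # the innermost Python frame sits here
    else:
        recursive_function(n - 1)

if __name__ == "__main__":
    recursive_function(5)  # builds six nested Python frames
```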
For example:(gdb) py-up #6 Frame 0x7ffff7fb62b0, for file /tmp/rec.py, line 5, in recursive_function (n=0) time.sleep(5) #6 Frame 0x7ffff7fb6240, for file /tmp/rec.py, line 7, in recursive_function (n=1) recursive_function(n-1) #6 Frame 0x7ffff7fb61d0, for file /tmp/rec.py, line 7, in recursive_function (n=2) recursive_function(n-1) #6 Frame 0x7ffff7fb6160, for file /tmp/rec.py, line 7, in recursive_function (n=3) recursive_function(n-1) #6 Frame 0x7ffff7fb60f0, for file /tmp/rec.py, line 7, in recursive_function (n=4) recursive_function(n-1) #6 Frame 0x7ffff7fb6080, for file /tmp/rec.py, line 7, in recursive_function (n=5) recursive_function(n-1) #6 Frame 0x7ffff7fb6020, for file /tmp/rec.py, line 9, in () recursive_function(5) (gdb) py-up Unable to find an older python frame\npy-bt\n\u00b6\nThe\npy-bt\ncommand attempts to display a Python-level backtrace of the current thread.For example:\n(gdb) py-bt #8 (unable to read python frame information) #11 Frame 0x9aead74, for file /usr/lib/python2.6/site-packages/gnome_sudoku/dialog_swallower.py, line 48, in run_dialog (self=, main_page=0) at remote 0x98fa6e4>, d=) gtk.main() #14 Frame 0x99262ac, for file /usr/lib/python2.6/site-packages/gnome_sudoku/game_selector.py, line 201, in run_swallowed_dialog (self=, puzzle=None, saved_games=[{'gsd.auto_fills': 0, 'tracking': {}, 'trackers': {}, 'notes': [], 'saved_at': 1270084485, 'game': '7 8 0 0 0 0 0 5 6 0 0 9 0 8 0 1 0 0 0 4 6 0 0 0 0 7 0 6 5 0 0 0 4 7 9 2 0 0 0 9 0 1 0 0 0 3 9 7 6 0 0 0 1 8 0 6 0 0 0 0 2 8 0 0 0 5 0 4 0 6 0 0 2 1 0 0 0 0 0 4 5\\n7 8 0 0 0 0 0 5 6 0 0 9 0 8 0 1 0 0 0 4 6 0 0 0 0 7 0 6 5 1 8 3 4 7 9 2 0 0 0 9 0 1 0 0 0 3 9 7 6 0 0 0 1 8 0 6 0 0 0 0 2 8 0 0 0 5 0 4 0 6 0 0 2 1 0 0 0 0 0 4 5', 'gsd.impossible_hints': 0, 'timer.__absolute_start_time__': , 'gsd.hints': 0, 'timer.active_time': , 'timer.total_time': }], dialog=, saved_game_model=, sudoku_maker=) main.start_game()The frame numbers correspond to those displayed by GDB\u2019s 
standard\nbacktrace\ncommand.\npy-print\n\u00b6\nThe\npy-print\ncommand looks up a Python name and tries to print it. It looks in locals within the current thread, then globals, then finally builtins:(gdb) py-print self local 'self' = , main_page=0) at remote 0x98fa6e4> (gdb) py-print __name__ global '__name__' = 'gnome_sudoku.dialog_swallower' (gdb) py-print len builtin 'len' = (gdb) py-print scarlet_pimpernel 'scarlet_pimpernel' not foundIf the current C frame corresponds to multiple Python frames,\npy-print\nonly considers the first one.\npy-locals\n\u00b6\nThe\npy-locals\ncommand looks up all Python locals within the current Python frame in the selected thread, and prints their representations:(gdb) py-locals self = , main_page=0) at remote 0x98fa6e4> d = If the current C frame corresponds to multiple Python frames, locals from all of them will be shown:\n(gdb) py-locals Locals for recursive_function n = 0 Locals for recursive_function n = 1 Locals for recursive_function n = 2 Locals for recursive_function n = 3 Locals for recursive_function n = 4 Locals for recursive_function n = 5 Locals for \nUse with GDB commands\u00b6\nThe extension commands complement GDB\u2019s built-in commands.\nFor example, you can use the frame numbers shown by py-bt\nwith the frame\ncommand to go to a specific frame within the selected thread, like this:\n(gdb) py-bt\n(output snipped)\n#68 Frame 0xaa4560, for file Lib/test/regrtest.py, line 1548, in ()\nmain()\n(gdb) frame 68\n#68 0x00000000004cd1e6 in PyEval_EvalFrameEx (f=Frame 0xaa4560, for file Lib/test/regrtest.py, line 1548, in (), throwflag=0) at Python/ceval.c:2665\n2665 x = call_function(&sp, oparg);\n(gdb) py-list\n1543 # Run the tests in a context manager that temporary changes the CWD to a\n1544 # temporary and writable directory. If it's not possible to create or\n1545 # change the CWD, the original CWD will be used.
The original CWD is\n1546 # available from test_support.SAVEDCWD.\n1547 with test_support.temp_cwd(TESTCWD, quiet=True):\n>1548 main()\nThe info threads\ncommand will give you a list of the threads within the\nprocess, and you can use the thread\ncommand to select a different one:\n(gdb) info threads\n105 Thread 0x7fffefa18710 (LWP 10260) sem_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_wait.S:86\n104 Thread 0x7fffdf5fe710 (LWP 10259) sem_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_wait.S:86\n* 1 Thread 0x7ffff7fe2700 (LWP 10145) 0x00000038e46d73e3 in select () at ../sysdeps/unix/syscall-template.S:82\nYou can use thread apply all COMMAND\nor (t a a COMMAND\nfor short) to run\na command on all threads. With py-bt\n, this lets you see what every\nthread is doing at the Python level:\n(gdb) t a a py-bt\nThread 105 (Thread 0x7fffefa18710 (LWP 10260)):\n#5 Frame 0x7fffd00019d0, for file /home/david/coding/python-svn/Lib/threading.py, line 155, in _acquire_restore (self=<_RLock(_Verbose__verbose=False, _RLock__owner=140737354016512, _RLock__block=, _RLock__count=1) at remote 0xd7ff40>, count_owner=(1, 140737213728528), count=1, owner=140737213728528)\nself.__block.acquire()\n#8 Frame 0x7fffac001640, for file /home/david/coding/python-svn/Lib/threading.py, line 269, in wait (self=<_Condition(_Condition__lock=<_RLock(_Verbose__verbose=False, _RLock__owner=140737354016512, _RLock__block=, _RLock__count=1) at remote 0xd7ff40>, acquire=, _is_owned=, _release_save=, release=, _acquire_restore=, _Verbose__verbose=False, _Condition__waiters=[]) at remote 0xd7fd10>, timeout=None, waiter=, saved_state=(1, 140737213728528))\nself._acquire_restore(saved_state)\n#12 Frame 0x7fffb8001a10, for file /home/david/coding/python-svn/Lib/test/lock_tests.py, line 348, in f ()\ncond.wait()\n#16 Frame 0x7fffb8001c40, for file /home/david/coding/python-svn/Lib/test/lock_tests.py, line 37, in task (tid=140737213728528)\nf()\nThread 104 (Thread 0x7fffdf5fe710 (LWP 10259)):\n#5 
Frame 0x7fffe4001580, for file /home/david/coding/python-svn/Lib/threading.py, line 155, in _acquire_restore (self=<_RLock(_Verbose__verbose=False, _RLock__owner=140737354016512, _RLock__block=, _RLock__count=1) at remote 0xd7ff40>, count_owner=(1, 140736940992272), count=1, owner=140736940992272)\nself.__block.acquire()\n#8 Frame 0x7fffc8002090, for file /home/david/coding/python-svn/Lib/threading.py, line 269, in wait (self=<_Condition(_Condition__lock=<_RLock(_Verbose__verbose=False, _RLock__owner=140737354016512, _RLock__block=, _RLock__count=1) at remote 0xd7ff40>, acquire=, _is_owned=, _release_save=, release=, _acquire_restore=, _Verbose__verbose=False, _Condition__waiters=[]) at remote 0xd7fd10>, timeout=None, waiter=, saved_state=(1, 140736940992272))\nself._acquire_restore(saved_state)\n#12 Frame 0x7fffac001c90, for file /home/david/coding/python-svn/Lib/test/lock_tests.py, line 348, in f ()\ncond.wait()\n#16 Frame 0x7fffac0011c0, for file /home/david/coding/python-svn/Lib/test/lock_tests.py, line 37, in task (tid=140736940992272)\nf()\nThread 1 (Thread 0x7ffff7fe2700 (LWP 10145)):\n#5 Frame 0xcb5380, for file /home/david/coding/python-svn/Lib/test/lock_tests.py, line 16, in _wait ()\ntime.sleep(0.01)\n#8 Frame 0x7fffd00024a0, for file /home/david/coding/python-svn/Lib/test/lock_tests.py, line 378, in _check_notify (self=, skipped=[], _mirrorOutput=False, testsRun=39, buffer=False, _original_stderr=, _stdout_buffer=, _stderr_buffer=, _moduleSetUpFailed=False, expectedFailures=[], errors=[], _previousTestClass=, unexpectedSuccesses=[], failures=[], shouldStop=False, failfast=False) at remote 0xc185a0>, _threads=(0,), _cleanups=[], _type_equality_funcs={: , : , : , : , \n):\nIndex |\nAttribute |\nMeaning |\n|---|---|---|\n0 |\ngr_name |\nthe name of the group |\n1 |\ngr_passwd |\nthe (encrypted) group password; often empty |\n2 |\ngr_gid |\nthe numerical group ID |\n3 |\ngr_mem |\nall the group member\u2019s user names |\nThe gid is an integer, name and 
password are strings, and the member list is a\nlist of strings. (Note that most users are not explicitly listed as members of\nthe group they are in according to the password database. Check both databases\nto get complete membership information. Also note that a gr_name\nthat\nstarts with a +\nor -\nis likely to be a YP/NIS reference and may not be\naccessible via getgrnam()\nor getgrgid()\n.)\nIt defines the following items:\n- grp.getgrgid(id)\u00b6\nReturn the group database entry for the given numeric group ID.\nKeyError\nis raised if the entry asked for cannot be found.Changed in version 3.10:\nTypeError\nis raised for non-integer arguments like floats or strings.\n- grp.getgrnam(name)\u00b6\nReturn the group database entry for the given group name.\nKeyError\nis raised if the entry asked for cannot be found.\n- grp.getgrall()\u00b6\nReturn a list of all available group entries, in arbitrary order.\nSee also\n- Module\npwd\nAn interface to the user database, similar to this.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 388} +{"url": "https://docs.python.org/3/extending/newtypes_tutorial.html", "title": "Defining Extension Types: Tutorial", "content": "2. Defining Extension Types: Tutorial\u00b6\nPython allows the writer of a C extension module to define new types that\ncan be manipulated from Python code, much like the built-in str\nand list\ntypes. The code for all extension types follows a\npattern, but there are some details that you need to understand before you\ncan get started. This document is a gentle introduction to the topic.\n2.1. 
The Basics\u00b6\nThe CPython runtime sees all Python objects as variables of type\nPyObject*, which serves as a \u201cbase type\u201d for all Python objects.\nThe PyObject\nstructure itself only contains the object\u2019s\nreference count and a pointer to the object\u2019s \u201ctype object\u201d.\nThis is where the action is; the type object determines which (C) functions\nget called by the interpreter when, for instance, an attribute gets looked up\non an object, a method called, or it is multiplied by another object. These\nC functions are called \u201ctype methods\u201d.\nSo, if you want to define a new extension type, you need to create a new type object.\nThis sort of thing can only be explained by example, so here\u2019s a minimal, but\ncomplete, module that defines a new type named Custom\ninside a C\nextension module custom\n:\nNote\nWhat we\u2019re showing here is the traditional way of defining static\nextension types. It should be adequate for most uses. The C API also\nallows defining heap-allocated extension types using the\nPyType_FromSpec()\nfunction, which isn\u2019t covered in this tutorial.\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\ntypedef struct {\nPyObject_HEAD\n/* Type-specific fields go here.
*/\n} CustomObject;\nstatic PyTypeObject CustomType = {\n.ob_base = PyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"custom.Custom\",\n.tp_doc = PyDoc_STR(\"Custom objects\"),\n.tp_basicsize = sizeof(CustomObject),\n.tp_itemsize = 0,\n.tp_flags = Py_TPFLAGS_DEFAULT,\n.tp_new = PyType_GenericNew,\n};\nstatic int\ncustom_module_exec(PyObject *m)\n{\nif (PyType_Ready(&CustomType) < 0) {\nreturn -1;\n}\nif (PyModule_AddObjectRef(m, \"Custom\", (PyObject *) &CustomType) < 0) {\nreturn -1;\n}\nreturn 0;\n}\nstatic PyModuleDef_Slot custom_module_slots[] = {\n{Py_mod_exec, custom_module_exec},\n// Just use this while using static types\n{Py_mod_multiple_interpreters, Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED},\n{0, NULL}\n};\nstatic PyModuleDef custom_module = {\n.m_base = PyModuleDef_HEAD_INIT,\n.m_name = \"custom\",\n.m_doc = \"Example module that creates an extension type.\",\n.m_size = 0,\n.m_slots = custom_module_slots,\n};\nPyMODINIT_FUNC\nPyInit_custom(void)\n{\nreturn PyModuleDef_Init(&custom_module);\n}\nNow that\u2019s quite a bit to take in at once, but hopefully bits will seem familiar from the previous chapter. This file defines three things:\nWhat a\nCustom\nobject contains: this is theCustomObject\nstruct, which is allocated once for eachCustom\ninstance.How the\nCustom\ntype behaves: this is theCustomType\nstruct, which defines a set of flags and function pointers that the interpreter inspects when specific operations are requested.How to define and execute the\ncustom\nmodule: this is thePyInit_custom\nfunction and the associatedcustom_module\nstruct for defining the module, and thecustom_module_exec\nfunction to set up a fresh module object.\nThe first bit is:\ntypedef struct {\nPyObject_HEAD\n} CustomObject;\nThis is what a Custom object will contain. 
PyObject_HEAD\nis mandatory\nat the start of each object struct and defines a field called ob_base\nof type PyObject\n, containing a pointer to a type object and a\nreference count (these can be accessed using the macros Py_TYPE\nand Py_REFCNT\nrespectively). The reason for the macro is to\nabstract away the layout and to enable additional fields in debug builds.\nNote\nThere is no semicolon above after the PyObject_HEAD\nmacro.\nBe wary of adding one by accident: some compilers will complain.\nOf course, objects generally store additional data besides the standard\nPyObject_HEAD\nboilerplate; for example, here is the definition for\nstandard Python floats:\ntypedef struct {\nPyObject_HEAD\ndouble ob_fval;\n} PyFloatObject;\nThe second bit is the definition of the type object.\nstatic PyTypeObject CustomType = {\n.ob_base = PyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"custom.Custom\",\n.tp_doc = PyDoc_STR(\"Custom objects\"),\n.tp_basicsize = sizeof(CustomObject),\n.tp_itemsize = 0,\n.tp_flags = Py_TPFLAGS_DEFAULT,\n.tp_new = PyType_GenericNew,\n};\nNote\nWe recommend using C99-style designated initializers as above, to\navoid listing all the PyTypeObject\nfields that you don\u2019t care\nabout and also to avoid caring about the fields\u2019 declaration order.\nThe actual definition of PyTypeObject\nin object.h\nhas\nmany more fields than the definition above. The\nremaining fields will be filled with zeros by the C compiler, and it\u2019s\ncommon practice to not specify them explicitly unless you need them.\nWe\u2019re going to pick it apart, one field at a time:\n.ob_base = PyVarObject_HEAD_INIT(NULL, 0)\nThis line is mandatory boilerplate to initialize the ob_base\nfield mentioned above.\n.tp_name = \"custom.Custom\",\nThe name of our type. 
This will appear in the default textual representation of our objects and in some error messages, for example:\n>>> \"\" + custom.Custom()\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: can only concatenate str (not \"custom.Custom\") to str\nNote that the name is a dotted name that includes both the module name and the\nname of the type within the module. The module in this case is custom\nand\nthe type is Custom\n, so we set the type name to custom.Custom\n.\nUsing the real dotted import path is important to make your type compatible\nwith the pydoc\nand pickle\nmodules.\n.tp_basicsize = sizeof(CustomObject),\n.tp_itemsize = 0,\nThis is so that Python knows how much memory to allocate when creating\nnew Custom\ninstances. tp_itemsize\nis\nonly used for variable-sized objects and should otherwise be zero.\nNote\nIf you want your type to be subclassable from Python, and your type has the same\ntp_basicsize\nas its base type, you may have problems with multiple\ninheritance. A Python subclass of your type will have to list your type first\nin its __bases__\n, or else it will not be able to call your type\u2019s\n__new__()\nmethod without getting an error. You can avoid this problem by\nensuring that your type has a larger value for tp_basicsize\nthan its\nbase type does. Most of the time, this will be true anyway, because either your\nbase type will be object\n, or else you will be adding data members to\nyour base type, and therefore increasing its size.\nWe set the class flags to Py_TPFLAGS_DEFAULT\n.\n.tp_flags = Py_TPFLAGS_DEFAULT,\nAll types should include this constant in their flags. It enables all of the members defined until at least Python 3.3. If you need further members, you will need to OR the corresponding flags.\nWe provide a doc string for the type in tp_doc\n.\n.tp_doc = PyDoc_STR(\"Custom objects\"),\nTo enable object creation, we have to provide a tp_new\nhandler. 
This is the equivalent of the Python method __new__()\n, but\nhas to be specified explicitly. In this case, we can just use the default\nimplementation provided by the API function PyType_GenericNew()\n.\n.tp_new = PyType_GenericNew,\nEverything else in the file should be familiar, except for some code in\ncustom_module_exec()\n:\nif (PyType_Ready(&CustomType) < 0) {\nreturn -1;\n}\nThis initializes the Custom\ntype, filling in a number of members\nto the appropriate default values, including ob_type\nthat we initially\nset to NULL\n.\nif (PyModule_AddObjectRef(m, \"Custom\", (PyObject *) &CustomType) < 0) {\nreturn -1;\n}\nThis adds the type to the module dictionary. This allows us to create\nCustom\ninstances by calling the Custom\nclass:\n>>> import custom\n>>> mycustom = custom.Custom()\nThat\u2019s it! All that remains is to build it; put the above code in a file called\ncustom.c\n,\n[build-system]\nrequires = [\"setuptools\"]\nbuild-backend = \"setuptools.build_meta\"\n[project]\nname = \"custom\"\nversion = \"1\"\nin a file called pyproject.toml\n, and\nfrom setuptools import Extension, setup\nsetup(ext_modules=[Extension(\"custom\", [\"custom.c\"])])\nin a file called setup.py\n; then typing\n$ python -m pip install .\nin a shell should produce a file custom.so\nin a subdirectory\nand install it; now fire up Python \u2014 you should be able to import custom\nand play around with Custom\nobjects.\nThat wasn\u2019t so hard, was it?\nOf course, the current Custom type is pretty uninteresting. It has no data and doesn\u2019t do anything. It can\u2019t even be subclassed.\n2.2. Adding data and methods to the Basic example\u00b6\nLet\u2019s extend the basic example to add some data and methods. Let\u2019s also make\nthe type usable as a base class. 
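As a point of reference, the extended type we are about to build behaves roughly like this pure-Python class (a sketch for orientation only; the defaults mirror the empty strings and zero the C code sets up):

```python
class Custom:
    """Pure-Python approximation of the extension type built below."""

    def __init__(self, first="", last="", number=0):
        self.first = first    # a Python object in the C struct as well
        self.last = last
        self.number = number  # stored as a plain C int in the extension

    def name(self):
        # Same formatting the C-level name() method produces.
        return "%s %s" % (self.first, self.last)
```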
We\u2019ll create a new module, custom2\nthat\nadds these capabilities:\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\n#include <stddef.h> /* for offsetof() */\ntypedef struct {\nPyObject_HEAD\nPyObject *first; /* first name */\nPyObject *last; /* last name */\nint number;\n} CustomObject;\nstatic void\nCustom_dealloc(PyObject *op)\n{\nCustomObject *self = (CustomObject *) op;\nPy_XDECREF(self->first);\nPy_XDECREF(self->last);\nPy_TYPE(self)->tp_free(self);\n}\nstatic PyObject *\nCustom_new(PyTypeObject *type, PyObject *args, PyObject *kwds)\n{\nCustomObject *self;\nself = (CustomObject *) type->tp_alloc(type, 0);\nif (self != NULL) {\nself->first = Py_GetConstant(Py_CONSTANT_EMPTY_STR);\nif (self->first == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->last = Py_GetConstant(Py_CONSTANT_EMPTY_STR);\nif (self->last == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->number = 0;\n}\nreturn (PyObject *) self;\n}\nstatic int\nCustom_init(PyObject *op, PyObject *args, PyObject *kwds)\n{\nCustomObject *self = (CustomObject *) op;\nstatic char *kwlist[] = {\"first\", \"last\", \"number\", NULL};\nPyObject *first = NULL, *last = NULL;\nif (!PyArg_ParseTupleAndKeywords(args, kwds, \"|OOi\", kwlist,\n&first, &last,\n&self->number))\nreturn -1;\nif (first) {\nPy_XSETREF(self->first, Py_NewRef(first));\n}\nif (last) {\nPy_XSETREF(self->last, Py_NewRef(last));\n}\nreturn 0;\n}\nstatic PyMemberDef Custom_members[] = {\n{\"first\", Py_T_OBJECT_EX, offsetof(CustomObject, first), 0,\n\"first name\"},\n{\"last\", Py_T_OBJECT_EX, offsetof(CustomObject, last), 0,\n\"last name\"},\n{\"number\", Py_T_INT, offsetof(CustomObject, number), 0,\n\"custom number\"},\n{NULL} /* Sentinel */\n};\nstatic PyObject *\nCustom_name(PyObject *op, PyObject *Py_UNUSED(dummy))\n{\nCustomObject *self = (CustomObject *) op;\nif (self->first == NULL) {\nPyErr_SetString(PyExc_AttributeError, \"first\");\nreturn NULL;\n}\nif (self->last == NULL) {\nPyErr_SetString(PyExc_AttributeError, \"last\");\nreturn NULL;\n}\nreturn
PyUnicode_FromFormat(\"%S %S\", self->first, self->last);\n}\nstatic PyMethodDef Custom_methods[] = {\n{\"name\", Custom_name, METH_NOARGS,\n\"Return the name, combining the first and last name\"\n},\n{NULL} /* Sentinel */\n};\nstatic PyTypeObject CustomType = {\n.ob_base = PyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"custom2.Custom\",\n.tp_doc = PyDoc_STR(\"Custom objects\"),\n.tp_basicsize = sizeof(CustomObject),\n.tp_itemsize = 0,\n.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,\n.tp_new = Custom_new,\n.tp_init = Custom_init,\n.tp_dealloc = Custom_dealloc,\n.tp_members = Custom_members,\n.tp_methods = Custom_methods,\n};\nstatic int\ncustom_module_exec(PyObject *m)\n{\nif (PyType_Ready(&CustomType) < 0) {\nreturn -1;\n}\nif (PyModule_AddObjectRef(m, \"Custom\", (PyObject *) &CustomType) < 0) {\nreturn -1;\n}\nreturn 0;\n}\nstatic PyModuleDef_Slot custom_module_slots[] = {\n{Py_mod_exec, custom_module_exec},\n{Py_mod_multiple_interpreters, Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED},\n{0, NULL}\n};\nstatic PyModuleDef custom_module = {\n.m_base = PyModuleDef_HEAD_INIT,\n.m_name = \"custom2\",\n.m_doc = \"Example module that creates an extension type.\",\n.m_size = 0,\n.m_slots = custom_module_slots,\n};\nPyMODINIT_FUNC\nPyInit_custom2(void)\n{\nreturn PyModuleDef_Init(&custom_module);\n}\nThis version of the module has a number of changes.\nThe Custom\ntype now has three data attributes in its C struct,\nfirst, last, and number. The first and last variables are Python\nstrings containing first and last names. The number attribute is a C integer.\nThe object structure is updated accordingly:\ntypedef struct {\nPyObject_HEAD\nPyObject *first; /* first name */\nPyObject *last; /* last name */\nint number;\n} CustomObject;\nBecause we now have data to manage, we have to be more careful about object allocation and deallocation. 
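The ownership that Custom_dealloc releases by hand is ordinary CPython reference counting, which can be observed from Python with sys.getrefcount (a CPython-specific illustration; exact counts are implementation details of the standard, non-free-threaded build):

```python
import sys

obj = object()               # a fresh object with no other owners
base = sys.getrefcount(obj)  # getrefcount's own argument adds a temporary reference
holder = [obj]               # storing the object takes a new reference
assert sys.getrefcount(obj) == base + 1
del holder                   # destroying the list releases that reference again
assert sys.getrefcount(obj) == base
```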
At a minimum, we need a deallocation method:\nstatic void\nCustom_dealloc(PyObject *op)\n{\nCustomObject *self = (CustomObject *) op;\nPy_XDECREF(self->first);\nPy_XDECREF(self->last);\nPy_TYPE(self)->tp_free(self);\n}\nwhich is assigned to the tp_dealloc\nmember:\n.tp_dealloc = Custom_dealloc,\nThis method first clears the reference counts of the two Python attributes.\nPy_XDECREF()\ncorrectly handles the case where its argument is\nNULL\n(which might happen here if tp_new\nfailed midway). It then\ncalls the tp_free\nmember of the object\u2019s type\n(computed by Py_TYPE(self)\n) to free the object\u2019s memory. Note that\nthe object\u2019s type might not be CustomType\n, because the object may\nbe an instance of a subclass.\nNote\nThe explicit cast to CustomObject *\nabove is needed because we defined\nCustom_dealloc\nto take a PyObject *\nargument, as the tp_dealloc\nfunction pointer expects to receive a PyObject *\nargument.\nBy assigning to the tp_dealloc\nslot of a type, we declare\nthat it can only be called with instances of our CustomObject\nclass, so the cast to (CustomObject *)\nis safe.\nThis is object-oriented polymorphism, in C!\nIn existing code, or in previous versions of this tutorial,\nyou might see similar functions take a pointer to the subtype\nobject structure (CustomObject*\n) directly, like this:\nCustom_dealloc(CustomObject *self)\n{\nPy_XDECREF(self->first);\nPy_XDECREF(self->last);\nPy_TYPE(self)->tp_free((PyObject *) self);\n}\n...\n.tp_dealloc = (destructor) Custom_dealloc,\nThis does the same thing on all architectures that CPython supports, but according to the C standard, it invokes undefined behavior.\nWe want to make sure that the first and last names are initialized to empty\nstrings, so we provide a tp_new\nimplementation:\nstatic PyObject *\nCustom_new(PyTypeObject *type, PyObject *args, PyObject *kwds)\n{\nCustomObject *self;\nself = (CustomObject *) type->tp_alloc(type, 0);\nif (self != NULL) {\nself->first = 
PyUnicode_FromString(\"\");\nif (self->first == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->last = PyUnicode_FromString(\"\");\nif (self->last == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->number = 0;\n}\nreturn (PyObject *) self;\n}\nand install it in the tp_new\nmember:\n.tp_new = Custom_new,\nThe tp_new\nhandler is responsible for creating (as opposed to initializing)\nobjects of the type. It is exposed in Python as the __new__()\nmethod.\nIt is not required to define a tp_new\nmember, and indeed many extension\ntypes will simply reuse PyType_GenericNew()\nas done in the first\nversion of the Custom\ntype above. In this case, we use the tp_new\nhandler to initialize the first\nand last\nattributes to non-NULL\ndefault values.\ntp_new\nis passed the type being instantiated (not necessarily CustomType\n,\nif a subclass is instantiated) and any arguments passed when the type was\ncalled, and is expected to return the instance created. tp_new\nhandlers\nalways accept positional and keyword arguments, but they often ignore the\narguments, leaving the argument handling to initializer (a.k.a. tp_init\nin C or __init__\nin Python) methods.\nNote\ntp_new\nshouldn\u2019t call tp_init\nexplicitly, as the interpreter\nwill do it itself.\nThe tp_new\nimplementation calls the tp_alloc\nslot to allocate memory:\nself = (CustomObject *) type->tp_alloc(type, 0);\nSince memory allocation may fail, we must check the tp_alloc\nresult against NULL\nbefore proceeding.\nNote\nWe didn\u2019t fill the tp_alloc\nslot ourselves. Rather\nPyType_Ready()\nfills it for us by inheriting it from our base class,\nwhich is object\nby default. Most types use the default allocation\nstrategy.\nNote\nIf you are creating a co-operative tp_new\n(one\nthat calls a base type\u2019s tp_new\nor __new__()\n),\nyou must not try to determine what method to call using method resolution\norder at runtime. 
Always statically determine what type you are going to\ncall, and call its tp_new\ndirectly, or via\ntype->tp_base->tp_new\n. If you do not do this, Python subclasses of your\ntype that also inherit from other Python-defined classes may not work correctly.\n(Specifically, you may not be able to create instances of such subclasses\nwithout getting a TypeError\n.)\nWe also define an initialization function which accepts arguments to provide initial values for our instance:\nstatic int\nCustom_init(PyObject *op, PyObject *args, PyObject *kwds)\n{\nCustomObject *self = (CustomObject *) op;\nstatic char *kwlist[] = {\"first\", \"last\", \"number\", NULL};\nPyObject *first = NULL, *last = NULL, *tmp;\nif (!PyArg_ParseTupleAndKeywords(args, kwds, \"|OOi\", kwlist,\n&first, &last,\n&self->number))\nreturn -1;\nif (first) {\ntmp = self->first;\nPy_INCREF(first);\nself->first = first;\nPy_XDECREF(tmp);\n}\nif (last) {\ntmp = self->last;\nPy_INCREF(last);\nself->last = last;\nPy_XDECREF(tmp);\n}\nreturn 0;\n}\nby filling the tp_init\nslot.\n.tp_init = Custom_init,\nThe tp_init\nslot is exposed in Python as the\n__init__()\nmethod. It is used to initialize an object after it\u2019s\ncreated. Initializers always accept positional and keyword arguments,\nand they should return either 0\non success or -1\non error.\nUnlike the tp_new\nhandler, there is no guarantee that tp_init\nis called at all (for example, the pickle\nmodule by default\ndoesn\u2019t call __init__()\non unpickled instances). It can also be\ncalled multiple times. Anyone can call the __init__()\nmethod on\nour objects. For this reason, we have to be extra careful when assigning\nthe new attribute values. We might be tempted, for example to assign the\nfirst\nmember like this:\nif (first) {\nPy_XDECREF(self->first);\nPy_INCREF(first);\nself->first = first;\n}\nBut this would be risky. Our type doesn\u2019t restrict the type of the\nfirst\nmember, so it could be any kind of object. 
It could have a\ndestructor that causes code to be executed that tries to access the\nfirst\nmember; or that destructor could detach the\nthread state and let arbitrary code run in other\nthreads that accesses and modifies our object.\nTo be paranoid and protect ourselves against this possibility, we almost always reassign members before decrementing their reference counts. When don\u2019t we have to do this?\nwhen we absolutely know that the reference count is greater than 1;\nwhen we know that deallocation of the object [1] will neither detach the thread state nor cause any calls back into our type\u2019s code;\nwhen decrementing a reference count in a\ntp_dealloc\nhandler on a type which doesn\u2019t support cyclic garbage collection [2].\nWe want to expose our instance variables as attributes. There are a number of ways to do that. The simplest way is to define member definitions:\nstatic PyMemberDef Custom_members[] = {\n{\"first\", Py_T_OBJECT_EX, offsetof(CustomObject, first), 0,\n\"first name\"},\n{\"last\", Py_T_OBJECT_EX, offsetof(CustomObject, last), 0,\n\"last name\"},\n{\"number\", Py_T_INT, offsetof(CustomObject, number), 0,\n\"custom number\"},\n{NULL} /* Sentinel */\n};\nand put the definitions in the tp_members\nslot:\n.tp_members = Custom_members,\nEach member definition has a member name, type, offset, access flags and documentation string. See the Generic Attribute Management section below for details.\nA disadvantage of this approach is that it doesn\u2019t provide a way to restrict the\ntypes of objects that can be assigned to the Python attributes. We expect the\nfirst and last names to be strings, but any Python objects can be assigned.\nFurther, the attributes can be deleted, setting the C pointers to NULL\n. 
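The drawbacks of plain member definitions are easy to observe from the Python side. Here is a minimal pure-Python stand-in (an illustrative sketch, not the extension type itself) that behaves the same way as the Py_T_OBJECT_EX members:

```python
# Pure-Python stand-in for the PyMemberDef-based attributes: nothing
# restricts what may be assigned, and the attributes may be deleted.
class Custom:
    def __init__(self, first="", last="", number=0):
        self.first = first
        self.last = last
        self.number = number

c = Custom()
c.first = 42        # accepts any object, not just strings
del c.first         # deletion succeeds (in C, the pointer becomes NULL)
```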
Even\nthough we can make sure the members are initialized to non-NULL\nvalues, the\nmembers can be set to NULL\nif the attributes are deleted.\nWe define a single method, Custom.name()\n, that outputs the objects name as the\nconcatenation of the first and last names.\nstatic PyObject *\nCustom_name(PyObject *op, PyObject *Py_UNUSED(dummy))\n{\nCustomObject *self = (CustomObject *) op;\nif (self->first == NULL) {\nPyErr_SetString(PyExc_AttributeError, \"first\");\nreturn NULL;\n}\nif (self->last == NULL) {\nPyErr_SetString(PyExc_AttributeError, \"last\");\nreturn NULL;\n}\nreturn PyUnicode_FromFormat(\"%S %S\", self->first, self->last);\n}\nThe method is implemented as a C function that takes a Custom\n(or\nCustom\nsubclass) instance as the first argument. Methods always take an\ninstance as the first argument. Methods often take positional and keyword\narguments as well, but in this case we don\u2019t take any and don\u2019t need to accept\na positional argument tuple or keyword argument dictionary. This method is\nequivalent to the Python method:\ndef name(self):\nreturn \"%s %s\" % (self.first, self.last)\nNote that we have to check for the possibility that our first\nand\nlast\nmembers are NULL\n. This is because they can be deleted, in which\ncase they are set to NULL\n. It would be better to prevent deletion of these\nattributes and to restrict the attribute values to be strings. We\u2019ll see how to\ndo that in the next section.\nNow that we\u2019ve defined the method, we need to create an array of method definitions:\nstatic PyMethodDef Custom_methods[] = {\n{\"name\", Custom_name, METH_NOARGS,\n\"Return the name, combining the first and last name\"\n},\n{NULL} /* Sentinel */\n};\n(note that we used the METH_NOARGS\nflag to indicate that the method\nis expecting no arguments other than self)\nand assign it to the tp_methods\nslot:\n.tp_methods = Custom_methods,\nFinally, we\u2019ll make our type usable as a base class for subclassing. 
We\u2019ve\nwritten our methods carefully so far so that they don\u2019t make any assumptions\nabout the type of the object being created or used, so all we need to do is\nto add the Py_TPFLAGS_BASETYPE\nto our class flag definition:\n.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,\nWe rename PyInit_custom()\nto PyInit_custom2()\n, update the\nmodule name in the PyModuleDef\nstruct, and update the full class\nname in the PyTypeObject\nstruct.\nFinally, we update our setup.py\nfile to include the new module,\nfrom setuptools import Extension, setup\nsetup(ext_modules=[\nExtension(\"custom\", [\"custom.c\"]),\nExtension(\"custom2\", [\"custom2.c\"]),\n])\nand then we re-install so that we can import custom2\n:\n$ python -m pip install .\n2.3. Providing finer control over data attributes\u00b6\nIn this section, we\u2019ll provide finer control over how the first\nand\nlast\nattributes are set in the Custom\nexample. In the previous\nversion of our module, the instance variables first\nand last\ncould be set to non-string values or even deleted. 
We want to make sure that\nthese attributes always contain strings.\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\n#include <stddef.h> /* for offsetof() */\ntypedef struct {\nPyObject_HEAD\nPyObject *first; /* first name */\nPyObject *last; /* last name */\nint number;\n} CustomObject;\nstatic void\nCustom_dealloc(PyObject *op)\n{\nCustomObject *self = (CustomObject *) op;\nPy_XDECREF(self->first);\nPy_XDECREF(self->last);\nPy_TYPE(self)->tp_free(self);\n}\nstatic PyObject *\nCustom_new(PyTypeObject *type, PyObject *args, PyObject *kwds)\n{\nCustomObject *self;\nself = (CustomObject *) type->tp_alloc(type, 0);\nif (self != NULL) {\nself->first = Py_GetConstant(Py_CONSTANT_EMPTY_STR);\nif (self->first == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->last = Py_GetConstant(Py_CONSTANT_EMPTY_STR);\nif (self->last == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->number = 0;\n}\nreturn (PyObject *) self;\n}\nstatic int\nCustom_init(PyObject *op, PyObject *args, PyObject *kwds)\n{\nCustomObject *self = (CustomObject *) op;\nstatic char *kwlist[] = {\"first\", \"last\", \"number\", NULL};\nPyObject *first = NULL, *last = NULL;\nif (!PyArg_ParseTupleAndKeywords(args, kwds, \"|UUi\", kwlist,\n&first, &last,\n&self->number))\nreturn -1;\nif (first) {\nPy_SETREF(self->first, Py_NewRef(first));\n}\nif (last) {\nPy_SETREF(self->last, Py_NewRef(last));\n}\nreturn 0;\n}\nstatic PyMemberDef Custom_members[] = {\n{\"number\", Py_T_INT, offsetof(CustomObject, number), 0,\n\"custom number\"},\n{NULL} /* Sentinel */\n};\nstatic PyObject *\nCustom_getfirst(PyObject *op, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nreturn Py_NewRef(self->first);\n}\nstatic int\nCustom_setfirst(PyObject *op, PyObject *value, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nif (value == NULL) {\nPyErr_SetString(PyExc_TypeError, \"Cannot delete the first attribute\");\nreturn -1;\n}\nif (!PyUnicode_Check(value)) {\nPyErr_SetString(PyExc_TypeError,\n\"The first attribute value must be a 
string\");\nreturn -1;\n}\nPy_SETREF(self->first, Py_NewRef(value));\nreturn 0;\n}\nstatic PyObject *\nCustom_getlast(PyObject *op, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nreturn Py_NewRef(self->last);\n}\nstatic int\nCustom_setlast(PyObject *op, PyObject *value, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nif (value == NULL) {\nPyErr_SetString(PyExc_TypeError, \"Cannot delete the last attribute\");\nreturn -1;\n}\nif (!PyUnicode_Check(value)) {\nPyErr_SetString(PyExc_TypeError,\n\"The last attribute value must be a string\");\nreturn -1;\n}\nPy_SETREF(self->last, Py_NewRef(value));\nreturn 0;\n}\nstatic PyGetSetDef Custom_getsetters[] = {\n{\"first\", Custom_getfirst, Custom_setfirst,\n\"first name\", NULL},\n{\"last\", Custom_getlast, Custom_setlast,\n\"last name\", NULL},\n{NULL} /* Sentinel */\n};\nstatic PyObject *\nCustom_name(PyObject *op, PyObject *Py_UNUSED(dummy))\n{\nCustomObject *self = (CustomObject *) op;\nreturn PyUnicode_FromFormat(\"%S %S\", self->first, self->last);\n}\nstatic PyMethodDef Custom_methods[] = {\n{\"name\", Custom_name, METH_NOARGS,\n\"Return the name, combining the first and last name\"\n},\n{NULL} /* Sentinel */\n};\nstatic PyTypeObject CustomType = {\n.ob_base = PyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"custom3.Custom\",\n.tp_doc = PyDoc_STR(\"Custom objects\"),\n.tp_basicsize = sizeof(CustomObject),\n.tp_itemsize = 0,\n.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,\n.tp_new = Custom_new,\n.tp_init = Custom_init,\n.tp_dealloc = Custom_dealloc,\n.tp_members = Custom_members,\n.tp_methods = Custom_methods,\n.tp_getset = Custom_getsetters,\n};\nstatic int\ncustom_module_exec(PyObject *m)\n{\nif (PyType_Ready(&CustomType) < 0) {\nreturn -1;\n}\nif (PyModule_AddObjectRef(m, \"Custom\", (PyObject *) &CustomType) < 0) {\nreturn -1;\n}\nreturn 0;\n}\nstatic PyModuleDef_Slot custom_module_slots[] = {\n{Py_mod_exec, custom_module_exec},\n{Py_mod_multiple_interpreters, 
Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED},\n{0, NULL}\n};\nstatic PyModuleDef custom_module = {\n.m_base = PyModuleDef_HEAD_INIT,\n.m_name = \"custom3\",\n.m_doc = \"Example module that creates an extension type.\",\n.m_size = 0,\n.m_slots = custom_module_slots,\n};\nPyMODINIT_FUNC\nPyInit_custom3(void)\n{\nreturn PyModuleDef_Init(&custom_module);\n}\nTo provide greater control, over the first\nand last\nattributes,\nwe\u2019ll use custom getter and setter functions. Here are the functions for\ngetting and setting the first\nattribute:\nstatic PyObject *\nCustom_getfirst(PyObject *op, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nPy_INCREF(self->first);\nreturn self->first;\n}\nstatic int\nCustom_setfirst(PyObject *op, PyObject *value, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nPyObject *tmp;\nif (value == NULL) {\nPyErr_SetString(PyExc_TypeError, \"Cannot delete the first attribute\");\nreturn -1;\n}\nif (!PyUnicode_Check(value)) {\nPyErr_SetString(PyExc_TypeError,\n\"The first attribute value must be a string\");\nreturn -1;\n}\ntmp = self->first;\nPy_INCREF(value);\nself->first = value;\nPy_DECREF(tmp);\nreturn 0;\n}\nThe getter function is passed a Custom\nobject and a \u201cclosure\u201d, which is\na void pointer. In this case, the closure is ignored. (The closure supports an\nadvanced usage in which definition data is passed to the getter and setter. This\ncould, for example, be used to allow a single set of getter and setter functions\nthat decide the attribute to get or set based on data in the closure.)\nThe setter function is passed the Custom\nobject, the new value, and the\nclosure. The new value may be NULL\n, in which case the attribute is being\ndeleted. 
In our setter, we raise an error if the attribute is deleted or if its\nnew value is not a string.\nWe create an array of PyGetSetDef\nstructures:\nstatic PyGetSetDef Custom_getsetters[] = {\n{\"first\", Custom_getfirst, Custom_setfirst,\n\"first name\", NULL},\n{\"last\", Custom_getlast, Custom_setlast,\n\"last name\", NULL},\n{NULL} /* Sentinel */\n};\nand register it in the tp_getset\nslot:\n.tp_getset = Custom_getsetters,\nThe last item in a PyGetSetDef\nstructure is the \u201cclosure\u201d mentioned\nabove. In this case, we aren\u2019t using a closure, so we just pass NULL\n.\nWe also remove the member definitions for these attributes:\nstatic PyMemberDef Custom_members[] = {\n{\"number\", Py_T_INT, offsetof(CustomObject, number), 0,\n\"custom number\"},\n{NULL} /* Sentinel */\n};\nWe also need to update the tp_init\nhandler to only\nallow strings [3] to be passed:\nstatic int\nCustom_init(PyObject *op, PyObject *args, PyObject *kwds)\n{\nCustomObject *self = (CustomObject *) op;\nstatic char *kwlist[] = {\"first\", \"last\", \"number\", NULL};\nPyObject *first = NULL, *last = NULL, *tmp;\nif (!PyArg_ParseTupleAndKeywords(args, kwds, \"|UUi\", kwlist,\n&first, &last,\n&self->number))\nreturn -1;\nif (first) {\ntmp = self->first;\nPy_INCREF(first);\nself->first = first;\nPy_DECREF(tmp);\n}\nif (last) {\ntmp = self->last;\nPy_INCREF(last);\nself->last = last;\nPy_DECREF(tmp);\n}\nreturn 0;\n}\nWith these changes, we can assure that the first\nand last\nmembers are\nnever NULL\nso we can remove checks for NULL\nvalues in almost all cases.\nThis means that most of the Py_XDECREF()\ncalls can be converted to\nPy_DECREF()\ncalls. 
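Seen from Python, the getter, setter, and stricter initializer together behave like a property-based class. The sketch below is an illustrative pure-Python stand-in, not the extension type itself; only first is shown, and last is symmetric:

```python
class Custom:
    def __init__(self, first=""):
        if not isinstance(first, str):          # mirrors the "U" format unit
            raise TypeError("The first attribute value must be a string")
        self._first = first

    @property
    def first(self):
        return self._first

    @first.setter
    def first(self, value):
        if not isinstance(value, str):
            raise TypeError("The first attribute value must be a string")
        self._first = value

    @first.deleter
    def first(self):
        raise TypeError("Cannot delete the first attribute")
```

With this invariant in place, first can never be observed unset, which is the Python-level analogue of being able to drop the NULL checks.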
The only place we can\u2019t change these calls is in\nthe tp_dealloc\nimplementation, where there is the possibility that the\ninitialization of these members failed in tp_new\n.\nWe also rename the module initialization function and module name in the\ninitialization function, as we did before, and we add an extra definition to the\nsetup.py\nfile.\n2.4. Supporting cyclic garbage collection\u00b6\nPython has a cyclic garbage collector (GC) that can identify unneeded objects even when their reference counts are not zero. This can happen when objects are involved in cycles. For example, consider:\n>>> l = []\n>>> l.append(l)\n>>> del l\nIn this example, we create a list that contains itself. When we delete it, it still has a reference from itself. Its reference count doesn\u2019t drop to zero. Fortunately, Python\u2019s cyclic garbage collector will eventually figure out that the list is garbage and free it.\nIn the second version of the Custom\nexample, we allowed any kind of\nobject to be stored in the first\nor last\nattributes [4].\nBesides, in the second and third versions, we allowed subclassing\nCustom\n, and subclasses may add arbitrary attributes. 
For any of\nthose two reasons, Custom\nobjects can participate in cycles:\n>>> import custom3\n>>> class Derived(custom3.Custom): pass\n...\n>>> n = Derived()\n>>> n.some_attribute = n\nTo allow a Custom\ninstance participating in a reference cycle to\nbe properly detected and collected by the cyclic GC, our Custom\ntype\nneeds to fill two additional slots and to enable a flag that enables these slots:\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\n#include <stddef.h> /* for offsetof() */\ntypedef struct {\nPyObject_HEAD\nPyObject *first; /* first name */\nPyObject *last; /* last name */\nint number;\n} CustomObject;\nstatic int\nCustom_traverse(PyObject *op, visitproc visit, void *arg)\n{\nCustomObject *self = (CustomObject *) op;\nPy_VISIT(self->first);\nPy_VISIT(self->last);\nreturn 0;\n}\nstatic int\nCustom_clear(PyObject *op)\n{\nCustomObject *self = (CustomObject *) op;\nPy_CLEAR(self->first);\nPy_CLEAR(self->last);\nreturn 0;\n}\nstatic void\nCustom_dealloc(PyObject *op)\n{\nPyObject_GC_UnTrack(op);\n(void)Custom_clear(op);\nPy_TYPE(op)->tp_free(op);\n}\nstatic PyObject *\nCustom_new(PyTypeObject *type, PyObject *args, PyObject *kwds)\n{\nCustomObject *self;\nself = (CustomObject *) type->tp_alloc(type, 0);\nif (self != NULL) {\nself->first = Py_GetConstant(Py_CONSTANT_EMPTY_STR);\nif (self->first == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->last = Py_GetConstant(Py_CONSTANT_EMPTY_STR);\nif (self->last == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->number = 0;\n}\nreturn (PyObject *) self;\n}\nstatic int\nCustom_init(PyObject *op, PyObject *args, PyObject *kwds)\n{\nCustomObject *self = (CustomObject *) op;\nstatic char *kwlist[] = {\"first\", \"last\", \"number\", NULL};\nPyObject *first = NULL, *last = NULL;\nif (!PyArg_ParseTupleAndKeywords(args, kwds, \"|UUi\", kwlist,\n&first, &last,\n&self->number))\nreturn -1;\nif (first) {\nPy_SETREF(self->first, Py_NewRef(first));\n}\nif (last) {\nPy_SETREF(self->last, Py_NewRef(last));\n}\nreturn 0;\n}\nstatic 
PyMemberDef Custom_members[] = {\n{\"number\", Py_T_INT, offsetof(CustomObject, number), 0,\n\"custom number\"},\n{NULL} /* Sentinel */\n};\nstatic PyObject *\nCustom_getfirst(PyObject *op, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nreturn Py_NewRef(self->first);\n}\nstatic int\nCustom_setfirst(PyObject *op, PyObject *value, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nif (value == NULL) {\nPyErr_SetString(PyExc_TypeError, \"Cannot delete the first attribute\");\nreturn -1;\n}\nif (!PyUnicode_Check(value)) {\nPyErr_SetString(PyExc_TypeError,\n\"The first attribute value must be a string\");\nreturn -1;\n}\nPy_XSETREF(self->first, Py_NewRef(value));\nreturn 0;\n}\nstatic PyObject *\nCustom_getlast(PyObject *op, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nreturn Py_NewRef(self->last);\n}\nstatic int\nCustom_setlast(PyObject *op, PyObject *value, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nif (value == NULL) {\nPyErr_SetString(PyExc_TypeError, \"Cannot delete the last attribute\");\nreturn -1;\n}\nif (!PyUnicode_Check(value)) {\nPyErr_SetString(PyExc_TypeError,\n\"The last attribute value must be a string\");\nreturn -1;\n}\nPy_XSETREF(self->last, Py_NewRef(value));\nreturn 0;\n}\nstatic PyGetSetDef Custom_getsetters[] = {\n{\"first\", Custom_getfirst, Custom_setfirst,\n\"first name\", NULL},\n{\"last\", Custom_getlast, Custom_setlast,\n\"last name\", NULL},\n{NULL} /* Sentinel */\n};\nstatic PyObject *\nCustom_name(PyObject *op, PyObject *Py_UNUSED(dummy))\n{\nCustomObject *self = (CustomObject *) op;\nreturn PyUnicode_FromFormat(\"%S %S\", self->first, self->last);\n}\nstatic PyMethodDef Custom_methods[] = {\n{\"name\", Custom_name, METH_NOARGS,\n\"Return the name, combining the first and last name\"\n},\n{NULL} /* Sentinel */\n};\nstatic PyTypeObject CustomType = {\n.ob_base = PyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"custom4.Custom\",\n.tp_doc = PyDoc_STR(\"Custom objects\"),\n.tp_basicsize 
= sizeof(CustomObject),\n.tp_itemsize = 0,\n.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC,\n.tp_new = Custom_new,\n.tp_init = Custom_init,\n.tp_dealloc = Custom_dealloc,\n.tp_traverse = Custom_traverse,\n.tp_clear = Custom_clear,\n.tp_members = Custom_members,\n.tp_methods = Custom_methods,\n.tp_getset = Custom_getsetters,\n};\nstatic int\ncustom_module_exec(PyObject *m)\n{\nif (PyType_Ready(&CustomType) < 0) {\nreturn -1;\n}\nif (PyModule_AddObjectRef(m, \"Custom\", (PyObject *) &CustomType) < 0) {\nreturn -1;\n}\nreturn 0;\n}\nstatic PyModuleDef_Slot custom_module_slots[] = {\n{Py_mod_exec, custom_module_exec},\n{Py_mod_multiple_interpreters, Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED},\n{0, NULL}\n};\nstatic PyModuleDef custom_module = {\n.m_base = PyModuleDef_HEAD_INIT,\n.m_name = \"custom4\",\n.m_doc = \"Example module that creates an extension type.\",\n.m_size = 0,\n.m_slots = custom_module_slots,\n};\nPyMODINIT_FUNC\nPyInit_custom4(void)\n{\nreturn PyModuleDef_Init(&custom_module);\n}\nFirst, the traversal method lets the cyclic GC know about subobjects that could participate in cycles:\nstatic int\nCustom_traverse(PyObject *op, visitproc visit, void *arg)\n{\nCustomObject *self = (CustomObject *) op;\nint vret;\nif (self->first) {\nvret = visit(self->first, arg);\nif (vret != 0)\nreturn vret;\n}\nif (self->last) {\nvret = visit(self->last, arg);\nif (vret != 0)\nreturn vret;\n}\nreturn 0;\n}\nFor each subobject that can participate in cycles, we need to call the\nvisit()\nfunction, which is passed to the traversal method. The\nvisit()\nfunction takes as arguments the subobject and the extra argument\narg passed to the traversal method. It returns an integer value that must be\nreturned if it is non-zero.\nPython provides a Py_VISIT()\nmacro that automates calling visit\nfunctions. 
With Py_VISIT()\n, we can minimize the amount of boilerplate\nin Custom_traverse\n:\nstatic int\nCustom_traverse(PyObject *op, visitproc visit, void *arg)\n{\nCustomObject *self = (CustomObject *) op;\nPy_VISIT(self->first);\nPy_VISIT(self->last);\nreturn 0;\n}\nNote\nThe tp_traverse\nimplementation must name its\narguments exactly visit and arg in order to use Py_VISIT()\n.\nSecond, we need to provide a method for clearing any subobjects that can participate in cycles:\nstatic int\nCustom_clear(PyObject *op)\n{\nCustomObject *self = (CustomObject *) op;\nPy_CLEAR(self->first);\nPy_CLEAR(self->last);\nreturn 0;\n}\nNotice the use of the Py_CLEAR()\nmacro. It is the recommended and safe\nway to clear data attributes of arbitrary types while decrementing\ntheir reference counts. If you were to call Py_XDECREF()\ninstead\non the attribute before setting it to NULL\n, there is a possibility\nthat the attribute\u2019s destructor would call back into code that reads the\nattribute again (especially if there is a reference cycle).\nNote\nYou could emulate Py_CLEAR()\nby writing:\nPyObject *tmp;\ntmp = self->first;\nself->first = NULL;\nPy_XDECREF(tmp);\nNevertheless, it is much easier and less error-prone to always\nuse Py_CLEAR()\nwhen deleting an attribute. Don\u2019t\ntry to micro-optimize at the expense of robustness!\nThe deallocator Custom_dealloc\nmay call arbitrary code when clearing\nattributes. 
It means the circular GC can be triggered inside the function.\nSince the GC assumes reference count is not zero, we need to untrack the object\nfrom the GC by calling PyObject_GC_UnTrack()\nbefore clearing members.\nHere is our reimplemented deallocator using PyObject_GC_UnTrack()\nand Custom_clear\n:\nstatic void\nCustom_dealloc(PyObject *op)\n{\nPyObject_GC_UnTrack(op);\n(void)Custom_clear(op);\nPy_TYPE(op)->tp_free(op);\n}\nFinally, we add the Py_TPFLAGS_HAVE_GC\nflag to the class flags:\n.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC,\nThat\u2019s pretty much it. If we had written custom tp_alloc\nor\ntp_free\nhandlers, we\u2019d need to modify them for cyclic\ngarbage collection. Most extensions will use the versions automatically provided.\n2.5. Subclassing other types\u00b6\nIt is possible to create new extension types that are derived from existing\ntypes. It is easiest to inherit from the built in types, since an extension can\neasily use the PyTypeObject\nit needs. It can be difficult to share\nthese PyTypeObject\nstructures between extension modules.\nIn this example we will create a SubList\ntype that inherits from the\nbuilt-in list\ntype. 
The new type will be completely compatible with\nregular lists, but will have an additional increment()\nmethod that\nincreases an internal counter:\n>>> import sublist\n>>> s = sublist.SubList(range(3))\n>>> s.extend(s)\n>>> print(len(s))\n6\n>>> print(s.increment())\n1\n>>> print(s.increment())\n2\n#define PY_SSIZE_T_CLEAN\n#include \ntypedef struct {\nPyListObject list;\nint state;\n} SubListObject;\nstatic PyObject *\nSubList_increment(PyObject *op, PyObject *Py_UNUSED(dummy))\n{\nSubListObject *self = (SubListObject *) op;\nself->state++;\nreturn PyLong_FromLong(self->state);\n}\nstatic PyMethodDef SubList_methods[] = {\n{\"increment\", SubList_increment, METH_NOARGS,\nPyDoc_STR(\"increment state counter\")},\n{NULL},\n};\nstatic int\nSubList_init(PyObject *op, PyObject *args, PyObject *kwds)\n{\nSubListObject *self = (SubListObject *) op;\nif (PyList_Type.tp_init(op, args, kwds) < 0)\nreturn -1;\nself->state = 0;\nreturn 0;\n}\nstatic PyTypeObject SubListType = {\n.ob_base = PyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"sublist.SubList\",\n.tp_doc = PyDoc_STR(\"SubList objects\"),\n.tp_basicsize = sizeof(SubListObject),\n.tp_itemsize = 0,\n.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,\n.tp_init = SubList_init,\n.tp_methods = SubList_methods,\n};\nstatic int\nsublist_module_exec(PyObject *m)\n{\nSubListType.tp_base = &PyList_Type;\nif (PyType_Ready(&SubListType) < 0) {\nreturn -1;\n}\nif (PyModule_AddObjectRef(m, \"SubList\", (PyObject *) &SubListType) < 0) {\nreturn -1;\n}\nreturn 0;\n}\nstatic PyModuleDef_Slot sublist_module_slots[] = {\n{Py_mod_exec, sublist_module_exec},\n{Py_mod_multiple_interpreters, Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED},\n{0, NULL}\n};\nstatic PyModuleDef sublist_module = {\n.m_base = PyModuleDef_HEAD_INIT,\n.m_name = \"sublist\",\n.m_doc = \"Example module that creates an extension type.\",\n.m_size = 0,\n.m_slots = sublist_module_slots,\n};\nPyMODINIT_FUNC\nPyInit_sublist(void)\n{\nreturn 
PyModuleDef_Init(&sublist_module);\n}\nAs you can see, the source code closely resembles the Custom\nexamples in\nprevious sections. We will break down the main differences between them.\ntypedef struct {\nPyListObject list;\nint state;\n} SubListObject;\nThe primary difference for derived type objects is that the base type\u2019s\nobject structure must be the first value. The base type will already include\nthe PyObject_HEAD()\nat the beginning of its structure.\nWhen a Python object is a SubList\ninstance, its PyObject *\npointer\ncan be safely cast to both PyListObject *\nand SubListObject *\n:\nstatic int\nSubList_init(PyObject *op, PyObject *args, PyObject *kwds)\n{\nSubListObject *self = (SubListObject *) op;\nif (PyList_Type.tp_init(op, args, kwds) < 0)\nreturn -1;\nself->state = 0;\nreturn 0;\n}\nWe see above how to call through to the __init__()\nmethod of the base\ntype.\nThis pattern is important when writing a type with custom\ntp_new\nand tp_dealloc\nmembers. The tp_new\nhandler should not actually\ncreate the memory for the object with its tp_alloc\n,\nbut let the base class handle it by calling its own tp_new\n.\nThe PyTypeObject\nstruct supports a tp_base\nspecifying the type\u2019s concrete base class. Due to cross-platform compiler\nissues, you can\u2019t fill that field directly with a reference to\nPyList_Type\n; it should be done in the Py_mod_exec\nfunction:\nstatic int\nsublist_module_exec(PyObject *m)\n{\nSubListType.tp_base = &PyList_Type;\nif (PyType_Ready(&SubListType) < 0) {\nreturn -1;\n}\nif (PyModule_AddObjectRef(m, \"SubList\", (PyObject *) &SubListType) < 0) {\nreturn -1;\n}\nreturn 0;\n}\nBefore calling PyType_Ready()\n, the type structure must have the\ntp_base\nslot filled in. 
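For reference, the behavior the C SubList type provides matches this pure-Python equivalent (a sketch of the semantics, not the extension module itself), reproducing the interactive session shown above:

```python
class SubList(list):
    """Pure-Python equivalent of the C SubList extension type."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)  # like calling PyList_Type.tp_init
        self.state = 0                     # like SubList_init resetting state

    def increment(self):
        self.state += 1
        return self.state

s = SubList(range(3))
s.extend(s)            # fully compatible with regular list behavior
```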
When we are deriving an\nexisting type, it is not necessary to fill out the tp_alloc\nslot with PyType_GenericNew()\n\u2013 the allocation function from the base\ntype will be inherited.\nAfter that, calling PyType_Ready()\nand adding the type object to the\nmodule is the same as with the basic Custom\nexamples.\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 10635} +{"url": "https://docs.python.org/3/library/email.iterators.html", "title": ": Iterators", "content": "email.iterators\n: Iterators\u00b6\nSource code: Lib/email/iterators.py\nIterating over a message object tree is fairly easy with the\nMessage.walk\nmethod. The\nemail.iterators\nmodule provides some useful higher level iterations over\nmessage object trees.\n- email.iterators.body_line_iterator(msg, decode=False)\u00b6\nThis iterates over all the payloads in all the subparts of msg, returning the string payloads line-by-line. It skips over all the subpart headers, and it skips over any subpart with a payload that isn\u2019t a Python string. This is somewhat equivalent to reading the flat text representation of the message from a file using\nreadline()\n, skipping over all the intervening headers.Optional decode is passed through to\nMessage.get_payload\n.\n- email.iterators.typed_subpart_iterator(msg, maintype='text', subtype=None)\u00b6\nThis iterates over all the subparts of msg, returning only those subparts that match the MIME type specified by maintype and subtype.\nNote that subtype is optional; if omitted, then subpart MIME type matching is done only with the main type. maintype is optional too; it defaults to text.\nThus, by default\ntyped_subpart_iterator()\nreturns each subpart that has a MIME type of text/*.\nThe following function has been added as a useful debugging tool. 
It should not be considered part of the supported public interface for the package.\n- email.iterators._structure(msg, fp=None, level=0, include_default=False)\u00b6\nPrints an indented representation of the content types of the message object structure. For example:\n>>> msg = email.message_from_file(somefile) >>> _structure(msg) multipart/mixed text/plain text/plain multipart/digest message/rfc822 text/plain message/rfc822 text/plain message/rfc822 text/plain message/rfc822 text/plain message/rfc822 text/plain text/plain\nOptional fp is a file-like object to print the output to. It must be suitable for Python\u2019s\nprint()\nfunction. level is used internally. include_default, if true, prints the default type as well.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 495} +{"url": "https://docs.python.org/3/c-api/codec.html", "title": "Codec registry and support functions", "content": "Codec registry and support functions\u00b6\n-\nint PyCodec_Register(PyObject *search_function)\u00b6\n- Part of the Stable ABI.\nRegister a new codec search function.\nAs a side effect, this tries to load the\nencodings\npackage, if not yet done, to make sure that it is always first in the list of search functions.\n-\nint PyCodec_Unregister(PyObject *search_function)\u00b6\n- Part of the Stable ABI since version 3.10.\nUnregister a codec search function and clear the registry\u2019s cache. If the search function is not registered, do nothing. Return 0 on success. Raise an exception and return -1 on error.\nAdded in version 3.10.\n-\nint PyCodec_KnownEncoding(const char *encoding)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nor0\ndepending on whether there is a registered codec for the given encoding. This function always succeeds.\n-\nPyObject *PyCodec_Encode(PyObject *object, const char *encoding, const char *errors)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nGeneric codec based encoding API.\nobject is passed through the encoder function found for the given encoding using the error handling method defined by errors. errors may be\nNULL\nto use the default method defined for the codec. Raises aLookupError\nif no encoder can be found.\n-\nPyObject *PyCodec_Decode(PyObject *object, const char *encoding, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGeneric codec based decoding API.\nobject is passed through the decoder function found for the given encoding using the error handling method defined by errors. errors may be\nNULL\nto use the default method defined for the codec. Raises aLookupError\nif no decoder can be found.\nCodec lookup API\u00b6\nIn the following functions, the encoding string is looked up converted to all\nlower-case characters, which makes encodings looked up through this mechanism\neffectively case-insensitive. If no codec is found, a KeyError\nis set\nand NULL\nreturned.\n-\nPyObject *PyCodec_Encoder(const char *encoding)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGet an encoder function for the given encoding.\n-\nPyObject *PyCodec_Decoder(const char *encoding)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGet a decoder function for the given encoding.\n-\nPyObject *PyCodec_IncrementalEncoder(const char *encoding, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGet an\nIncrementalEncoder\nobject for the given encoding.\n-\nPyObject *PyCodec_IncrementalDecoder(const char *encoding, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGet an\nIncrementalDecoder\nobject for the given encoding.\n-\nPyObject *PyCodec_StreamReader(const char *encoding, PyObject *stream, const char *errors)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nGet a\nStreamReader\nfactory function for the given encoding.\n-\nPyObject *PyCodec_StreamWriter(const char *encoding, PyObject *stream, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGet a\nStreamWriter\nfactory function for the given encoding.\nRegistry API for Unicode encoding error handlers\u00b6\n-\nint PyCodec_RegisterError(const char *name, PyObject *error)\u00b6\n- Part of the Stable ABI.\nRegister the error handling callback function error under the given name. This callback function will be called by a codec when it encounters unencodable characters/undecodable bytes and name is specified as the error parameter in the call to the encode/decode function.\nThe callback gets a single argument, an instance of\nUnicodeEncodeError\n,UnicodeDecodeError\norUnicodeTranslateError\nthat holds information about the problematic sequence of characters or bytes and their offset in the original string (see Unicode Exception Objects for functions to extract this information). The callback must either raise the given exception, or return a two-item tuple containing the replacement for the problematic sequence, and an integer giving the offset in the original string at which encoding/decoding should be resumed.Return\n0\non success,-1\non error.\n-\nPyObject *PyCodec_LookupError(const char *name)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nLookup the error handling callback function registered under name. As a special case\nNULL\ncan be passed, in which case the error handling callback for \u201cstrict\u201d will be returned.\n-\nPyObject *PyCodec_StrictErrors(PyObject *exc)\u00b6\n- Return value: Always NULL. Part of the Stable ABI.\nRaise exc as an exception.\n-\nPyObject *PyCodec_IgnoreErrors(PyObject *exc)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nIgnore the unicode error, skipping the faulty input.\n-\nPyObject *PyCodec_ReplaceErrors(PyObject *exc)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReplace the unicode encode error with\n?\norU+FFFD\n.\n-\nPyObject *PyCodec_XMLCharRefReplaceErrors(PyObject *exc)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReplace the unicode encode error with XML character references.\n-\nPyObject *PyCodec_BackslashReplaceErrors(PyObject *exc)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReplace the unicode encode error with backslash escapes (\n\\x\n,\\u\nand\\U\n).\n-\nPyObject *PyCodec_NameReplaceErrors(PyObject *exc)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nReplace the unicode encode error with\n\\N{...}\nescapes.Added in version 3.5.\nCodec utility variables\u00b6\n-\nconst char *Py_hexdigits\u00b6\nA string constant containing the lowercase hexadecimal digits:\n\"0123456789abcdef\"\n.Added in version 3.3.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1364} +{"url": "https://docs.python.org/3/library/pyclbr.html", "title": " \u2014 Python module browser support", "content": "pyclbr\n\u2014 Python module browser support\u00b6\nSource code: Lib/pyclbr.py\nThe pyclbr\nmodule provides limited information about the\nfunctions, classes, and methods defined in a Python-coded module. The\ninformation is sufficient to implement a module browser. The\ninformation is extracted from the Python source code rather than by\nimporting the module, so this module is safe to use with untrusted code.\nThis restriction makes it impossible to use this module with modules not\nimplemented in Python, including all standard and optional extension\nmodules.\n- pyclbr.readmodule(module, path=None)\u00b6\nReturn a dictionary mapping module-level class names to class descriptors. If possible, descriptors for imported base classes are included. 
Parameter module is a string with the name of the module to read; it may be the name of a module within a package. If given, path is a sequence of directory paths prepended to\nsys.path\n, which is used to locate the module source code.This function is the original interface and is only kept for back compatibility. It returns a filtered version of the following.\n- pyclbr.readmodule_ex(module, path=None)\u00b6\nReturn a dictionary-based tree containing a function or class descriptors for each function and class defined in the module with a\ndef\norclass\nstatement. The returned dictionary maps module-level function and class names to their descriptors. Nested objects are entered into the children dictionary of their parent. As with readmodule, module names the module to be read and path is prepended to sys.path. If the module being read is a package, the returned dictionary has a key'__path__'\nwhose value is a list containing the package search path.\nAdded in version 3.7: Descriptors for nested definitions. They are accessed through the new children attribute. Each has a new parent attribute.\nThe descriptors returned by these functions are instances of Function and Class classes. Users are not expected to create instances of these classes.\nFunction Objects\u00b6\n- class pyclbr.Function\u00b6\nClass\nFunction\ninstances describe functions defined by def statements. They have the following attributes:- file\u00b6\nName of the file in which the function is defined.\n- module\u00b6\nThe name of the module defining the function described.\n- name\u00b6\nThe name of the function.\n- lineno\u00b6\nThe line number in the file where the definition starts.\n- parent\u00b6\nFor top-level functions,\nNone\n. 
For nested functions, the parent.Added in version 3.7.\n- children\u00b6\nA\ndictionary\nmapping names to descriptors for nested functions and classes.Added in version 3.7.\nClass Objects\u00b6\n- class pyclbr.Class\u00b6\nClass\nClass\ninstances describe classes defined by class statements. They have the same attributes asFunctions\nand two more.- file\u00b6\nName of the file in which the class is defined.\n- module\u00b6\nThe name of the module defining the class described.\n- name\u00b6\nThe name of the class.\n- lineno\u00b6\nThe line number in the file where the definition starts.\n- parent\u00b6\nFor top-level classes,\nNone\n. For nested classes, the parent.Added in version 3.7.\n- children\u00b6\nA dictionary mapping names to descriptors for nested functions and classes.\nAdded in version 3.7.\n- super\u00b6\nA list of\nClass\nobjects which describe the immediate base classes of the class being described. Classes which are named as superclasses but which are not discoverable byreadmodule_ex()\nare listed as a string with the class name instead of asClass\nobjects.\n- methods\u00b6\nA\ndictionary\nmapping method names to line numbers. This can be derived from the newerchildren\ndictionary, but remains for back-compatibility.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 888} +{"url": "https://docs.python.org/3/c-api/monitoring.html", "title": "Monitoring C API", "content": "Monitoring C API\u00b6\nAdded in version 3.13.\nAn extension may need to interact with the event monitoring system. Subscribing\nto events and registering callbacks can be done via the Python API exposed in\nsys.monitoring\n.\nGenerating Execution Events\u00b6\nThe functions below make it possible for an extension to fire monitoring\nevents as it emulates the execution of Python code. 
Each of these functions\naccepts a PyMonitoringState\nstruct which contains concise information\nabout the activation state of events, as well as the event arguments, which\ninclude a PyObject*\nrepresenting the code object, the instruction offset\nand sometimes additional, event-specific arguments (see sys.monitoring\nfor details about the signatures of the different event callbacks).\nThe codelike\nargument should be an instance of types.CodeType\nor of a type that emulates it.\nThe VM disables tracing when firing an event, so there is no need for user code to do that.\nMonitoring functions should not be called with an exception set, except those listed below as working with the current exception.\n-\ntype PyMonitoringState\u00b6\nRepresentation of the state of an event type. It is allocated by the user while its contents are maintained by the monitoring API functions described below.\nAll of the functions below return 0 on success and -1 (with an exception set) on error.\nSee sys.monitoring\nfor descriptions of the events.\n-\nint PyMonitoring_FirePyStartEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)\u00b6\nFire a\nPY_START\nevent.\n-\nint PyMonitoring_FirePyResumeEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)\u00b6\nFire a\nPY_RESUME\nevent.\n-\nint PyMonitoring_FirePyReturnEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *retval)\u00b6\nFire a\nPY_RETURN\nevent.\n-\nint PyMonitoring_FirePyYieldEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *retval)\u00b6\nFire a\nPY_YIELD\nevent.\n-\nint PyMonitoring_FireCallEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *callable, PyObject *arg0)\u00b6\nFire a\nCALL\nevent.\n-\nint PyMonitoring_FireLineEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, int lineno)\u00b6\nFire a\nLINE\nevent.\n-\nint PyMonitoring_FireJumpEvent(PyMonitoringState *state, PyObject *codelike, int32_t 
offset, PyObject *target_offset)\u00b6\nFire a\nJUMP\nevent.\n-\nint PyMonitoring_FireBranchLeftEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *target_offset)\u00b6\nFire a\nBRANCH_LEFT\nevent.\n-\nint PyMonitoring_FireBranchRightEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *target_offset)\u00b6\nFire a\nBRANCH_RIGHT\nevent.\n-\nint PyMonitoring_FireCReturnEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *retval)\u00b6\nFire a\nC_RETURN\nevent.\n-\nint PyMonitoring_FirePyThrowEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)\u00b6\nFire a\nPY_THROW\nevent with the current exception (as returned byPyErr_GetRaisedException()\n).\n-\nint PyMonitoring_FireRaiseEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)\u00b6\nFire a\nRAISE\nevent with the current exception (as returned byPyErr_GetRaisedException()\n).\n-\nint PyMonitoring_FireCRaiseEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)\u00b6\nFire a\nC_RAISE\nevent with the current exception (as returned byPyErr_GetRaisedException()\n).\n-\nint PyMonitoring_FireReraiseEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)\u00b6\nFire a\nRERAISE\nevent with the current exception (as returned byPyErr_GetRaisedException()\n).\n-\nint PyMonitoring_FireExceptionHandledEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)\u00b6\nFire an\nEXCEPTION_HANDLED\nevent with the current exception (as returned byPyErr_GetRaisedException()\n).\n-\nint PyMonitoring_FirePyUnwindEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)\u00b6\nFire a\nPY_UNWIND\nevent with the current exception (as returned byPyErr_GetRaisedException()\n).\n-\nint PyMonitoring_FireStopIterationEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *value)\u00b6\nFire a\nSTOP_ITERATION\nevent. Ifvalue\nis an instance ofStopIteration\n, it is used. 
Otherwise, a newStopIteration\ninstance is created withvalue\nas its argument.\nManaging the Monitoring State\u00b6\nMonitoring states can be managed with the help of monitoring scopes. A scope would typically correspond to a Python function.\n-\nint PyMonitoring_EnterScope(PyMonitoringState *state_array, uint64_t *version, const uint8_t *event_types, Py_ssize_t length)\u00b6\nEnter a monitored scope.\nevent_types\nis an array of the event IDs for events that may be fired from the scope. For example, the ID of aPY_START\nevent is the valuePY_MONITORING_EVENT_PY_START\n, which is numerically equal to the base-2 logarithm ofsys.monitoring.events.PY_START\n.state_array\nis an array with a monitoring state entry for each event inevent_types\n, it is allocated by the user but populated byPyMonitoring_EnterScope()\nwith information about the activation state of the event. The size ofevent_types\n(and hence also ofstate_array\n) is given inlength\n.The\nversion\nargument is a pointer to a value which should be allocated by the user together withstate_array\nand initialized to 0, and then set only byPyMonitoring_EnterScope()\nitself. It allows this function to determine whether event states have changed since the previous call, and to return quickly if they have not.The scopes referred to here are lexical scopes: a function, class or method.\nPyMonitoring_EnterScope()\nshould be called whenever the lexical scope is entered. Scopes can be reentered, reusing the same state_array and version, in situations like when emulating a recursive Python function. 
When a code-like\u2019s execution is paused, such as when emulating a generator, the scope needs to be exited and re-entered.The macros for event_types are:\nMacro\nEvent\n-\nPY_MONITORING_EVENT_BRANCH_LEFT\u00b6\n-\nPY_MONITORING_EVENT_BRANCH_RIGHT\u00b6\n-\nPY_MONITORING_EVENT_CALL\u00b6\n-\nPY_MONITORING_EVENT_C_RAISE\u00b6\n-\nPY_MONITORING_EVENT_C_RETURN\u00b6\n-\nPY_MONITORING_EVENT_EXCEPTION_HANDLED\u00b6\n-\nPY_MONITORING_EVENT_INSTRUCTION\u00b6\n-\nPY_MONITORING_EVENT_JUMP\u00b6\n-\nPY_MONITORING_EVENT_LINE\u00b6\n-\nPY_MONITORING_EVENT_PY_RESUME\u00b6\n-\nPY_MONITORING_EVENT_PY_RETURN\u00b6\n-\nPY_MONITORING_EVENT_PY_START\u00b6\n-\nPY_MONITORING_EVENT_PY_THROW\u00b6\n-\nPY_MONITORING_EVENT_PY_UNWIND\u00b6\n-\nPY_MONITORING_EVENT_PY_YIELD\u00b6\n-\nPY_MONITORING_EVENT_RAISE\u00b6\n-\nPY_MONITORING_EVENT_RERAISE\u00b6\n-\nPY_MONITORING_EVENT_STOP_ITERATION\u00b6\n-\nint PyMonitoring_ExitScope(void)\u00b6\nExit the last scope that was entered with\nPyMonitoring_EnterScope()\n.\n-\nint PY_MONITORING_IS_INSTRUMENTED_EVENT(uint8_t ev)\u00b6\nReturn true if the event corresponding to the event ID ev is a local event.\nAdded in version 3.13.\nDeprecated since version 3.14: This function is soft deprecated.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1701} +{"url": "https://docs.python.org/3/library/symtable.html", "title": " \u2014 Access to the compiler\u2019s symbol tables", "content": "symtable\n\u2014 Access to the compiler\u2019s symbol tables\u00b6\nSource code: Lib/symtable.py\nSymbol tables are generated by the compiler from AST just before bytecode is\ngenerated. The symbol table is responsible for calculating the scope of every\nidentifier in the code. symtable\nprovides an interface to examine these\ntables.\nGenerating Symbol Tables\u00b6\n- symtable.symtable(code, filename, compile_type)\u00b6\nReturn the toplevel\nSymbolTable\nfor the Python source code. 
filename is the name of the file containing the code. compile_type is like the mode argument tocompile()\n.\nExamining Symbol Tables\u00b6\n- class symtable.SymbolTableType\u00b6\nAn enumeration indicating the type of a\nSymbolTable\nobject.- MODULE = \"module\"\u00b6\nUsed for the symbol table of a module.\n- FUNCTION = \"function\"\u00b6\nUsed for the symbol table of a function.\n- CLASS = \"class\"\u00b6\nUsed for the symbol table of a class.\nThe following members refer to different flavors of annotation scopes.\n- ANNOTATION = \"annotation\"\u00b6\nUsed for annotations if\nfrom __future__ import annotations\nis active.\n- TYPE_PARAMETERS = \"type parameters\"\u00b6\nUsed for the symbol table of generic functions or generic classes.\n- TYPE_VARIABLE = \"type variable\"\u00b6\nUsed for the symbol table of the bound, the constraint tuple or the default value of a single type variable in the formal sense, i.e., a TypeVar, a TypeVarTuple or a ParamSpec object (the latter two do not support a bound or a constraint tuple).\nAdded in version 3.13.\n- class symtable.SymbolTable\u00b6\nA namespace table for a block. The constructor is not public.\n- get_type()\u00b6\nReturn the type of the symbol table. Possible values are members of the\nSymbolTableType\nenumeration.Changed in version 3.12: Added\n'annotation'\n,'TypeVar bound'\n,'type alias'\n, and'type parameter'\nas possible return values.Changed in version 3.13: Return values are members of the\nSymbolTableType\nenumeration.The exact values of the returned string may change in the future, and thus, it is recommended to use\nSymbolTableType\nmembers instead of hard-coded strings.\n- get_id()\u00b6\nReturn the table\u2019s identifier.\n- get_name()\u00b6\nReturn the table\u2019s name. This is the name of the class if the table is for a class, the name of the function if the table is for a function, or\n'top'\nif the table is global (get_type()\nreturns'module'\n). 
For type parameter scopes (which are used for generic classes, functions, and type aliases), it is the name of the underlying class, function, or type alias. For type alias scopes, it is the name of the type alias. ForTypeVar\nbound scopes, it is the name of theTypeVar\n.\n- get_lineno()\u00b6\nReturn the number of the first line in the block this table represents.\n- is_optimized()\u00b6\nReturn\nTrue\nif the locals in this table can be optimized.\n- is_nested()\u00b6\nReturn\nTrue\nif the block is a nested class or function.\n- has_children()\u00b6\nReturn\nTrue\nif the block has nested namespaces within it. These can be obtained withget_children()\n.\n- get_identifiers()\u00b6\nReturn a view object containing the names of symbols in the table. See the documentation of view objects.\n- get_children()\u00b6\nReturn a list of the nested symbol tables.\n- class symtable.Function\u00b6\nA namespace for a function or method. This class inherits from\nSymbolTable\n.- get_parameters()\u00b6\nReturn a tuple containing names of parameters to this function.\n- get_locals()\u00b6\nReturn a tuple containing names of locals in this function.\n- get_globals()\u00b6\nReturn a tuple containing names of globals in this function.\n- get_nonlocals()\u00b6\nReturn a tuple containing names of explicitly declared nonlocals in this function.\n- get_frees()\u00b6\nReturn a tuple containing names of free (closure) variables in this function.\n- class symtable.Class\u00b6\nA namespace of a class. This class inherits from\nSymbolTable\n.- get_methods()\u00b6\nReturn a tuple containing the names of method-like functions declared in the class.\nHere, the term \u2018method\u2019 designates any function defined in the class body via\ndef\norasync def\n.Functions defined in a deeper scope (e.g., in an inner class) are not picked up by\nget_methods()\n.For example:\n>>> import symtable\n>>> st = symtable.symtable('''\n... def outer(): pass\n...\n... class A:\n...     def f():\n...         def w(): pass\n...\n...     def g(self): pass\n...\n...     @classmethod\n...     async def h(cls): pass\n...\n...     global outer\n...     def outer(self): pass\n... ''', 'test', 'exec')\n>>> class_A = st.get_children()[2]\n>>> class_A.get_methods()\n('f', 'g', 'h')\nAlthough\nA().f()\nraisesTypeError\nat runtime,A.f\nis still considered as a method-like function.Deprecated since version 3.14, will be removed in version 3.16.\n- class symtable.Symbol\u00b6\nAn entry in a\nSymbolTable\ncorresponding to an identifier in the source. The constructor is not public.- get_name()\u00b6\nReturn the symbol\u2019s name.\n- is_referenced()\u00b6\nReturn\nTrue\nif the symbol is used in its block.\n- is_imported()\u00b6\nReturn\nTrue\nif the symbol is created from an import statement.\n- is_parameter()\u00b6\nReturn\nTrue\nif the symbol is a parameter.\n- is_type_parameter()\u00b6\nReturn\nTrue\nif the symbol is a type parameter.Added in version 3.14.\n- is_global()\u00b6\nReturn\nTrue\nif the symbol is global.\n- is_nonlocal()\u00b6\nReturn\nTrue\nif the symbol is nonlocal.\n- is_declared_global()\u00b6\nReturn\nTrue\nif the symbol is declared global with a global statement.\n- is_local()\u00b6\nReturn\nTrue\nif the symbol is local to its block.\n- is_annotated()\u00b6\nReturn\nTrue\nif the symbol is annotated.Added in version 3.6.\n- is_free()\u00b6\nReturn\nTrue\nif the symbol is referenced in its block, but not assigned to.\n- is_free_class()\u00b6\nReturn True if a class-scoped symbol is free from the perspective of a method.\nConsider the following example:\ndef f():\n    x = 1 # function-scoped\n    class C:\n        x = 2 # class-scoped\n        def method(self):\n            return x\nIn this example, the class-scoped symbol\nx\nis considered to be free from the perspective ofC.method\n, thereby allowing the latter to return 1 at runtime and not 2.Added in version 3.14.\n- is_assigned()\u00b6\nReturn\nTrue\nif the symbol is assigned to in its block.\n- is_comp_iter()\u00b6\nReturn\nTrue\nif the symbol is a comprehension iteration variable.Added in version 
3.14.\n- is_comp_cell()\u00b6\nReturn\nTrue\nif the symbol is a cell in an inlined comprehension.Added in version 3.14.\n- is_namespace()\u00b6\nReturn\nTrue\nif name binding introduces new namespace.If the name is used as the target of a function or class statement, this will be true.\nFor example:\n>>> table = symtable.symtable(\"def some_func(): pass\", \"string\", \"exec\") >>> table.lookup(\"some_func\").is_namespace() True\nNote that a single name can be bound to multiple objects. If the result is\nTrue\n, the name may also be bound to other objects, like an int or list, that does not introduce a new namespace.\n- get_namespaces()\u00b6\nReturn a list of namespaces bound to this name.\n- get_namespace()\u00b6\nReturn the namespace bound to this name. If more than one or no namespace is bound to this name, a\nValueError\nis raised.\nCommand-Line Usage\u00b6\nAdded in version 3.13.\nThe symtable\nmodule can be executed as a script from the command line.\npython -m symtable [infile...]\nSymbol tables are generated for the specified Python source files and dumped to stdout. If no input file is specified, the content is read from stdin.", "code_snippets": ["\n ", " ", " ", " ", "\n ", "\n ", " ", " ", " ", "\n ", "\n ", " ", "\n", " ", " ", " ", " ", "\n", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 1772} +{"url": "https://docs.python.org/3/about.html", "title": "About this documentation", "content": "About this documentation\u00b6\nPython\u2019s documentation is generated from reStructuredText sources using Sphinx, a documentation generator originally created for Python and now maintained as an independent project.\nDevelopment of the documentation and its toolchain is an entirely volunteer effort, just like Python itself. If you want to contribute, please take a look at the Dealing with Bugs page for information on how to do so. New volunteers are always welcome!\nMany thanks go to:\nFred L. 
Drake, Jr., the creator of the original Python documentation toolset and author of much of the content;\nthe Docutils project for creating reStructuredText and the Docutils suite;\nFredrik Lundh for his Alternative Python Reference project from which Sphinx got many good ideas.\nContributors to the Python documentation\u00b6\nMany people have contributed to the Python language, the Python standard library, and the Python documentation. See Misc/ACKS in the Python source distribution for a partial list of contributors.\nIt is only with the input and contributions of the Python community that Python has such wonderful documentation \u2013 Thank You!", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 282} +{"url": "https://docs.python.org/3/c-api/apiabiversion.html", "title": "API and ABI Versioning", "content": "API and ABI Versioning\u00b6\nBuild-time version constants\u00b6\nCPython exposes its version number in the following macros.\nNote that these correspond to the version code is built with.\nSee Py_Version\nfor the version used at run time.\nSee C API Stability for a discussion of API and ABI stability across versions.\n-\nPY_MAJOR_VERSION\u00b6\nThe\n3\nin3.4.1a2\n.\n-\nPY_MINOR_VERSION\u00b6\nThe\n4\nin3.4.1a2\n.\n-\nPY_MICRO_VERSION\u00b6\nThe\n1\nin3.4.1a2\n.\n-\nPY_RELEASE_LEVEL\u00b6\nThe\na\nin3.4.1a2\n. This can be0xA\nfor alpha,0xB\nfor beta,0xC\nfor release candidate or0xF\nfor final.\n-\nPY_RELEASE_SERIAL\u00b6\nThe\n2\nin3.4.1a2\n. Zero for final releases.\n-\nPY_VERSION_HEX\u00b6\nThe Python version number encoded in a single integer. See\nPy_PACK_FULL_VERSION()\nfor the encoding details.Use this for numeric comparisons, for example,\n#if PY_VERSION_HEX >= ...\n.\nThese macros are defined in Include/patchlevel.h.\nRun-time version\u00b6\n-\nconst unsigned long Py_Version\u00b6\n- Part of the Stable ABI since version 3.11.\nThe Python runtime version number encoded in a single constant integer. 
See\nPy_PACK_FULL_VERSION()\nfor the encoding details. This contains the Python version used at run time.Use this for numeric comparisons, for example,\nif (Py_Version >= ...)\n.Added in version 3.11.\nBit-packing macros\u00b6\n-\nuint32_t Py_PACK_FULL_VERSION(int major, int minor, int micro, int release_level, int release_serial)\u00b6\n- Part of the Stable ABI since version 3.14.\nReturn the given version, encoded as a single 32-bit integer with the following structure:\nArgument\nNo. of bits\nBit mask\nBit shift\nExample values\n3.4.1a2\n3.10.0\nmajor\n8\n0xFF000000\n24\n0x03\n0x03\nminor\n8\n0x00FF0000\n16\n0x04\n0x0A\nmicro\n8\n0x0000FF00\n8\n0x01\n0x00\nrelease_level\n4\n0x000000F0\n4\n0xA\n0xF\nrelease_serial\n4\n0x0000000F\n0\n0x2\n0x0\nFor example:\nVersion\nPy_PACK_FULL_VERSION\nargumentsEncoded version\n3.4.1a2\n(3, 4, 1, 0xA, 2)\n0x030401a2\n3.10.0\n(3, 10, 0, 0xF, 0)\n0x030a00f0\nOut-of range bits in the arguments are ignored. That is, the macro can be defined as:\n#ifndef Py_PACK_FULL_VERSION #define Py_PACK_FULL_VERSION(X, Y, Z, LEVEL, SERIAL) ( \\ (((X) & 0xff) << 24) | \\ (((Y) & 0xff) << 16) | \\ (((Z) & 0xff) << 8) | \\ (((LEVEL) & 0xf) << 4) | \\ (((SERIAL) & 0xf) << 0)) #endif\nPy_PACK_FULL_VERSION\nis primarily a macro, intended for use in#if\ndirectives, but it is also available as an exported function.Added in version 3.14.\n-\nuint32_t Py_PACK_VERSION(int major, int minor)\u00b6\n- Part of the Stable ABI since version 3.14.\nEquivalent to\nPy_PACK_FULL_VERSION(major, minor, 0, 0, 0)\n. 
The result does not correspond to any Python release, but is useful in numeric comparisons.Added in version 3.14.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 645} +{"url": "https://docs.python.org/3/library/asyncio-runner.html", "title": "Runners", "content": "Runners\u00b6\nSource code: Lib/asyncio/runners.py\nThis section outlines high-level asyncio primitives to run asyncio code.\nThey are built on top of an event loop with the aim to simplify async code usage for common wide-spread scenarios.\nRunning an asyncio Program\u00b6\n- asyncio.run(coro, *, debug=None, loop_factory=None)\u00b6\nExecute coro in an asyncio event loop and return the result.\nThe argument can be any awaitable object.\nThis function runs the awaitable, taking care of managing the asyncio event loop, finalizing asynchronous generators, and closing the executor.\nThis function cannot be called when another asyncio event loop is running in the same thread.\nIf debug is\nTrue\n, the event loop will be run in debug mode.False\ndisables debug mode explicitly.None\nis used to respect the global Debug Mode settings.If loop_factory is not\nNone\n, it is used to create a new event loop; otherwiseasyncio.new_event_loop()\nis used. The loop is closed at the end. This function should be used as a main entry point for asyncio programs, and should ideally only be called once. It is recommended to use loop_factory to configure the event loop instead of policies. Passingasyncio.EventLoop\nallows running asyncio without the policy system.The executor is given a timeout duration of 5 minutes to shutdown. 
If the executor hasn\u2019t finished within that duration, a warning is emitted and the executor is closed.\nExample:\nasync def main():\n    await asyncio.sleep(1)\n    print('hello')\n\nasyncio.run(main())\nAdded in version 3.7.\nChanged in version 3.9: Updated to use\nloop.shutdown_default_executor()\n.Changed in version 3.10: debug is\nNone\nby default to respect the global debug mode settings.Changed in version 3.12: Added loop_factory parameter.\nChanged in version 3.14: coro can be any awaitable object.\nNote\nThe\nasyncio\npolicy system is deprecated and will be removed in Python 3.16; from there on, an explicit loop_factory is needed to configure the event loop.\nRunner context manager\u00b6\n- class asyncio.Runner(*, debug=None, loop_factory=None)\u00b6\nA context manager that simplifies multiple async function calls in the same context.\nSometimes several top-level async functions should be called in the same event loop and\ncontextvars.Context\n.If debug is\nTrue\n, the event loop will be run in debug mode.False\ndisables debug mode explicitly.None\nis used to respect the global Debug Mode settings.loop_factory could be used for overriding the loop creation. It is the responsibility of the loop_factory to set the created loop as the current one. By default\nasyncio.new_event_loop()\nis used and set as current event loop withasyncio.set_event_loop()\nif loop_factory isNone\n.Basically,\nasyncio.run()\nexample can be rewritten with the runner usage:\nasync def main():\n    await asyncio.sleep(1)\n    print('hello')\n\nwith asyncio.Runner() as runner:\n    runner.run(main())\nAdded in version 3.11.\n- run(coro, *, context=None)\u00b6\nExecute coro in the embedded event loop.\nThe argument can be any awaitable object.\nIf the argument is a coroutine, it is wrapped in a Task.\nAn optional keyword-only context argument allows specifying a custom\ncontextvars.Context\nfor the code to run in. 
The runner\u2019s default context is used if context isNone\n.Returns the awaitable\u2019s result or raises an exception.\nThis function cannot be called when another asyncio event loop is running in the same thread.\nChanged in version 3.14: coro can be any awaitable object.\n- close()\u00b6\nClose the runner.\nFinalize asynchronous generators, shutdown default executor, close the event loop and release embedded\ncontextvars.Context\n.\n- get_loop()\u00b6\nReturn the event loop associated with the runner instance.\nNote\nRunner\nuses the lazy initialization strategy, its constructor doesn\u2019t initialize underlying low-level structures.Embedded loop and context are created at the\nwith\nbody entering or the first call ofrun()\norget_loop()\n.\nHandling Keyboard Interruption\u00b6\nAdded in version 3.11.\nWhen signal.SIGINT\nis raised by Ctrl-C, KeyboardInterrupt\nexception is raised in the main thread by default. However this doesn\u2019t work with\nasyncio\nbecause it can interrupt asyncio internals and can hang the program from\nexiting.\nTo mitigate this issue, asyncio\nhandles signal.SIGINT\nas follows:\nasyncio.Runner.run()\ninstalls a customsignal.SIGINT\nhandler before any user code is executed and removes it when exiting from the function.The\nRunner\ncreates the main task for the passed coroutine for its execution.When\nsignal.SIGINT\nis raised by Ctrl-C, the custom signal handler cancels the main task by callingasyncio.Task.cancel()\nwhich raisesasyncio.CancelledError\ninside the main task. This causes the Python stack to unwind,try/except\nandtry/finally\nblocks can be used for resource cleanup. 
After the main task is cancelled,asyncio.Runner.run()\nraisesKeyboardInterrupt\n.A user could write a tight loop which cannot be interrupted by\nasyncio.Task.cancel()\n, in which case the second following Ctrl-C immediately raises theKeyboardInterrupt\nwithout cancelling the main task.", "code_snippets": [" ", "\n ", " ", "\n ", "\n\n", "\n", " ", "\n ", " ", "\n ", "\n\n", " ", " ", " ", "\n ", "\n"], "language": "Python", "source": "python.org", "token_count": 1244} +{"url": "https://docs.python.org/3/library/pty.html", "title": " \u2014 Pseudo-terminal utilities", "content": "pty\n\u2014 Pseudo-terminal utilities\u00b6\nSource code: Lib/pty.py\nThe pty\nmodule defines operations for handling the pseudo-terminal\nconcept: starting another process and being able to write to and read from its\ncontrolling terminal programmatically.\nAvailability: Unix.\nPseudo-terminal handling is highly platform dependent. This code is mainly tested on Linux, FreeBSD, and macOS (it is supposed to work on other POSIX platforms but it\u2019s not been thoroughly tested).\nThe pty\nmodule defines the following functions:\n- pty.fork()\u00b6\nFork. Connect the child\u2019s controlling terminal to a pseudo-terminal. Return value is\n(pid, fd)\n. Note that the child gets pid 0, and the fd is invalid. The parent\u2019s return value is the pid of the child, and fd is a file descriptor connected to the child\u2019s controlling terminal (and also to the child\u2019s standard input and output).Warning\nOn macOS the use of this function is unsafe when mixed with using higher-level system APIs, and that includes using\nurllib.request\n.\n- pty.openpty()\u00b6\nOpen a new pseudo-terminal pair, using\nos.openpty()\nif possible, or emulation code for generic Unix systems. 
Return a pair of file descriptors(master, slave)\n, for the master and the slave end, respectively.\n- pty.spawn(argv[, master_read[, stdin_read]])\u00b6\nSpawn a process, and connect its controlling terminal with the current process\u2019s standard io. This is often used to baffle programs which insist on reading from the controlling terminal. It is expected that the process spawned behind the pty will eventually terminate, and when it does spawn will return.\nA loop copies STDIN of the current process to the child and data received from the child to STDOUT of the current process. It is not signaled to the child if STDIN of the current process closes down.\nThe functions master_read and stdin_read are passed a file descriptor which they should read from, and they should always return a byte string. In order to force spawn to return before the child process exits an empty byte array should be returned to signal end of file.\nThe default implementation for both functions will read and return up to 1024 bytes each time the function is called. The master_read callback is passed the pseudoterminal\u2019s master file descriptor to read output from the child process, and stdin_read is passed file descriptor 0, to read from the parent process\u2019s standard input.\nReturning an empty byte string from either callback is interpreted as an end-of-file (EOF) condition, and that callback will not be called after that. If stdin_read signals EOF the controlling terminal can no longer communicate with the parent process OR the child process. Unless the child process will quit without any input, spawn will then loop forever. 
If master_read signals EOF, the same behavior results (on Linux at least).\nReturn the exit status value from\nos.waitpid()\non the child process.\nos.waitstatus_to_exitcode()\ncan be used to convert the exit status into an exit code.\nRaises an auditing event\npty.spawn\nwith argument argv\n.\nChanged in version 3.4:\nspawn()\nnow returns the status value from os.waitpid()\non the child process.\nExample\u00b6\nThe following program acts like the Unix command script(1), using a pseudo-terminal to record all input and output of a terminal session in a \u201ctypescript\u201d.\nimport argparse\nimport os\nimport pty\nimport sys\nimport time\nparser = argparse.ArgumentParser()\nparser.add_argument('-a', dest='append', action='store_true')\nparser.add_argument('-p', dest='use_python', action='store_true')\nparser.add_argument('filename', nargs='?', default='typescript')\noptions = parser.parse_args()\nshell = sys.executable if options.use_python else os.environ.get('SHELL', 'sh')\nfilename = options.filename\nmode = 'ab' if options.append else 'wb'\nwith open(filename, mode) as script:\n    def read(fd):\n        data = os.read(fd, 1024)\n        script.write(data)\n        return data\n    print('Script started, file is', filename)\n    script.write(('Script started on %s\\n' % time.asctime()).encode())\n    pty.spawn(shell, read)\n    script.write(('Script done on %s\\n' % time.asctime()).encode())\n    print('Script done, file is', filename)", "code_snippets": ["\n", "\n", "\n", "\n", "\n\n", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n\n", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", " ", " ", "\n\n", " ", " ", " ", " ", "\n ", "\n ", " ", " ", " ", "\n ", "\n ", " ", "\n\n ", " ", "\n ", " ", " ", "\n\n ", " ", "\n\n ", " ", " ", "\n ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 1021} +{"url": "https://docs.python.org/3/library/cmdline.html", "title": "Modules command-line interface (CLI)", "content": "Modules command-line
interface (CLI)\u00b6\nThe following modules have a command-line interface.\nencodings.rot_13\nthis\nSee also the Python command-line interface.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 69} +{"url": "https://docs.python.org/3/tutorial/classes.html", "title": "Classes", "content": "9. Classes\u00b6\nClasses provide a means of bundling data and functionality together. Creating a new class creates a new type of object, allowing new instances of that type to be made. Each class instance can have attributes attached to it for maintaining its state. Class instances can also have methods (defined by its class) for modifying its state.\nCompared with other programming languages, Python\u2019s class mechanism adds classes with a minimum of new syntax and semantics. It is a mixture of the class mechanisms found in C++ and Modula-3. Python classes provide all the standard features of Object Oriented Programming: the class inheritance mechanism allows multiple base classes, a derived class can override any methods of its base class or classes, and a method can call the method of a base class with the same name. Objects can contain arbitrary amounts and kinds of data. As is true for modules, classes partake of the dynamic nature of Python: they are created at runtime, and can be modified further after creation.\nIn C++ terminology, normally class members (including the data members) are public (except see below Private Variables), and all member functions are virtual. As in Modula-3, there are no shorthands for referencing the object\u2019s members from its methods: the method function is declared with an explicit first argument representing the object, which is provided implicitly by the call. As in Smalltalk, classes themselves are objects. This provides semantics for importing and renaming. 
Unlike C++ and Modula-3, built-in types can be used as base classes for extension by the user. Also, like in C++, most built-in operators with special syntax (arithmetic operators, subscripting etc.) can be redefined for class instances.\n(Lacking universally accepted terminology to talk about classes, I will make occasional use of Smalltalk and C++ terms. I would use Modula-3 terms, since its object-oriented semantics are closer to those of Python than C++, but I expect that few readers have heard of it.)\n9.1. A Word About Names and Objects\u00b6\nObjects have individuality, and multiple names (in multiple scopes) can be bound to the same object. This is known as aliasing in other languages. This is usually not appreciated on a first glance at Python, and can be safely ignored when dealing with immutable basic types (numbers, strings, tuples). However, aliasing has a possibly surprising effect on the semantics of Python code involving mutable objects such as lists, dictionaries, and most other types. This is usually used to the benefit of the program, since aliases behave like pointers in some respects. For example, passing an object is cheap since only a pointer is passed by the implementation; and if a function modifies an object passed as an argument, the caller will see the change \u2014 this eliminates the need for two different argument passing mechanisms as in Pascal.\n9.2. Python Scopes and Namespaces\u00b6\nBefore introducing classes, I first have to tell you something about Python\u2019s scope rules. Class definitions play some neat tricks with namespaces, and you need to know how scopes and namespaces work to fully understand what\u2019s going on. Incidentally, knowledge about this subject is useful for any advanced Python programmer.\nLet\u2019s begin with some definitions.\nA namespace is a mapping from names to objects. 
Most namespaces are currently\nimplemented as Python dictionaries, but that\u2019s normally not noticeable in any\nway (except for performance), and it may change in the future. Examples of\nnamespaces are: the set of built-in names (containing functions such as abs()\n, and\nbuilt-in exception names); the global names in a module; and the local names in\na function invocation. In a sense the set of attributes of an object also form\na namespace. The important thing to know about namespaces is that there is\nabsolutely no relation between names in different namespaces; for instance, two\ndifferent modules may both define a function maximize\nwithout confusion \u2014\nusers of the modules must prefix it with the module name.\nBy the way, I use the word attribute for any name following a dot \u2014 for\nexample, in the expression z.real\n, real\nis an attribute of the object\nz\n. Strictly speaking, references to names in modules are attribute\nreferences: in the expression modname.funcname\n, modname\nis a module\nobject and funcname\nis an attribute of it. In this case there happens to be\na straightforward mapping between the module\u2019s attributes and the global names\ndefined in the module: they share the same namespace! [1]\nAttributes may be read-only or writable. In the latter case, assignment to\nattributes is possible. Module attributes are writable: you can write\nmodname.the_answer = 42\n. Writable attributes may also be deleted with the\ndel\nstatement. For example, del modname.the_answer\nwill remove\nthe attribute the_answer\nfrom the object named by modname\n.\nNamespaces are created at different moments and have different lifetimes. The\nnamespace containing the built-in names is created when the Python interpreter\nstarts up, and is never deleted. The global namespace for a module is created\nwhen the module definition is read in; normally, module namespaces also last\nuntil the interpreter quits. 
The statements executed by the top-level\ninvocation of the interpreter, either read from a script file or interactively,\nare considered part of a module called __main__\n, so they have their own\nglobal namespace. (The built-in names actually also live in a module; this is\ncalled builtins\n.)\nThe local namespace for a function is created when the function is called, and deleted when the function returns or raises an exception that is not handled within the function. (Actually, forgetting would be a better way to describe what actually happens.) Of course, recursive invocations each have their own local namespace.\nA scope is a textual region of a Python program where a namespace is directly accessible. \u201cDirectly accessible\u201d here means that an unqualified reference to a name attempts to find the name in the namespace.\nAlthough scopes are determined statically, they are used dynamically. At any time during execution, there are 3 or 4 nested scopes whose namespaces are directly accessible:\nthe innermost scope, which is searched first, contains the local names\nthe scopes of any enclosing functions, which are searched starting with the nearest enclosing scope, contain non-local, but also non-global names\nthe next-to-last scope contains the current module\u2019s global names\nthe outermost scope (searched last) is the namespace containing built-in names\nIf a name is declared global, then all references and assignments go directly to\nthe next-to-last scope containing the module\u2019s global names. To rebind variables\nfound outside of the innermost scope, the nonlocal\nstatement can be\nused; if not declared nonlocal, those variables are read-only (an attempt to\nwrite to such a variable will simply create a new local variable in the\ninnermost scope, leaving the identically named outer variable unchanged).\nUsually, the local scope references the local names of the (textually) current function. 
Outside functions, the local scope references the same namespace as the global scope: the module\u2019s namespace. Class definitions place yet another namespace in the local scope.\nIt is important to realize that scopes are determined textually: the global scope of a function defined in a module is that module\u2019s namespace, no matter from where or by what alias the function is called. On the other hand, the actual search for names is done dynamically, at run time \u2014 however, the language definition is evolving towards static name resolution, at \u201ccompile\u201d time, so don\u2019t rely on dynamic name resolution! (In fact, local variables are already determined statically.)\nA special quirk of Python is that \u2013 if no global\nor nonlocal\nstatement is in effect \u2013 assignments to names always go into the innermost scope.\nAssignments do not copy data \u2014 they just bind names to objects. The same is true\nfor deletions: the statement del x\nremoves the binding of x\nfrom the\nnamespace referenced by the local scope. In fact, all operations that introduce\nnew names use the local scope: in particular, import\nstatements and\nfunction definitions bind the module or function name in the local scope.\nThe global\nstatement can be used to indicate that particular\nvariables live in the global scope and should be rebound there; the\nnonlocal\nstatement indicates that particular variables live in\nan enclosing scope and should be rebound there.\n9.2.1. 
Scopes and Namespaces Example\u00b6\nThis is an example demonstrating how to reference the different scopes and\nnamespaces, and how global\nand nonlocal\naffect variable\nbinding:\ndef scope_test():\n    def do_local():\n        spam = \"local spam\"\n    def do_nonlocal():\n        nonlocal spam\n        spam = \"nonlocal spam\"\n    def do_global():\n        global spam\n        spam = \"global spam\"\n    spam = \"test spam\"\n    do_local()\n    print(\"After local assignment:\", spam)\n    do_nonlocal()\n    print(\"After nonlocal assignment:\", spam)\n    do_global()\n    print(\"After global assignment:\", spam)\nscope_test()\nprint(\"In global scope:\", spam)\nThe output of the example code is:\nAfter local assignment: test spam\nAfter nonlocal assignment: nonlocal spam\nAfter global assignment: nonlocal spam\nIn global scope: global spam\nNote how the local assignment (which is default) didn\u2019t change scope_test's\nbinding of spam. The nonlocal\nassignment changed scope_test's\nbinding of spam, and the global\nassignment changed the module-level\nbinding.\nYou can also see that there was no previous binding for spam before the\nglobal\nassignment.\n9.3. A First Look at Classes\u00b6\nClasses introduce a little bit of new syntax, three new object types, and some new semantics.\n9.3.1. Class Definition Syntax\u00b6\nThe simplest form of class definition looks like this:\nclass ClassName:\n    <statement-1>\n    .\n    .\n    .\n    <statement-N>\nClass definitions, like function definitions (def\nstatements) must be\nexecuted before they have any effect. (You could conceivably place a class\ndefinition in a branch of an if\nstatement, or inside a function.)\nIn practice, the statements inside a class definition will usually be function definitions, but other statements are allowed, and sometimes useful \u2014 we\u2019ll come back to this later. 
The function definitions inside a class normally have a peculiar form of argument list, dictated by the calling conventions for methods \u2014 again, this is explained later.\nWhen a class definition is entered, a new namespace is created, and used as the local scope \u2014 thus, all assignments to local variables go into this new namespace. In particular, function definitions bind the name of the new function here.\nWhen a class definition is left normally (via the end), a class object is\ncreated. This is basically a wrapper around the contents of the namespace\ncreated by the class definition; we\u2019ll learn more about class objects in the\nnext section. The original local scope (the one in effect just before the class\ndefinition was entered) is reinstated, and the class object is bound here to the\nclass name given in the class definition header (ClassName\nin the\nexample).\n9.3.2. Class Objects\u00b6\nClass objects support two kinds of operations: attribute references and instantiation.\nAttribute references use the standard syntax used for all attribute references\nin Python: obj.name\n. Valid attribute names are all the names that were in\nthe class\u2019s namespace when the class object was created. So, if the class\ndefinition looked like this:\nclass MyClass:\n    \"\"\"A simple example class\"\"\"\n    i = 12345\n    def f(self):\n        return 'hello world'\nthen MyClass.i\nand MyClass.f\nare valid attribute references, returning\nan integer and a function object, respectively. Class attributes can also be\nassigned to, so you can change the value of MyClass.i\nby assignment.\n__doc__\nis also a valid attribute, returning the docstring\nbelonging to the class: \"A simple example class\"\n.\nClass instantiation uses function notation. Just pretend that the class object is a parameterless function that returns a new instance of the class. 
For example (assuming the above class):\nx = MyClass()\ncreates a new instance of the class and assigns this object to the local\nvariable x\n.\nThe instantiation operation (\u201ccalling\u201d a class object) creates an empty object.\nMany classes like to create objects with instances customized to a specific\ninitial state. Therefore a class may define a special method named\n__init__()\n, like this:\ndef __init__(self):\n    self.data = []\nWhen a class defines an __init__()\nmethod, class instantiation\nautomatically invokes __init__()\nfor the newly created class instance. So\nin this example, a new, initialized instance can be obtained by:\nx = MyClass()\nOf course, the __init__()\nmethod may have arguments for greater\nflexibility. In that case, arguments given to the class instantiation operator\nare passed on to __init__()\n. For example,\n>>> class Complex:\n...     def __init__(self, realpart, imagpart):\n...         self.r = realpart\n...         self.i = imagpart\n...\n>>> x = Complex(3.0, -4.5)\n>>> x.r, x.i\n(3.0, -4.5)\n9.3.3. Instance Objects\u00b6\nNow what can we do with instance objects? The only operations understood by instance objects are attribute references. There are two kinds of valid attribute names: data attributes and methods.\nData attributes correspond to \u201cinstance variables\u201d in Smalltalk, and to \u201cdata\nmembers\u201d in C++. Data attributes need not be declared; like local variables,\nthey spring into existence when they are first assigned to. For example, if\nx\nis the instance of MyClass\ncreated above, the following piece of\ncode will print the value 16\n, without leaving a trace:\nx.counter = 1\nwhile x.counter < 10:\n    x.counter = x.counter * 2\nprint(x.counter)\ndel x.counter\nThe other kind of instance attribute reference is a method. A method is a function that \u201cbelongs to\u201d an object.\nValid method names of an instance object depend on its class. 
By definition,\nall attributes of a class that are function objects define corresponding\nmethods of its instances. So in our example, x.f\nis a valid method\nreference, since MyClass.f\nis a function, but x.i\nis not, since\nMyClass.i\nis not. But x.f\nis not the same thing as MyClass.f\n\u2014 it\nis a method object, not a function object.\n9.3.4. Method Objects\u00b6\nUsually, a method is called right after it is bound:\nx.f()\nIf x = MyClass()\n, as above, this will return the string 'hello world'\n.\nHowever, it is not necessary to call a method right away: x.f\nis a method\nobject, and can be stored away and called at a later time. For example:\nxf = x.f\nwhile True:\n    print(xf())\nwill continue to print hello world\nuntil the end of time.\nWhat exactly happens when a method is called? You may have noticed that\nx.f()\nwas called without an argument above, even though the function\ndefinition for f()\nspecified an argument. What happened to the argument?\nSurely Python raises an exception when a function that requires an argument is\ncalled without any \u2014 even if the argument isn\u2019t actually used\u2026\nActually, you may have guessed the answer: the special thing about methods is\nthat the instance object is passed as the first argument of the function. In our\nexample, the call x.f()\nis exactly equivalent to MyClass.f(x)\n. In\ngeneral, calling a method with a list of n arguments is equivalent to calling\nthe corresponding function with an argument list that is created by inserting\nthe method\u2019s instance object before the first argument.\nIn general, methods work as follows. When a non-data attribute of an instance is referenced, the instance\u2019s class is searched. If the name denotes a valid class attribute that is a function object, references to both the instance object and the function object are packed into a method object. 
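The pairing of instance and function into a method object, as described above, can be observed directly on the method object's attributes (a quick sketch reusing the MyClass example from earlier):

```python
class MyClass:
    i = 12345
    def f(self):
        return 'hello world'

x = MyClass()

# x.f is a method object packing the instance and the function together:
print(x.f.__self__ is x)          # True: the bound instance
print(x.f.__func__ is MyClass.f)  # True: the underlying function object

# Calling the method inserts the instance as the first argument:
print(x.f() == MyClass.f(x))      # True: both return 'hello world'
```

The `__self__` and `__func__` attributes used here are the same ones mentioned later in this chapter under Odds and Ends.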
When the method object is called with an argument list, a new argument list is constructed from the instance object and the argument list, and the function object is called with this new argument list.\n9.3.5. Class and Instance Variables\u00b6\nGenerally speaking, instance variables are for data unique to each instance and class variables are for attributes and methods shared by all instances of the class:\nclass Dog:\n    kind = 'canine'    # class variable shared by all instances\n    def __init__(self, name):\n        self.name = name    # instance variable unique to each instance\n>>> d = Dog('Fido')\n>>> e = Dog('Buddy')\n>>> d.kind # shared by all dogs\n'canine'\n>>> e.kind # shared by all dogs\n'canine'\n>>> d.name # unique to d\n'Fido'\n>>> e.name # unique to e\n'Buddy'\nAs discussed in A Word About Names and Objects, shared data can have possibly surprising effects involving mutable objects such as lists and dictionaries. For example, the tricks list in the following code should not be used as a class variable because just a single list would be shared by all Dog instances:\nclass Dog:\n    tricks = []    # mistaken use of a class variable\n    def __init__(self, name):\n        self.name = name\n    def add_trick(self, trick):\n        self.tricks.append(trick)\n>>> d = Dog('Fido')\n>>> e = Dog('Buddy')\n>>> d.add_trick('roll over')\n>>> e.add_trick('play dead')\n>>> d.tricks # unexpectedly shared by all dogs\n['roll over', 'play dead']\nCorrect design of the class should use an instance variable instead:\nclass Dog:\n    def __init__(self, name):\n        self.name = name\n        self.tricks = []    # creates a new empty list for each dog\n    def add_trick(self, trick):\n        self.tricks.append(trick)\n>>> d = Dog('Fido')\n>>> e = Dog('Buddy')\n>>> d.add_trick('roll over')\n>>> e.add_trick('play dead')\n>>> d.tricks\n['roll over']\n>>> e.tricks\n['play dead']\n9.4. Random Remarks\u00b6\nIf the same attribute name occurs in both an instance and in a class, then attribute lookup prioritizes the instance:\n>>> class Warehouse:\n... 
purpose = 'storage'\n... region = 'west'\n...\n>>> w1 = Warehouse()\n>>> print(w1.purpose, w1.region)\nstorage west\n>>> w2 = Warehouse()\n>>> w2.region = 'east'\n>>> print(w2.purpose, w2.region)\nstorage east\nData attributes may be referenced by methods as well as by ordinary users (\u201cclients\u201d) of an object. In other words, classes are not usable to implement pure abstract data types. In fact, nothing in Python makes it possible to enforce data hiding \u2014 it is all based upon convention. (On the other hand, the Python implementation, written in C, can completely hide implementation details and control access to an object if necessary; this can be used by extensions to Python written in C.)\nClients should use data attributes with care \u2014 clients may mess up invariants maintained by the methods by stamping on their data attributes. Note that clients may add data attributes of their own to an instance object without affecting the validity of the methods, as long as name conflicts are avoided \u2014 again, a naming convention can save a lot of headaches here.\nThere is no shorthand for referencing data attributes (or other methods!) from within methods. I find that this actually increases the readability of methods: there is no chance of confusing local variables and instance variables when glancing through a method.\nOften, the first argument of a method is called self\n. This is nothing more\nthan a convention: the name self\nhas absolutely no special meaning to\nPython. Note, however, that by not following the convention your code may be\nless readable to other Python programmers, and it is also conceivable that a\nclass browser program might be written that relies upon such a convention.\nAny function object that is a class attribute defines a method for instances of that class. It is not necessary that the function definition is textually enclosed in the class definition: assigning a function object to a local variable in the class is also ok. 
For example:\n# Function defined outside the class\ndef f1(self, x, y):\n    return min(x, x+y)\nclass C:\n    f = f1\n    def g(self):\n        return 'hello world'\n    h = g\nNow f\n, g\nand h\nare all attributes of class C\nthat refer to\nfunction objects, and consequently they are all methods of instances of\nC\n\u2014 h\nbeing exactly equivalent to g\n. Note that this practice\nusually only serves to confuse the reader of a program.\nMethods may call other methods by using method attributes of the self\nargument:\nclass Bag:\n    def __init__(self):\n        self.data = []\n    def add(self, x):\n        self.data.append(x)\n    def addtwice(self, x):\n        self.add(x)\n        self.add(x)\nMethods may reference global names in the same way as ordinary functions. The global scope associated with a method is the module containing its definition. (A class is never used as a global scope.) While one rarely encounters a good reason for using global data in a method, there are many legitimate uses of the global scope: for one thing, functions and modules imported into the global scope can be used by methods, as well as functions and classes defined in it. Usually, the class containing the method is itself defined in this global scope, and in the next section we\u2019ll find some good reasons why a method would want to reference its own class.\nEach value is an object, and therefore has a class (also called its type).\nIt is stored as object.__class__\n.\n9.5. Inheritance\u00b6\nOf course, a language feature would not be worthy of the name \u201cclass\u201d without supporting inheritance. The syntax for a derived class definition looks like this:\nclass DerivedClassName(BaseClassName):\n    <statement-1>\n    .\n    .\n    .\n    <statement-N>\nThe name BaseClassName\nmust be defined in a\nnamespace accessible from the scope containing the\nderived class definition. In place of a base class name, other arbitrary\nexpressions are also allowed. 
This can be useful, for example, when the base\nclass is defined in another module:\nclass DerivedClassName(modname.BaseClassName):\nExecution of a derived class definition proceeds the same as for a base class. When the class object is constructed, the base class is remembered. This is used for resolving attribute references: if a requested attribute is not found in the class, the search proceeds to look in the base class. This rule is applied recursively if the base class itself is derived from some other class.\nThere\u2019s nothing special about instantiation of derived classes:\nDerivedClassName()\ncreates a new instance of the class. Method references\nare resolved as follows: the corresponding class attribute is searched,\ndescending down the chain of base classes if necessary, and the method reference\nis valid if this yields a function object.\nDerived classes may override methods of their base classes. Because methods\nhave no special privileges when calling other methods of the same object, a\nmethod of a base class that calls another method defined in the same base class\nmay end up calling a method of a derived class that overrides it. (For C++\nprogrammers: all methods in Python are effectively virtual\n.)\nAn overriding method in a derived class may in fact want to extend rather than\nsimply replace the base class method of the same name. There is a simple way to\ncall the base class method directly: just call BaseClassName.methodname(self,\narguments)\n. This is occasionally useful to clients as well. (Note that this\nonly works if the base class is accessible as BaseClassName\nin the global\nscope.)\nPython has two built-in functions that work with inheritance:\nUse isinstance() to check an instance\u2019s type: isinstance(obj, int) will be True only if obj.__class__ is int or some class derived from int.\nUse issubclass() to check class inheritance: issubclass(bool, int) is True since bool is a subclass of int. 
However, issubclass(float, int) is False since float is not a subclass of int.\n9.5.1. Multiple Inheritance\u00b6\nPython supports a form of multiple inheritance as well. A class definition with multiple base classes looks like this:\nclass DerivedClassName(Base1, Base2, Base3):\n    <statement-1>\n    .\n    .\n    .\n    <statement-N>\nFor most purposes, in the simplest cases, you can think of the search for\nattributes inherited from a parent class as depth-first, left-to-right, not\nsearching twice in the same class where there is an overlap in the hierarchy.\nThus, if an attribute is not found in DerivedClassName\n, it is searched\nfor in Base1\n, then (recursively) in the base classes of Base1\n,\nand if it was not found there, it was searched for in Base2\n, and so on.\nIn fact, it is slightly more complex than that; the method resolution order\nchanges dynamically to support cooperative calls to super()\n. This\napproach is known in some other multiple-inheritance languages as\ncall-next-method and is more powerful than the super call found in\nsingle-inheritance languages.\nDynamic ordering is necessary because all cases of multiple inheritance exhibit\none or more diamond relationships (where at least one of the parent classes\ncan be accessed through multiple paths from the bottommost class). For example,\nall classes inherit from object\n, so any case of multiple inheritance\nprovides more than one path to reach object\n. To keep the base classes\nfrom being accessed more than once, the dynamic algorithm linearizes the search\norder in a way that preserves the left-to-right ordering specified in each\nclass, that calls each parent only once, and that is monotonic (meaning that a\nclass can be subclassed without affecting the precedence order of its parents).\nTaken together, these properties make it possible to design reliable and\nextensible classes with multiple inheritance. For more detail, see\nThe Python 2.3 Method Resolution Order.\n9.6. 
Private Variables\u00b6\n\u201cPrivate\u201d instance variables that cannot be accessed except from inside an\nobject don\u2019t exist in Python. However, there is a convention that is followed\nby most Python code: a name prefixed with an underscore (e.g. _spam\n) should\nbe treated as a non-public part of the API (whether it is a function, a method\nor a data member). It should be considered an implementation detail and subject\nto change without notice.\nSince there is a valid use-case for class-private members (namely to avoid name\nclashes of names with names defined by subclasses), there is limited support for\nsuch a mechanism, called name mangling. Any identifier of the form\n__spam\n(at least two leading underscores, at most one trailing underscore)\nis textually replaced with _classname__spam\n, where classname\nis the\ncurrent class name with leading underscore(s) stripped. This mangling is done\nwithout regard to the syntactic position of the identifier, as long as it\noccurs within the definition of a class.\nSee also\nThe private name mangling specifications for details and special cases.\nName mangling is helpful for letting subclasses override methods without breaking intraclass method calls. 
For example:\nclass Mapping:\n    def __init__(self, iterable):\n        self.items_list = []\n        self.__update(iterable)\n    def update(self, iterable):\n        for item in iterable:\n            self.items_list.append(item)\n    __update = update # private copy of original update() method\nclass MappingSubclass(Mapping):\n    def update(self, keys, values):\n        # provides new signature for update()\n        # but does not break __init__()\n        for item in zip(keys, values):\n            self.items_list.append(item)\nThe above example would work even if MappingSubclass\nwere to introduce a\n__update\nidentifier since it is replaced with _Mapping__update\nin the\nMapping\nclass and _MappingSubclass__update\nin the MappingSubclass\nclass respectively.\nNote that the mangling rules are designed mostly to avoid accidents; it still is possible to access or modify a variable that is considered private. This can even be useful in special circumstances, such as in the debugger.\nNotice that code passed to exec()\nor eval()\ndoes not consider the\nclassname of the invoking class to be the current class; this is similar to the\neffect of the global\nstatement, the effect of which is likewise restricted\nto code that is byte-compiled together. The same restriction applies to\ngetattr()\n, setattr()\nand delattr()\n, as well as when referencing\n__dict__\ndirectly.\n9.7. Odds and Ends\u00b6\nSometimes it is useful to have a data type similar to the Pascal \u201crecord\u201d or C\n\u201cstruct\u201d, bundling together a few named data items. The idiomatic approach\nis to use dataclasses\nfor this purpose:\nfrom dataclasses import dataclass\n@dataclass\nclass Employee:\n    name: str\n    dept: str\n    salary: int\n>>> john = Employee('john', 'computer lab', 1000)\n>>> john.dept\n'computer lab'\n>>> john.salary\n1000\nA piece of Python code that expects a particular abstract data type can often be\npassed a class that emulates the methods of that data type instead. 
For\ninstance, if you have a function that formats some data from a file object, you\ncan define a class with methods read()\nand\nreadline()\nthat get the\ndata from a string buffer instead, and pass it as an argument.\nInstance method objects have attributes, too:\nm.__self__\nis the instance\nobject with the method m()\n, and m.__func__\nis\nthe function object\ncorresponding to the method.\n9.8. Iterators\u00b6\nBy now you have probably noticed that most container objects can be looped over\nusing a for\nstatement:\nfor element in [1, 2, 3]:\n    print(element)\nfor element in (1, 2, 3):\n    print(element)\nfor key in {'one':1, 'two':2}:\n    print(key)\nfor char in \"123\":\n    print(char)\nfor line in open(\"myfile.txt\"):\n    print(line, end='')\nThis style of access is clear, concise, and convenient. The use of iterators\npervades and unifies Python. Behind the scenes, the for\nstatement\ncalls iter()\non the container object. The function returns an iterator\nobject that defines the method __next__()\nwhich accesses\nelements in the container one at a time. When there are no more elements,\n__next__()\nraises a StopIteration\nexception which tells the\nfor\nloop to terminate. You can call the __next__()\nmethod\nusing the next()\nbuilt-in function; this example shows how it all works:\n>>> s = 'abc'\n>>> it = iter(s)\n>>> it\n<str_iterator object at 0x...>\n>>> next(it)\n'a'\n>>> next(it)\n'b'\n>>> next(it)\n'c'\n>>> next(it)\nTraceback (most recent call last):\n  File \"<stdin>\", line 1, in <module>\n    next(it)\nStopIteration\nHaving seen the mechanics behind the iterator protocol, it is easy to add\niterator behavior to your classes. Define an __iter__()\nmethod which\nreturns an object with a __next__()\nmethod. 
If the class\ndefines __next__()\n, then __iter__()\ncan just return self\n:\nclass Reverse:\n\"\"\"Iterator for looping over a sequence backwards.\"\"\"\ndef __init__(self, data):\nself.data = data\nself.index = len(data)\ndef __iter__(self):\nreturn self\ndef __next__(self):\nif self.index == 0:\nraise StopIteration\nself.index = self.index - 1\nreturn self.data[self.index]\n>>> rev = Reverse('spam')\n>>> iter(rev)\n<__main__.Reverse object at 0x00A1DB50>\n>>> for char in rev:\n... print(char)\n...\nm\na\np\ns\n9.9. Generators\u00b6\nGenerators are a simple and powerful tool for creating iterators. They\nare written like regular functions but use the yield\nstatement\nwhenever they want to return data. Each time next()\nis called on it, the\ngenerator resumes where it left off (it remembers all the data values and which\nstatement was last executed). An example shows that generators can be trivially\neasy to create:\ndef reverse(data):\nfor index in range(len(data)-1, -1, -1):\nyield data[index]\n>>> for char in reverse('golf'):\n... print(char)\n...\nf\nl\no\ng\nAnything that can be done with generators can also be done with class-based\niterators as described in the previous section. What makes generators so\ncompact is that the __iter__()\nand __next__()\nmethods\nare created automatically.\nAnother key feature is that the local variables and execution state are\nautomatically saved between calls. This made the function easier to write and\nmuch more clear than an approach using instance variables like self.index\nand self.data\n.\nIn addition to automatic method creation and saving program state, when\ngenerators terminate, they automatically raise StopIteration\n. In\ncombination, these features make it easy to create iterators with no more effort\nthan writing a regular function.\n9.10. 
Generator Expressions\u00b6\nSome simple generators can be coded succinctly as expressions using a syntax similar to list comprehensions but with parentheses instead of square brackets. These expressions are designed for situations where the generator is used right away by an enclosing function. Generator expressions are more compact but less versatile than full generator definitions and tend to be more memory friendly than equivalent list comprehensions.\nExamples:\n>>> sum(i*i for i in range(10)) # sum of squares\n285\n>>> xvec = [10, 20, 30]\n>>> yvec = [7, 5, 3]\n>>> sum(x*y for x,y in zip(xvec, yvec)) # dot product\n260\n>>> unique_words = set(word for line in page for word in line.split())\n>>> valedictorian = max((student.gpa, student.name) for student in graduates)\n>>> data = 'golf'\n>>> list(data[i] for i in range(len(data)-1, -1, -1))\n['f', 'l', 'o', 'g']\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 8139} +{"url": "https://docs.python.org/3/whatsnew/3.14.html", "title": "What\u2019s new in Python 3.14", "content": "What\u2019s new in Python 3.14\u00b6\n- Editors:\nAdam Turner and Hugo van Kemenade\nThis article
explains the new features in Python 3.14, compared to 3.13. Python 3.14 was released on 7 October 2025. For full details, see the changelog.\nSee also\nPEP 745 \u2013 Python 3.14 release schedule\nSummary \u2013 Release highlights\u00b6\nPython 3.14 is the latest stable release of the Python programming language, with a mix of changes to the language, the implementation, and the standard library. The biggest changes include template string literals, deferred evaluation of annotations, and support for subinterpreters in the standard library.\nThe library changes include significantly improved capabilities for\nintrospection in asyncio,\nsupport for Zstandard via a new\ncompression.zstd\nmodule, syntax highlighting in the REPL,\nas well as the usual deprecations and removals,\nand improvements in user-friendliness and correctness.\nThis article doesn\u2019t attempt to provide a complete specification of all new features, but instead gives a convenient overview. For full details refer to the documentation, such as the Library Reference and Language Reference. To understand the complete implementation and design rationale for a change, refer to the PEP for a particular new feature; but note that PEPs usually are not kept up-to-date once a feature has been fully implemented. See Porting to Python 3.14 for guidance on upgrading from earlier versions of Python.\nInterpreter improvements:\nSignificant improvements in the standard library:\nSyntax highlighting in the default interactive shell, and color output in several standard library CLIs\nC API improvements:\nPlatform support:\nPEP 776: Emscripten is now an officially supported platform, at tier 3.\nRelease changes:\nNew features\u00b6\nPEP 649 & PEP 749: Deferred evaluation of annotations\u00b6\nThe annotations on functions, classes, and modules are no\nlonger evaluated eagerly. 
Instead, annotations are stored in special-purpose\nannotate functions and evaluated only when\nnecessary (except if from __future__ import annotations\nis used).\nThis change is designed to improve performance and usability of annotations in Python in most circumstances. The runtime cost for defining annotations is minimized, but it remains possible to introspect annotations at runtime. It is no longer necessary to enclose annotations in strings if they contain forward references.\nThe new annotationlib\nmodule provides tools for inspecting deferred\nannotations. Annotations may be evaluated in the VALUE\nformat (which evaluates annotations to runtime values, similar to the behavior in\nearlier Python versions), the FORWARDREF\nformat\n(which replaces undefined names with special markers), and the\nSTRING\nformat (which returns annotations as strings).\nThis example shows how these formats behave:\n>>> from annotationlib import get_annotations, Format\n>>> def func(arg: Undefined):\n... pass\n>>> get_annotations(func, format=Format.VALUE)\nTraceback (most recent call last):\n...\nNameError: name 'Undefined' is not defined\n>>> get_annotations(func, format=Format.FORWARDREF)\n{'arg': ForwardRef('Undefined', owner=)}\n>>> get_annotations(func, format=Format.STRING)\n{'arg': 'Undefined'}\nThe porting section contains guidance on changes that may be needed due to these changes, though in the majority of cases, code will continue working as-is.\n(Contributed by Jelle Zijlstra in PEP 749 and gh-119180; PEP 649 was written by Larry Hastings.)\nPEP 734: Multiple interpreters in the standard library\u00b6\nThe CPython runtime supports running multiple copies of Python in the same process simultaneously and has done so for over 20 years. Each of these separate copies is called an \u2018interpreter\u2019. 
However, the feature had been available only through the C-API.\nThat limitation is removed in Python 3.14,\nwith the new concurrent.interpreters\nmodule.\nThere are at least two notable reasons why using multiple interpreters has significant benefits:\nthey support a new (to Python), human-friendly concurrency model\ntrue multi-core parallelism\nFor some use cases, concurrency in software improves efficiency and\ncan simplify design, at a high level.\nAt the same time, implementing and maintaining all but the simplest concurrency\nis often a struggle for the human brain.\nThat especially applies to plain threads (for example, threading\n),\nwhere all memory is shared between all threads.\nWith multiple isolated interpreters, you can take advantage of a class of concurrency models, like Communicating Sequential Processes (CSP) or the actor model, that have found success in other programming languages, like Smalltalk, Erlang, Haskell, and Go. Think of multiple interpreters as threads but with opt-in sharing.\nRegarding multi-core parallelism: as of Python 3.12, interpreters are now sufficiently isolated from one another to be used in parallel (see PEP 684). This unlocks a variety of CPU-intensive use cases for Python that were limited by the GIL.\nUsing multiple interpreters is similar in many ways to\nmultiprocessing\n, in that they both provide isolated logical\n\u201cprocesses\u201d that can run in parallel, with no sharing by default.\nHowever, when using multiple interpreters, an application will use\nfewer system resources and will operate more efficiently (since it\nstays within the same process). Think of multiple interpreters as\nhaving the isolation of processes with the efficiency of threads.\nWhile the feature has been around for decades, multiple interpreters have not been used widely, due to low awareness and the lack of a standard library module. 
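As a minimal, hedged sketch of the new module (the create() and exec() names follow PEP 734; the import is guarded because concurrent.interpreters exists only on Python 3.14 and newer):

```python
import sys

try:
    from concurrent import interpreters  # new in Python 3.14 (PEP 734)
except ImportError:
    interpreters = None  # older Python: the module is not available

if interpreters is not None:
    interp = interpreters.create()  # spawn an isolated interpreter
    # The code string runs with its own modules and globals, not ours,
    # so nothing it defines leaks into the calling interpreter.
    interp.exec("greeting = 'hello from a subinterpreter'")
    assert "greeting" not in globals()
```

This is a sketch under the assumptions above, not a definitive usage pattern; richer workflows (sharing data, executor pools) go through concurrent.futures.InterpreterPoolExecutor or PyPI packages.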
Consequently, they currently have several notable limitations, which are expected to improve significantly now that the feature is going mainstream.\nCurrent limitations:\nstarting each interpreter has not been optimized yet\neach interpreter uses more memory than necessary (work continues on extensive internal sharing between interpreters)\nthere aren\u2019t many options yet for truly sharing objects or other data between interpreters (other than\nmemoryview\n)many third-party extension modules on PyPI are not yet compatible with multiple interpreters (all standard library extension modules are compatible)\nthe approach to writing applications that use multiple isolated interpreters is mostly unfamiliar to Python users, for now\nThe impact of these limitations will depend on future CPython improvements, how interpreters are used, and what the community solves through PyPI packages. Depending on the use case, the limitations may not have much impact, so try it out!\nFurthermore, future CPython releases will reduce or eliminate overhead and provide utilities that are less appropriate on PyPI. In the meantime, most of the limitations can also be addressed through extension modules, meaning PyPI packages can fill any gap for 3.14, and even back to 3.12 where interpreters were finally properly isolated and stopped sharing the GIL. Likewise, libraries on PyPI are expected to emerge for high-level abstractions on top of interpreters.\nRegarding extension modules, work is in progress to update some PyPI projects, as well as tools like Cython, pybind11, nanobind, and PyO3. The steps for isolating an extension module are found at Isolating Extension Modules. 
Isolating a module has a lot of overlap with what is required to support free-threading, so the ongoing work in the community in that area will help accelerate support for multiple interpreters.\nAlso added in 3.14: concurrent.futures.InterpreterPoolExecutor.\n(Contributed by Eric Snow in gh-134939.)\nSee also\nPEP 750: Template string literals\u00b6\nTemplate strings are a new mechanism for custom string processing.\nThey share the familiar syntax of f-strings but, unlike f-strings,\nreturn an object representing the static and interpolated parts of\nthe string, instead of a simple str\n.\nTo write a t-string, use a 't'\nprefix instead of an 'f'\n:\n>>> variety = 'Stilton'\n>>> template = t'Try some {variety} cheese!'\n>>> type(template)\n\nTemplate\nobjects provide access to the static\nand interpolated (in curly braces) parts of a string before they are combined.\nIterate over Template\ninstances to access their parts in order:\n>>> list(template)\n['Try some ', Interpolation('Stilton', 'variety', None, ''), ' cheese!']\nIt\u2019s easy to write (or call) code to process Template\ninstances.\nFor example, here\u2019s a function that renders static parts lowercase and\nInterpolation\ninstances uppercase:\nfrom string.templatelib import Interpolation\ndef lower_upper(template):\n\"\"\"Render static parts lowercase and interpolations uppercase.\"\"\"\nparts = []\nfor part in template:\nif isinstance(part, Interpolation):\nparts.append(str(part.value).upper())\nelse:\nparts.append(part.lower())\nreturn ''.join(parts)\nname = 'Wenslydale'\ntemplate = t'Mister {name}'\nassert lower_upper(template) == 'mister WENSLYDALE'\nBecause Template\ninstances distinguish between static strings and\ninterpolations at runtime, they can be useful for sanitising user input.\nWriting a html()\nfunction that escapes user input in HTML is an exercise\nleft to the reader!\nTemplate processing code can provide improved flexibility.\nFor instance, a more advanced html()\nfunction could 
accept\na dict\nof HTML attributes directly in the template:\nattributes = {'src': 'limburger.jpg', 'alt': 'lovely cheese'}\ntemplate = t''\nassert html(template) == '\"lovely'\nOf course, template processing code does not need to return a string-like result.\nAn even more advanced html()\ncould return a custom type representing\na DOM-like structure.\nWith t-strings in place, developers can write systems that sanitise SQL, make safe shell operations, improve logging, tackle modern ideas in web development (HTML, CSS, and so on), and implement lightweight custom business DSLs.\n(Contributed by Jim Baker, Guido van Rossum, Paul Everitt, Koudai Aono, Lysandros Nikolaou, Dave Peck, Adam Turner, Jelle Zijlstra, B\u00e9n\u00e9dikt Tran, and Pablo Galindo Salgado in gh-132661.)\nSee also\nPEP 768: Safe external debugger interface\u00b6\nPython 3.14 introduces a zero-overhead debugging interface that allows debuggers and profilers to safely attach to running Python processes without stopping or restarting them. This is a significant enhancement to Python\u2019s debugging capabilities, meaning that unsafe alternatives are no longer required.\nThe new interface provides safe execution points for attaching debugger code without modifying the interpreter\u2019s normal execution path or adding any overhead at runtime. Due to this, tools can now inspect and interact with Python applications in real-time, which is a crucial capability for high-availability systems and production environments.\nFor convenience, this interface is implemented in the sys.remote_exec()\nfunction. For example:\nimport os\nimport sys\nfrom tempfile import NamedTemporaryFile\nwith NamedTemporaryFile(mode='w', suffix='.py', delete=False) as f:\nscript_path = f.name\nf.write(f'import my_debugger; my_debugger.connect({os.getpid()})')\n# Execute in process with PID 1234\nprint('Behold!
An offering:')\nsys.remote_exec(1234, script_path)\nThis function allows sending Python code to be executed in a target process at the next safe execution point. However, tool authors can also implement the protocol directly as described in the PEP, which details the underlying mechanisms used to safely attach to running processes.\nThe debugging interface has been carefully designed with security in mind and includes several mechanisms to control access:\nA\nPYTHON_DISABLE_REMOTE_DEBUG\nenvironment variable.\nA\n-X disable-remote-debug\ncommand-line option.\nA\n--without-remote-debug\nconfigure flag to completely disable the feature at build time.\n(Contributed by Pablo Galindo Salgado, Matt Wozniski, and Ivona Stojanovic in gh-131591.)\nSee also\nA new type of interpreter\u00b6\nA new type of interpreter has been added to CPython.\nIt uses tail calls between small C functions that implement individual\nPython opcodes, rather than one large C case\nstatement.\nFor certain newer compilers, this interpreter provides\nsignificantly better performance. Preliminary benchmarks suggest a geometric\nmean of 3-5% faster on the standard pyperformance\nbenchmark suite,\ndepending on platform and architecture.\nThe baseline is Python 3.14 built with Clang 19, without this new interpreter.\nThis interpreter currently only works with Clang 19 and newer on x86-64 and AArch64 architectures. However, a future release of GCC is expected to support this as well.\nThis feature is opt-in for now. Enabling profile-guided optimization is highly\nrecommended when using the new interpreter as it is the only configuration\nthat has been tested and validated for improved performance.\nFor further information, see --with-tail-call-interp\n.\nNote\nThis is not to be confused with tail call optimization of Python functions, which is currently not implemented in CPython.\nThis new interpreter type is an internal implementation detail of the CPython interpreter.
It doesn\u2019t change the visible behavior of Python programs at all. It can improve their performance, but doesn\u2019t change anything else.\n(Contributed by Ken Jin in gh-128563, with ideas on how to implement this in CPython by Mark Shannon, Garrett Gu, Haoran Xu, and Josh Haberman.)\nFree-threaded mode improvements\u00b6\nCPython\u2019s free-threaded mode (PEP 703), initially added in 3.13, has been significantly improved in Python 3.14. The implementation described in PEP 703 has been finished, including C API changes, and temporary workarounds in the interpreter were replaced with more permanent solutions. The specializing adaptive interpreter (PEP 659) is now enabled in free-threaded mode, which along with many other optimizations greatly improves its performance. The performance penalty on single-threaded code in free-threaded mode is now roughly 5-10%, depending on the platform and C compiler used.\nFrom Python 3.14, when compiling extension modules for the free-threaded build of\nCPython on Windows, the preprocessor variable Py_GIL_DISABLED\nnow needs to\nbe specified by the build backend, as it will no longer be determined\nautomatically by the C compiler. For a running interpreter, the setting that\nwas used at compile time can be found using sysconfig.get_config_var()\n.\nThe new -X context_aware_warnings\nflag controls if\nconcurrent safe warnings control\nis enabled. The flag defaults to true for the free-threaded build\nand false for the GIL-enabled build.\nA new thread_inherit_context\nflag has been added,\nwhich if enabled means that threads created with threading.Thread\nstart with a copy of the Context()\nof the caller of\nstart()\n. Most significantly, this makes the warning\nfiltering context established by catch_warnings\nbe\n\u201cinherited\u201d by threads (or asyncio tasks) started within that context. 
It also\naffects other modules that use context variables, such as the decimal\ncontext manager.\nThis flag defaults to true for the free-threaded build and false for\nthe GIL-enabled build.\n(Contributed by Sam Gross, Matt Page, Neil Schemenauer, Thomas Wouters, Donghee Na, Kirill Podoprigora, Ken Jin, Itamar Oren, Brett Simmers, Dino Viehland, Nathan Goldbaum, Ralf Gommers, Lysandros Nikolaou, Kumar Aditya, Edgar Margffoy, and many others. Some of these contributors are employed by Meta, which has continued to provide significant engineering resources to support this project.)\nImproved error messages\u00b6\nThe interpreter now provides helpful suggestions when it detects typos in Python keywords. When a word that closely resembles a Python keyword is encountered, the interpreter will suggest the correct keyword in the error message. This feature helps programmers quickly identify and fix common typing mistakes. For example:\n>>> whille True:\n... pass\nTraceback (most recent call last):\nFile \"\", line 1\nwhille True:\n^^^^^^\nSyntaxError: invalid syntax. Did you mean 'while'?\nWhile the feature focuses on the most common cases, some variations of misspellings may still result in regular syntax errors. (Contributed by Pablo Galindo in gh-132449.)\nelif\nstatements that follow an\nelse\nblock now have a specific error message. (Contributed by Steele Farnsworth in gh-129902.)\n>>> if who == \"me\":\n... print(\"It's me!\")\n... else:\n... print(\"It's not me!\")\n... elif who is None:\n... print(\"Who is it?\")\nFile \"\", line 5\nelif who is None:\n^^^^\nSyntaxError: 'elif' block follows an 'else' block\nIf a statement is passed to a conditional expression after\nelse\n, or one of\npass\n,\nbreak\n, or\ncontinue\nis passed before\nif\n, then the error message highlights where the\nexpression\nis required.
(Contributed by Sergey Miryanov in gh-129515.)\n>>> x = 1 if True else pass\nTraceback (most recent call last):\nFile \"\", line 1\nx = 1 if True else pass\n^^^^\nSyntaxError: expected expression after 'else', but statement is given\n>>> x = continue if True else break\nTraceback (most recent call last):\nFile \"\", line 1\nx = continue if True else break\n^^^^^^^^\nSyntaxError: expected expression before 'if', but statement is given\nWhen incorrectly closed strings are detected, the error message suggests that the quote may be intended to be part of the string. (Contributed by Pablo Galindo in gh-88535.)\n>>> \"The interesting object \"The important object\" is very important\"\nTraceback (most recent call last):\nSyntaxError: invalid syntax. Is this intended to be part of the string?\nWhen strings have incompatible prefixes, the error now shows which prefixes are incompatible. (Contributed by Nikita Sobolev in gh-133197.)\n>>> ub'abc'\nFile \"\", line 1\nub'abc'\n^^\nSyntaxError: 'u' and 'b' prefixes are incompatible\nImproved error messages when using\nas\nwith incompatible targets in:\nImports:\nimport ... as ...\nFrom imports:\nfrom ... import ... as ...\nExcept handlers:\nexcept ... as ...\nPattern-match cases:\ncase ... as ...\n(Contributed by Nikita Sobolev in gh-123539, gh-123562, and gh-123440.)\nImproved error message when trying to add an instance of an unhashable type to a\ndict\nor\nset\n.
(Contributed by CF Bolz-Tereick and Victor Stinner in gh-132828.)\n>>> s = set()\n>>> s.add({'pages': 12, 'grade': 'A'})\nTraceback (most recent call last):\nFile \"\", line 1, in \ns.add({'pages': 12, 'grade': 'A'})\n~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nTypeError: cannot use 'dict' as a set element (unhashable type: 'dict')\n>>> d = {}\n>>> l = [1, 2, 3]\n>>> d[l] = 12\nTraceback (most recent call last):\nFile \"\", line 1, in \nd[l] = 12\n~^^^\nTypeError: cannot use 'list' as a dict key (unhashable type: 'list')\nImproved error message when an object supporting the synchronous context manager protocol is entered using\nasync with\ninstead of\nwith\n, and vice versa for the asynchronous context manager protocol. (Contributed by B\u00e9n\u00e9dikt Tran in gh-128398.)\nPEP 784: Zstandard support in the standard library\u00b6\nThe new compression\npackage contains modules compression.lzma\n,\ncompression.bz2\n, compression.gzip\nand compression.zlib\nwhich re-export the lzma\n, bz2\n, gzip\nand zlib\nmodules respectively. The new import names under compression\nare the\npreferred names for importing these compression modules from Python 3.14. However,\nthe existing module names have not been deprecated. Any deprecation or removal\nof the existing compression modules will occur no sooner than five years after\nthe release of 3.14.\nThe new compression.zstd\nmodule provides compression and decompression\nAPIs for the Zstandard format via bindings to Meta\u2019s zstd library. Zstandard is a widely adopted, highly\nefficient, and fast compression format.
In addition to the APIs introduced in\ncompression.zstd\n, support for reading and writing Zstandard compressed\narchives has been added to the tarfile\n, zipfile\n, and\nshutil\nmodules.\nHere\u2019s an example of using the new module to compress some data:\nfrom compression import zstd\nimport math\ndata = str(math.pi).encode() * 20\ncompressed = zstd.compress(data)\nratio = len(compressed) / len(data)\nprint(f\"Achieved compression ratio of {ratio}\")\nAs can be seen, the API is similar to the APIs of the lzma\nand\nbz2\nmodules.\n(Contributed by Emma Harper Smith, Adam Turner, Gregory P. Smith, Tomas Roun, Victor Stinner, and Rogdham in gh-132983.)\nSee also\nAsyncio introspection capabilities\u00b6\nAdded a new command-line interface to inspect running Python processes\nusing asynchronous tasks, available via python -m asyncio ps PID\nor python -m asyncio pstree PID\n.\nThe ps\nsubcommand inspects the given process ID (PID) and displays\ninformation about currently running asyncio tasks.\nIt outputs a task table: a flat listing of all tasks, their names,\ntheir coroutine stacks, and which tasks are awaiting them.\nThe pstree\nsubcommand fetches the same information, but instead renders a\nvisual async call tree, showing coroutine relationships in a hierarchical format.\nThis command is particularly useful for debugging long-running or stuck\nasynchronous programs.\nIt can help developers quickly identify where a program is blocked,\nwhat tasks are pending, and how coroutines are chained together.\nFor example given this code:\nimport asyncio\nasync def play_track(track):\nawait asyncio.sleep(5)\nprint(f'\ud83c\udfb5 Finished: {track}')\nasync def play_album(name, tracks):\nasync with asyncio.TaskGroup() as tg:\nfor track in tracks:\ntg.create_task(play_track(track), name=track)\nasync def main():\nasync with asyncio.TaskGroup() as tg:\ntg.create_task(\nplay_album('Sundowning', ['TNDNBTG', 'Levitate']),\nname='Sundowning')\ntg.create_task(\nplay_album('TMBTE', 
['DYWTYLM', 'Aqua Regia']),\nname='TMBTE')\nif __name__ == '__main__':\nasyncio.run(main())\nExecuting the new tool on the running process will yield a table like this:\npython -m asyncio ps 12345\ntid task id task name coroutine stack awaiter chain awaiter name awaiter id\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n1935500 0x7fc930c18050 Task-1 TaskGroup._aexit -> TaskGroup.__aexit__ -> main 0x0\n1935500 0x7fc930c18230 Sundowning TaskGroup._aexit -> TaskGroup.__aexit__ -> album TaskGroup._aexit -> TaskGroup.__aexit__ -> main Task-1 0x7fc930c18050\n1935500 0x7fc93173fa50 TMBTE TaskGroup._aexit -> TaskGroup.__aexit__ -> album TaskGroup._aexit -> TaskGroup.__aexit__ -> main Task-1 0x7fc930c18050\n1935500 0x7fc93173fdf0 TNDNBTG sleep -> play TaskGroup._aexit -> TaskGroup.__aexit__ -> album Sundowning 0x7fc930c18230\n1935500 0x7fc930d32510 Levitate sleep -> play TaskGroup._aexit -> TaskGroup.__aexit__ -> album Sundowning 0x7fc930c18230\n1935500 0x7fc930d32890 DYWTYLM sleep -> play TaskGroup._aexit -> TaskGroup.__aexit__ -> album TMBTE 0x7fc93173fa50\n1935500 0x7fc93161ec30 Aqua Regia sleep -> play TaskGroup._aexit -> TaskGroup.__aexit__ -> album TMBTE 0x7fc93173fa50\nor a tree like this:\npython -m asyncio pstree 12345\n\u2514\u2500\u2500 (T) Task-1\n\u2514\u2500\u2500 main example.py:13\n\u2514\u2500\u2500 TaskGroup.__aexit__ Lib/asyncio/taskgroups.py:72\n\u2514\u2500\u2500 TaskGroup._aexit Lib/asyncio/taskgroups.py:121\n\u251c\u2500\u2500 (T) Sundowning\n\u2502 \u2514\u2500\u2500 album example.py:8\n\u2502 \u2514\u2500\u2500 TaskGroup.__aexit__ Lib/asyncio/taskgroups.py:72\n\u2502 \u2514\u2500\u2500 TaskGroup._aexit Lib/asyncio/taskgroups.py:121\n\u2502 \u251c\u2500\u2500 (T) TNDNBTG\n\u2502 \u2502 \u2514\u2500\u2500 play example.py:4\n\u2502 \u2502 \u2514\u2500\u2500 sleep Lib/asyncio/tasks.py:702\n\u2502 \u2514\u2500\u2500 (T) 
Levitate\n\u2502 \u2514\u2500\u2500 play example.py:4\n\u2502 \u2514\u2500\u2500 sleep Lib/asyncio/tasks.py:702\n\u2514\u2500\u2500 (T) TMBTE\n\u2514\u2500\u2500 album example.py:8\n\u2514\u2500\u2500 TaskGroup.__aexit__ Lib/asyncio/taskgroups.py:72\n\u2514\u2500\u2500 TaskGroup._aexit Lib/asyncio/taskgroups.py:121\n\u251c\u2500\u2500 (T) DYWTYLM\n\u2502 \u2514\u2500\u2500 play example.py:4\n\u2502 \u2514\u2500\u2500 sleep Lib/asyncio/tasks.py:702\n\u2514\u2500\u2500 (T) Aqua Regia\n\u2514\u2500\u2500 play example.py:4\n\u2514\u2500\u2500 sleep Lib/asyncio/tasks.py:702\nIf a cycle is detected in the async await graph (which could indicate a programming issue), the tool raises an error and lists the cycle paths that prevent tree construction:\npython -m asyncio pstree 12345\nERROR: await-graph contains cycles - cannot print a tree!\ncycle: Task-2 \u2192 Task-3 \u2192 Task-2\n(Contributed by Pablo Galindo, \u0141ukasz Langa, Yury Selivanov, and Marta Gomez Macias in gh-91048.)\nConcurrent safe warnings control\u00b6\nThe warnings.catch_warnings\ncontext manager will now optionally\nuse a context variable for warning filters. This is enabled by setting\nthe context_aware_warnings\nflag, either with the -X\ncommand-line option or an environment variable. This gives predictable\nwarnings control when using catch_warnings\ncombined with\nmultiple threads or asynchronous tasks. The flag defaults to true for the\nfree-threaded build and false for the GIL-enabled build.\n(Contributed by Neil Schemenauer and Kumar Aditya in gh-130010.)\nOther language changes\u00b6\nAll Windows code pages are now supported as \u2018cpXXX\u2019 codecs on Windows. (Contributed by Serhiy Storchaka in gh-123803.)\nImplement mixed-mode arithmetic rules combining real and complex numbers as specified by the C standard since C99. (Contributed by Sergey B Kirpichev in gh-69639.)\nMore syntax errors are now detected regardless of optimisation and the\n-O\ncommand-line option. 
This includes writes to\n__debug__\n, incorrect use of\nawait\n, and asynchronous comprehensions outside asynchronous functions. For example,\npython -O -c 'assert (__debug__ := 1)'\nor\npython -O -c 'assert await 1'\nnow produce\nSyntaxError\ns. (Contributed by Irit Katriel and Jelle Zijlstra in gh-122245 & gh-121637.)\nWhen subclassing a pure C type, the C slots for the new type are no longer replaced with a wrapped version on class creation if they are not explicitly overridden in the subclass. (Contributed by Tomasz Pytel in gh-132284.)\nBuilt-ins\u00b6\nThe\nbytes.fromhex()\nand\nbytearray.fromhex()\nmethods now accept ASCII\nbytes\nand bytes-like objects. (Contributed by Daniel Pope in gh-129349.)\nAdd class methods\nfloat.from_number()\nand\ncomplex.from_number()\nto convert a number to\nfloat\nor\ncomplex\ntype correspondingly. They raise a\nTypeError\nif the argument is not a real number. (Contributed by Serhiy Storchaka in gh-84978.)\nSupport underscore and comma as thousands separators in the fractional part for floating-point presentation types of the new-style string formatting (with\nformat()\nor f-strings). (Contributed by Sergey B Kirpichev in gh-87790.)\nThe\nint()\nfunction no longer delegates to\n__trunc__()\n. Classes that want to support conversion to\nint()\nmust implement either\n__int__()\nor\n__index__()\n. (Contributed by Mark Dickinson in gh-119743.)\nThe\nmap()\nfunction now has an optional keyword-only strict flag like\nzip()\nto check that all the iterables are of equal length. (Contributed by Wannes Boeykens in gh-119793.)\nThe\nmemoryview\ntype now supports subscription, making it a generic type. (Contributed by Brian Schubert in gh-126012.)\nUsing\nNotImplemented\nin a boolean context will now raise a\nTypeError\n. This has raised a\nDeprecationWarning\nsince Python 3.9. (Contributed by Jelle Zijlstra in gh-118767.)\nThree-argument\npow()\nnow tries calling\n__rpow__()\nif necessary. Previously it was only called in two-argument\npow()\nand the binary power operator.
(Contributed by Serhiy Storchaka in gh-130104.)

super objects are now copyable and pickleable. (Contributed by Serhiy Storchaka in gh-125767.)

Command line and environment

The import time flag can now track modules that are already loaded ('cached'), via the new -X importtime=2. When such a module is imported, the self and cumulative times are replaced by the string cached. Values above 2 for -X importtime are now reserved for future use. (Contributed by Noah Kim and Adam Turner in gh-118655.)

The command-line option -c now automatically dedents its code argument before execution. The auto-dedentation behavior mirrors textwrap.dedent(). (Contributed by Jon Crall and Steven Sun in gh-103998.)

-J is no longer a reserved flag for Jython, and now has no special meaning. (Contributed by Adam Turner in gh-133336.)

PEP 758: Allow except and except* expressions without brackets

The except and except* expressions now allow brackets to be omitted when there are multiple exception types and the as clause is not used. For example:

try:
    connect_to_server()
except TimeoutError, ConnectionRefusedError:
    print('The network has ceased to be!')

(Contributed by Pablo Galindo and Brett Cannon in PEP 758 and gh-131831.)

PEP 765: Control flow in finally blocks

The compiler now emits a SyntaxWarning when a return, break, or continue statement has the effect of leaving a finally block. This change is specified in PEP 765. In situations where this change is inconvenient (such as those where the warnings are redundant due to code linting), the warning filter can be used to turn off all syntax warnings by adding ignore::SyntaxWarning as a filter.
This can be specified in combination with a filter that converts other warnings to errors (for example, passing -Werror -Wignore::SyntaxWarning as CLI options, or setting PYTHONWARNINGS=error,ignore::SyntaxWarning). Note that applying such a filter at runtime using the warnings module will only suppress the warning in code that is compiled after the filter is adjusted. Code that is compiled prior to the filter adjustment (for example, when a module is imported) will still emit the syntax warning. (Contributed by Irit Katriel in gh-130080.)

Incremental garbage collection

The cycle garbage collector is now incremental. This means that maximum pause times are reduced by an order of magnitude or more for larger heaps. There are now only two generations: young and old. When gc.collect() is not called directly, the GC is invoked a little less frequently. When invoked, it collects the young generation and an increment of the old generation, instead of collecting one or more generations.

The behavior of gc.collect() changes slightly:

gc.collect(1): Performs an increment of garbage collection, rather than collecting generation 1.

Other calls to gc.collect() are unchanged.

(Contributed by Mark Shannon in gh-108362.)

Default interactive shell

The default interactive shell now highlights Python syntax. The feature is enabled by default, unless PYTHON_BASIC_REPL or any other environment variable that disables colour is set. See Controlling color for details.

The default color theme for syntax highlighting strives for good contrast and exclusively uses the 4-bit VGA standard ANSI color codes for maximum compatibility. The theme can be customized using an experimental API, _colorize.set_theme(). This can be called interactively or in the PYTHONSTARTUP script.
Note that this function has no stability guarantees, and may change or be removed. (Contributed by Łukasz Langa in gh-131507.)

The default interactive shell now supports import auto-completion. This means that typing import co and pressing Tab will suggest modules starting with co. Similarly, typing from concurrent import i will suggest submodules of concurrent starting with i. Note that autocompletion of module attributes is not currently supported. (Contributed by Tomas Roun in gh-69605.)

New modules

annotationlib: For introspecting annotations. See PEP 749 for more details. (Contributed by Jelle Zijlstra in gh-119180.)

compression (including compression.zstd): A package for compression-related modules, including a new module to support the Zstandard compression format. See PEP 784 for more details. (Contributed by Emma Harper Smith, Adam Turner, Gregory P. Smith, Tomas Roun, Victor Stinner, and Rogdham in gh-132983.)

concurrent.interpreters: Support for multiple interpreters in the standard library. See PEP 734 for more details. (Contributed by Eric Snow in gh-134939.)

string.templatelib: Support for template string literals (t-strings). See PEP 750 for more details. (Contributed by Jim Baker, Guido van Rossum, Paul Everitt, Koudai Aono, Lysandros Nikolaou, Dave Peck, Adam Turner, Jelle Zijlstra, Bénédikt Tran, and Pablo Galindo Salgado in gh-132661.)

Improved modules

argparse

The default value of the program name for argparse.ArgumentParser now reflects the way the Python interpreter was instructed to find the __main__ module code. (Contributed by Serhiy Storchaka and Alyssa Coghlan in gh-66436.)

Introduced the optional suggest_on_error parameter to argparse.ArgumentParser, enabling suggestions for argument choices and subparser names if mistyped by the user.
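A short sketch of the kind of parser this applies to. The suggest_on_error parameter itself is 3.14-only, so this portable example only declares the choices it would suggest from; on 3.14+ you could additionally pass suggest_on_error=True to ArgumentParser.

```python
import argparse

parser = argparse.ArgumentParser(prog="demo")
# On 3.14+: argparse.ArgumentParser(prog="demo", suggest_on_error=True)
# would print "did you mean" hints for mistyped choices like "--format jsn".
parser.add_argument("--format", choices=["json", "csv", "table"])

args = parser.parse_args(["--format", "json"])
print(args.format)  # json
```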
(Contributed by Savannah Ostrowski in gh-124456.)

Enable color for help text, which can be disabled with the optional color parameter to argparse.ArgumentParser. This can also be controlled by environment variables. (Contributed by Hugo van Kemenade in gh-130645.)

ast

Add compare(), a function for comparing two ASTs. (Contributed by Batuhan Taskaya and Jeremy Hylton in gh-60191.)

Add support for copy.replace() for AST nodes. (Contributed by Bénédikt Tran in gh-121141.)

Docstrings are now removed from an optimized AST in optimization level 2. (Contributed by Irit Katriel in gh-123958.)

The repr() output for AST nodes now includes more information. (Contributed by Tomas Roun in gh-116022.)

When called with an AST as input, the parse() function now always verifies that the root node type is appropriate. (Contributed by Irit Katriel in gh-130139.)

Add new options to the command-line interface: --feature-version, --optimize, and --show-empty. (Contributed by Semyon Moroz in gh-133367.)

asyncio

The function and methods named create_task() now take an arbitrary list of keyword arguments. All keyword arguments are passed to the Task constructor or the custom task factory. (See set_task_factory() for details.) The name and context keyword arguments are no longer special; the name should now be set using the name keyword argument of the factory, and context may be None. This affects the following function and methods: asyncio.create_task(), asyncio.loop.create_task(), asyncio.TaskGroup.create_task(). (Contributed by Thomas Grainger in gh-128307.)

There are two new utility functions for introspecting and printing a program's call graph: capture_call_graph() and print_call_graph(). See Asyncio introspection capabilities for more details.
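The create_task() change above concerns how keyword arguments such as name are forwarded; the call pattern itself is unchanged and works on earlier versions too, as in this minimal sketch:

```python
import asyncio

async def work():
    return 42

async def main():
    # name is now just one of the keyword arguments forwarded to the
    # Task constructor (or to a custom task factory on 3.14+).
    task = asyncio.create_task(work(), name="worker-1")
    result = await task
    return task.get_name(), result

print(asyncio.run(main()))  # ('worker-1', 42)
```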
(Contributed by Yury Selivanov, Pablo Galindo Salgado, and Łukasz Langa in gh-91048.)

calendar

By default, today's date is highlighted in color in calendar's command-line text output. This can be controlled by environment variables. (Contributed by Hugo van Kemenade in gh-128317.)

concurrent.futures

Add a new executor class, InterpreterPoolExecutor, which exposes multiple Python interpreters in the same process ('subinterpreters') to Python code. This uses a pool of independent Python interpreters to execute calls asynchronously. This is separate from the new interpreters module introduced by PEP 734. (Contributed by Eric Snow in gh-124548.)

On Unix platforms other than macOS, 'forkserver' is now the default start method for ProcessPoolExecutor (replacing 'fork'). This change does not affect Windows or macOS, where 'spawn' remains the default start method. If the threading-incompatible fork method is required, you must explicitly request it by supplying a multiprocessing context mp_context to ProcessPoolExecutor. See forkserver restrictions for information and differences with the fork method and how this change may affect existing code with mutable global shared variables and/or shared objects that can not be automatically pickled. (Contributed by Gregory P. Smith in gh-84559.)

Add two new methods to ProcessPoolExecutor, terminate_workers() and kill_workers(), as ways to terminate or kill all living worker processes in the given pool. (Contributed by Charles Machalow in gh-130849.)

Add the optional buffersize parameter to Executor.map to limit the number of submitted tasks whose results have not yet been yielded. If the buffer is full, iteration over the iterables pauses until a result is yielded from the buffer.
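For context, a portable sketch of the Executor.map call the new buffersize parameter extends (buffersize itself is 3.14-only, so it is only shown in a comment here):

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

with ThreadPoolExecutor(max_workers=2) as pool:
    # On 3.14+, pool.map(square, range(6), buffersize=2) would cap the
    # number of submitted tasks whose results have not been consumed yet.
    results = list(pool.map(square, range(6)))

print(results)  # [0, 1, 4, 9, 16, 25]
```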
(Contributed by Enzo Bonnal and Josh Rosenberg in gh-74028.)

configparser

configparser will no longer write config files it cannot read, to improve security. Attempting to write() keys containing delimiters or beginning with the section header pattern will raise an InvalidWriteError. (Contributed by Jacob Lincoln in gh-129270.)

contextvars

Support the context manager protocol for Token objects. (Contributed by Andrew Svetlov in gh-129889.)

ctypes

The layout of bit fields in Structure and Union objects is now a closer match to platform defaults (GCC/Clang or MSVC). In particular, fields no longer overlap. (Contributed by Matthias Görgens in gh-97702.)

The Structure._layout_ class attribute can now be set to help match a non-default ABI. (Contributed by Petr Viktorin in gh-97702.)

The class of Structure/Union field descriptors is now available as CField, and has new attributes to aid debugging and introspection. (Contributed by Petr Viktorin in gh-128715.)

On Windows, the COMError exception is now public. (Contributed by Jun Komoda in gh-126686.)

On Windows, the CopyComPointer() function is now public. (Contributed by Jun Komoda in gh-127275.)

Add memoryview_at(), a function to create a memoryview object that refers to the supplied pointer and length. This works like ctypes.string_at() except it avoids a buffer copy, and is typically useful when implementing pure Python callback functions that are passed dynamically-sized buffers. (Contributed by Rian Hunter in gh-112018.)

Complex types, c_float_complex, c_double_complex, and c_longdouble_complex, are now available if both the compiler and the libffi library support complex C types. (Contributed by Sergey B Kirpichev in gh-61103.)

Add ctypes.util.dllist() for listing the shared libraries loaded by the current process.
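The bit-field layout change above applies to declarations like the following (a minimal sketch with illustrative field names); the Python-level API is unchanged, only the in-memory packing now tracks the platform compiler more closely:

```python
import ctypes

class Flags(ctypes.Structure):
    # Three bit fields packed into a single unsigned int;
    # their layout now follows GCC/Clang or MSVC conventions.
    _fields_ = [
        ("ready", ctypes.c_uint, 1),
        ("error", ctypes.c_uint, 1),
        ("code", ctypes.c_uint, 6),
    ]

f = Flags(ready=1, error=0, code=42)
print(f.ready, f.code)  # 1 42
```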
(Contributed by Brian Ward in gh-119349.)

Move the ctypes.POINTER() types cache from a global internal cache (_pointer_type_cache) to the _CData.__pointer_type__ attribute of the corresponding ctypes types. This will stop the cache from growing without limits in some situations. (Contributed by Sergey Miryanov in gh-100926.)

The py_object type now supports subscription, making it a generic type. (Contributed by Brian Schubert in gh-132168.)

ctypes now supports free-threading builds. (Contributed by Kumar Aditya and Peter Bierma in gh-127945.)

curses

Add the assume_default_colors() function, a refinement of the use_default_colors() function which allows changing the color pair 0. (Contributed by Serhiy Storchaka in gh-133139.)

datetime

Add the strptime() method to the datetime.date and datetime.time classes. (Contributed by Wannes Boeykens in gh-41431.)

decimal

Add Decimal.from_number() as an alternative constructor for Decimal. (Contributed by Serhiy Storchaka in gh-121798.)

Expose IEEEContext() to support creation of contexts corresponding to the IEEE 754 (2008) decimal interchange formats. (Contributed by Sergey B Kirpichev in gh-53032.)

difflib

dis

Add support for rendering full source location information of instructions, rather than only the line number. This feature is added to the following interfaces via the show_positions keyword argument. This feature is also exposed via dis --show-positions. (Contributed by Bénédikt Tran in gh-123165.)

Add the dis --specialized command-line option to show specialized bytecode. (Contributed by Bénédikt Tran in gh-127413.)

errno

faulthandler

Add support for printing the C stack trace on systems that support it, via the new dump_c_stack() function or via the c_stack argument in faulthandler.enable(). (Contributed by Peter Bierma in gh-127604.)

fnmatch

Add filterfalse(), a function to reject names matching a given pattern.
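fnmatch.filterfalse() itself is 3.14-only; this portable sketch shows the existing filter() call and the complement that filterfalse() packages up:

```python
import fnmatch

names = ["notes.txt", "app.py", "README.md", "test_app.py"]

matched = fnmatch.filter(names, "*.py")
# Portable equivalent of fnmatch.filterfalse(names, "*.py") on 3.14+:
rejected = [n for n in names if not fnmatch.fnmatch(n, "*.py")]

print(matched)   # ['app.py', 'test_app.py']
print(rejected)  # ['notes.txt', 'README.md']
```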
(Contributed by Bénédikt Tran in gh-74598.)

fractions

A Fraction object may now be constructed from any object with the as_integer_ratio() method. (Contributed by Serhiy Storchaka in gh-82017.)

Add Fraction.from_number() as an alternative constructor for Fraction. (Contributed by Serhiy Storchaka in gh-121797.)

functools

Add the Placeholder sentinel. This may be used with the partial() or partialmethod() functions to reserve a place for positional arguments in the returned partial object. (Contributed by Dominykas Grigonis in gh-119127.)

Allow the initial parameter of reduce() to be passed as a keyword argument. (Contributed by Sayandip Dutta in gh-125916.)

getopt

getpass

graphlib

Allow TopologicalSorter.prepare() to be called more than once as long as sorting has not started. (Contributed by Daniel Pope in gh-130914.)

heapq

The heapq module has improved support for working with max-heaps, via the following new functions:

hmac

http

Directory lists and error pages generated by the http.server module allow the browser to apply its default dark mode. (Contributed by Yorik Hansen in gh-123430.)

The http.server module now supports serving over HTTPS using the http.server.HTTPSServer class. This functionality is exposed by the command-line interface (python -m http.server) through the following options:

--tls-cert: Path to the TLS certificate file.
--tls-key: Optional path to the private key file.
--tls-password-file: Optional path to the password file for the private key.

(Contributed by Semyon Moroz in gh-85162.)

imaplib

Add IMAP4.idle(), implementing the IMAP4 IDLE command as defined in RFC 2177. (Contributed by Forest in gh-55454.)

inspect

signature() takes a new argument annotation_format to control the annotationlib.Format used for representing annotations.
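For reference, a minimal sketch of the base signature() API that the new annotation_format parameter extends (the function shown is illustrative):

```python
import inspect

def greet(name: str, punctuation: str = "!") -> str:
    return f"Hello, {name}{punctuation}"

sig = inspect.signature(greet)
# On 3.14+, inspect.signature(greet, annotation_format=...) controls how
# the annotations below are resolved and rendered.
print(sig)                   # (name: str, punctuation: str = '!') -> str
print(list(sig.parameters))  # ['name', 'punctuation']
```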
(Contributed by Jelle Zijlstra in gh-101552.)

Signature.format() takes a new argument unquote_annotations. If true, string annotations are displayed without surrounding quotes. (Contributed by Jelle Zijlstra in gh-101552.)

Add function ispackage() to determine whether an object is a package or not. (Contributed by Zhikang Yan in gh-125634.)

io

Reading text from a non-blocking stream with read may now raise a BlockingIOError if the operation cannot immediately return bytes. (Contributed by Giovanni Siragusa in gh-109523.)

Add the Reader and Writer protocols as simpler alternatives to the pseudo-protocols typing.IO, typing.TextIO, and typing.BinaryIO. (Contributed by Sebastian Rittau in gh-127648.)

json

Add exception notes for JSON serialization errors that allow identifying the source of the error. (Contributed by Serhiy Storchaka in gh-122163.)

Allow using the json module as a script using the -m switch: python -m json. This is now preferred to python -m json.tool, which is soft deprecated. See the JSON command-line interface documentation. (Contributed by Trey Hunner in gh-122873.)

By default, the output of the JSON command-line interface is highlighted in color. This can be controlled by environment variables. (Contributed by Tomas Roun in gh-131952.)

linecache

logging.handlers

QueueListener objects now support the context manager protocol. (Contributed by Charles Machalow in gh-132106.)

QueueListener.start now raises a RuntimeError if the listener is already started. (Contributed by Charles Machalow in gh-132106.)

math

Added more detailed error messages for domain errors in the module. (Contributed by Charlie Zhao and Sergey B Kirpichev in gh-101410.)

mimetypes

Add a public command-line interface for the module, invoked via python -m mimetypes.
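The command-line interface wraps the same lookups the module API already exposes; a minimal sketch using long-stable types (the 3.14-only additions listed below, such as font/woff2, are not exercised here):

```python
import mimetypes

# guess_type() returns a (type, encoding) pair.
print(mimetypes.guess_type("photo.png"))       # ('image/png', None)
print(mimetypes.guess_type("archive.tar.gz"))  # ('application/x-tar', 'gzip')
```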
(Contributed by Oleg Iarygin and Hugo van Kemenade in gh-93096.)

Add several new MIME types based on RFCs and common usage:

Microsoft and RFC 8081 MIME types for fonts:
Embedded OpenType: application/vnd.ms-fontobject
OpenType Layout (OTF): font/otf
TrueType: font/ttf
WOFF 1.0: font/woff
WOFF 2.0: font/woff2

RFC 9559 MIME types for Matroska audiovisual data container structures:
audio with no video: audio/matroska (.mka)
video: video/matroska (.mkv)
stereoscopic video: video/matroska-3d (.mk3d)

Images with RFCs:
RFC 1494: CCITT Group 3 (.g3)
RFC 3362: Real-time Facsimile, T.38 (.t38)
RFC 3745: JPEG 2000 (.jp2), extension (.jpx) and compound (.jpm)
RFC 3950: Tag Image File Format Fax eXtended, TIFF-FX (.tfx)
RFC 4047: Flexible Image Transport System (.fits)
RFC 7903: Enhanced Metafile (.emf) and Windows Metafile (.wmf)

Other MIME type additions and changes:
RFC 2361: Change type for .avi to video/vnd.avi and for .wav to audio/vnd.wave
RFC 4337: Add MPEG-4 audio/mp4 (.m4a)
RFC 5334: Add Ogg media (.oga, .ogg and .ogx)
RFC 6713: Add gzip application/gzip (.gz)
RFC 9639: Add FLAC audio/flac (.flac)
RFC 9512: application/yaml MIME type for YAML files (.yaml and .yml)
Add 7z application/x-7z-compressed (.7z)
Add Android Package application/vnd.android.package-archive (.apk) when not strict
Add deb application/x-debian-package (.deb)
Add glTF binary model/gltf-binary (.glb)
Add glTF JSON/ASCII model/gltf+json (.gltf)
Add M4V video/x-m4v (.m4v)
Add PHP application/x-httpd-php (.php)
Add RAR application/vnd.rar (.rar)
Add RPM application/x-rpm (.rpm)
Add STL model/stl (.stl)
Add Windows Media Video video/x-ms-wmv (.wmv)
De facto: Add WebM audio/webm (.weba)
ECMA-376: Add .docx, .pptx and .xlsx types
OASIS: Add OpenDocument .odg, .odp, .ods and .odt types
W3C: Add EPUB application/epub+zip (.epub)

(Contributed by Sahil Prajapati and Hugo van Kemenade in gh-84852, by Sasha “Nelie” Chernykh and Hugo van Kemenade in gh-132056, and by Hugo van Kemenade in gh-89416, gh-85957, and gh-129965.)

multiprocessing

On Unix platforms other than macOS, 'forkserver' is now the default start method (replacing 'fork'). This change does not affect Windows or macOS, where 'spawn' remains the default start method. If the threading-incompatible fork method is required, you must explicitly request it via a context from get_context() (preferred) or change the default via set_start_method(). See forkserver restrictions for information and differences with the fork method and how this change may affect existing code with mutable global shared variables and/or shared objects that can not be automatically pickled. (Contributed by Gregory P. Smith in gh-84559.)

multiprocessing's 'forkserver' start method now authenticates its control socket to avoid relying solely on filesystem permissions to restrict what other processes could cause the forkserver to spawn workers and run code. (Contributed by Gregory P. Smith for gh-97514.)

The multiprocessing proxy objects for list and dict types gain previously overlooked missing methods:
clear() and copy() for proxies of list
fromkeys(), reversed(d), d | {}, {} | d, d |= {'b': 2} for proxies of dict
(Contributed by Roy Hyunjin Han for gh-103134.)

Add support for shared set objects via SyncManager.set(). The set() in Manager() method is now available. (Contributed by Mingyu Park in gh-129949.)

Add the interrupt() method to multiprocessing.Process objects, which terminates the child process by sending SIGINT. This enables finally clauses to print a stack trace for the terminated process. (Contributed by Artem Pulkin in gh-131913.)

operator

Add is_none() and is_not_none() as a pair of functions, such that operator.is_none(obj) is equivalent to obj is None and operator.is_not_none(obj) is equivalent to obj is not None.
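On versions before 3.14 the same predicate can already be spelled with the existing operator.is_; a minimal sketch:

```python
import operator
from functools import partial

# Portable equivalent of operator.is_none (new in 3.14):
is_none = partial(operator.is_, None)

values = [0, None, "", None, "x"]
print([v for v in values if is_none(v)])  # [None, None]
```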
(Contributed by Raymond Hettinger and Nico Mexis in gh-115808.)

os

Add the reload_environ() function to update os.environ and os.environb with changes to the environment made by os.putenv(), by os.unsetenv(), or made outside Python in the same process. (Contributed by Victor Stinner in gh-120057.)

Add the SCHED_DEADLINE and SCHED_NORMAL constants to the os module. (Contributed by James Roy in gh-127688.)

Add the readinto() function to read into a buffer object from a file descriptor. (Contributed by Cody Maloney in gh-129205.)

os.path

The strict parameter to realpath() accepts a new value, ALLOW_MISSING. If used, errors other than FileNotFoundError will be re-raised; the resulting path can be missing but it will be free of symlinks. (Contributed by Petr Viktorin for CVE 2025-4517.)

pathlib

Add methods to pathlib.Path to recursively copy or move files and directories:
copy() copies a file or directory tree to a destination.
copy_into() copies into a destination directory.
move() moves a file or directory tree to a destination.
move_into() moves into a destination directory.
(Contributed by Barney Gale in gh-73991.)

Add the info attribute, which stores an object implementing the new pathlib.types.PathInfo protocol. The object supports querying the file type and internally caching stat() results. Path objects generated by iterdir() are initialized with file type information gleaned from scanning the parent directory. (Contributed by Barney Gale in gh-125413.)

pdb

The pdb module now supports remote attaching to a running Python process using a new -p PID command-line option:

python -m pdb -p 1234

This will connect to the Python process with the given PID and allow you to debug it interactively.
Notice that due to how the Python interpreter works, attaching to a remote process that is blocked in a system call or waiting for I/O will only work once the next bytecode instruction is executed or when the process receives a signal. This feature uses PEP 768 and the new sys.remote_exec() function to attach to the remote process and send the PDB commands to it. (Contributed by Matt Wozniski and Pablo Galindo in gh-131591.)

Hardcoded breakpoints (breakpoint() and set_trace()) now reuse the most recent Pdb instance that calls set_trace(), instead of creating a new one each time. As a result, all the instance-specific data like display and commands are preserved across hardcoded breakpoints. (Contributed by Tian Gao in gh-121450.)

Add a new argument mode to pdb.Pdb. Disable the restart command when pdb is in inline mode. (Contributed by Tian Gao in gh-123757.)

A confirmation prompt will be shown when the user tries to quit pdb in inline mode. y, Y, or EOF will confirm the quit and call sys.exit(), instead of raising bdb.BdbQuit. (Contributed by Tian Gao in gh-124704.)

Inline breakpoints like breakpoint() or pdb.set_trace() will always stop the program at the calling frame, ignoring the skip pattern (if any). (Contributed by Tian Gao in gh-130493.)

Pressing Tab at the beginning of the line in pdb multi-line input will now fill in a 4-space indentation, instead of inserting a \t character. (Contributed by Tian Gao in gh-130471.)

Auto-indent is introduced in pdb multi-line input. It will either keep the indentation of the last line or insert a 4-space indentation when it detects a new code block. (Contributed by Tian Gao in gh-133350.)

$_asynctask is added to access the current asyncio task if applicable. (Contributed by Tian Gao in gh-124367.)

pdb.set_trace_async() is added to support debugging asyncio coroutines. await statements are supported with this function. (Contributed by Tian Gao in gh-132576.)

Source code displayed in pdb will be syntax-highlighted.
This feature can be controlled using the same methods as the default interactive shell, in addition to the newly added colorize argument of pdb.Pdb. (Contributed by Tian Gao and Łukasz Langa in gh-133355.)

pickle

Set the default protocol version on the pickle module to 5. For more details, see pickle protocols.

Add exception notes for pickle serialization errors that allow identifying the source of the error. (Contributed by Serhiy Storchaka in gh-122213.)

platform

Add invalidate_caches(), a function to invalidate cached results in the platform module. (Contributed by Bénédikt Tran in gh-122549.)

pydoc

Annotations in help output are now usually displayed in a format closer to that in the original source. (Contributed by Jelle Zijlstra in gh-101552.)

re

Support \z as a synonym for \Z in regular expressions. It is interpreted unambiguously in many other regular expression engines, unlike \Z, which has subtly different behavior. (Contributed by Serhiy Storchaka in gh-133306.)

\B in regular expressions now matches the empty input string, meaning that it is now always the opposite of \b. (Contributed by Serhiy Storchaka in gh-124130.)

socket

Improve and fix support for Bluetooth sockets:

Fix support of Bluetooth sockets on NetBSD and DragonFly BSD. (Contributed by Serhiy Storchaka in gh-132429.)
Fix support for BTPROTO_HCI on FreeBSD. (Contributed by Victor Stinner in gh-111178.)
Add support for BTPROTO_SCO on FreeBSD. (Contributed by Serhiy Storchaka in gh-85302.)
Add support for cid and bdaddr_type in the address for BTPROTO_L2CAP on FreeBSD. (Contributed by Serhiy Storchaka in gh-132429.)
Add support for channel in the address for BTPROTO_HCI on Linux. (Contributed by Serhiy Storchaka in gh-70145.)
Accept an integer as the address for BTPROTO_HCI on Linux. (Contributed by Serhiy Storchaka in gh-132099.)
Return cid in getsockname() for BTPROTO_L2CAP.
(Contributed by Serhiy Storchaka in gh-132429.)

Add many new constants. (Contributed by Serhiy Storchaka in gh-132734.)

ssl

struct

symtable

sys

The previously undocumented special function sys.getobjects(), which only exists in specialized builds of Python, may now return objects from other interpreters than the one it's called in. (Contributed by Eric Snow in gh-125286.)

Add sys._is_immortal() for determining if an object is immortal. (Contributed by Peter Bierma in gh-128509.)

On FreeBSD, sys.platform no longer contains the major version number. It is always 'freebsd', instead of 'freebsd13' or 'freebsd14'. (Contributed by Michael Osipov in gh-129393.)

Raise DeprecationWarning for sys._clear_type_cache(). This function was deprecated in Python 3.13 but it didn't raise a runtime warning.

Add sys.remote_exec() to implement the new external debugger interface. See PEP 768 for details. (Contributed by Pablo Galindo Salgado, Matt Wozniski, and Ivona Stojanovic in gh-131591.)

Add the sys._jit namespace, containing utilities for introspecting just-in-time compilation. (Contributed by Brandt Bucher in gh-133231.)

sys.monitoring

Add two new monitoring events, BRANCH_LEFT and BRANCH_RIGHT. These replace and deprecate the BRANCH event. (Contributed by Mark Shannon in gh-122548.)

sysconfig

Add the ABIFLAGS key to get_config_vars() on Windows. (Contributed by Xuehai Pan in gh-131799.)

tarfile

data_filter() now normalizes symbolic link targets in order to avoid path traversal attacks. (Contributed by Petr Viktorin in gh-127987 and CVE 2025-4138.)

extractall() now skips fixing up directory attributes when a directory was removed or replaced by another kind of file.
(Contributed by Petr Viktorin in gh-127987 and CVE 2024-12718.)

extract() and extractall() now (re-)apply the extraction filter when substituting a link (hard or symbolic) with a copy of another archive member, and when fixing up directory attributes. The former raises a new exception, LinkFallbackError. (Contributed by Petr Viktorin for CVE 2025-4330 and CVE 2024-12718.)

extract() and extractall() no longer extract rejected members when errorlevel() is zero. (Contributed by Matt Prodani and Petr Viktorin in gh-112887 and CVE 2025-4435.)

threading

threading.Thread.start() now sets the operating system thread name to threading.Thread.name. (Contributed by Victor Stinner in gh-59705.)

tkinter

turtle

Add context managers for turtle.fill(), turtle.poly(), and turtle.no_animation(). (Contributed by Marie Roald and Yngve Mardal Moe in gh-126350.)

types

types.UnionType is now an alias for typing.Union. See below for more details. (Contributed by Jelle Zijlstra in gh-105499.)

typing

The types.UnionType and typing.Union types are now aliases for each other, meaning that both old-style unions (created with Union[int, str]) and new-style unions (int | str) now create instances of the same runtime type. This unifies the behavior between the two syntaxes, but leads to some differences in behavior that may affect users who introspect types at runtime:

Both syntaxes for creating a union now produce the same string representation in repr(). For example, repr(Union[int, str]) is now "int | str" instead of "typing.Union[int, str]".

Unions created using the old syntax are no longer cached. Previously, running Union[int, str] multiple times would return the same object (Union[int, str] is Union[int, str] would be True), but now it will return two different objects. Use == to compare unions for equality, not is. New-style unions have never been cached this way.
This change could increase memory usage for some programs that use a large number of unions created by subscripting typing.Union. However, several factors offset this cost: unions used in annotations are no longer evaluated by default in Python 3.14 because of PEP 649; an instance of types.UnionType is itself much smaller than the object returned by Union[] was on prior Python versions; and removing the cache also saves some space. It is therefore unlikely that this change will cause a significant increase in memory usage for most users.

Previously, old-style unions were implemented using the private class typing._UnionGenericAlias. This class is no longer needed for the implementation, but it has been retained for backward compatibility, with removal scheduled for Python 3.17. Users should use documented introspection helpers like get_origin() and typing.get_args() instead of relying on private implementation details.

It is now possible to use typing.Union itself in isinstance() checks. For example, isinstance(int | str, typing.Union) will return True; previously this raised TypeError.

The __args__ attribute of typing.Union objects is no longer writable.

It is no longer possible to set any attributes on Union objects. This only ever worked for dunder attributes on previous versions, was never documented to work, and was subtly broken in many cases.

(Contributed by Jelle Zijlstra in gh-105499.)

TypeAliasType now supports star unpacking.

unicodedata

The Unicode database has been updated to Unicode 16.0.0.

unittest

unittest output is now colored by default. This can be controlled by environment variables. (Contributed by Hugo van Kemenade in gh-127221.)

unittest discovery supports namespace packages as the start directory again. This was removed in Python 3.11.
(Contributed by Jacob Walls in gh-80958.)

A number of new methods were added in the TestCase class that provide more specialized tests:

assertHasAttr() and assertNotHasAttr() check whether the object has a particular attribute.
assertIsSubclass() and assertNotIsSubclass() check whether the object is a subclass of a particular class, or of one of a tuple of classes.
assertStartsWith(), assertNotStartsWith(), assertEndsWith() and assertNotEndsWith() check whether the Unicode or byte string starts or ends with particular strings.

(Contributed by Serhiy Storchaka in gh-71339.)

urllib

Upgrade the HTTP digest authentication algorithm for urllib.request by supporting SHA-256 digest authentication as specified in RFC 7616. (Contributed by Calvin Bui in gh-128193.)

Improve ergonomics and standards compliance when parsing and emitting file: URLs.

In url2pathname():
Accept a complete URL when the new require_scheme argument is set to true.
Discard URL authority if it matches the local hostname.
Discard URL authority if it resolves to a local IP address when the new resolve_host argument is set to true.
Discard URL query and fragment components.
Raise URLError if a URL authority isn't local, except on Windows where we return a UNC path as before.

In pathname2url():
Return a complete URL when the new add_scheme argument is set to true.
Include an empty URL authority when a path begins with a slash. For example, the path /etc/hosts is converted to the URL ///etc/hosts.

On Windows, drive letters are no longer converted to uppercase, and : characters not following a drive letter no longer cause an OSError exception to be raised.

(Contributed by Barney Gale in gh-125866.)

uuid

Add support for UUID versions 6, 7, and 8 via uuid6(), uuid7(), and uuid8() respectively, as specified in RFC 9562.
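The new generators follow the same interface as the existing ones; a minimal portable sketch (the uuid7() call is 3.14-only and shown only in a comment):

```python
import uuid

u = uuid.uuid4()
print(u.version)  # 4
# On 3.14+, uuid.uuid7() produces time-ordered UUIDs per RFC 9562,
# with uuid.uuid7().version == 7.
```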
(Contributed by Bénédikt Tran in gh-89083.)

NIL and MAX are now available to represent the Nil and Max UUID formats as defined by RFC 9562. (Contributed by Nick Pope in gh-128427.)

Allow generating multiple UUIDs simultaneously on the command line via python -m uuid --count. (Contributed by Simon Legner in gh-131236.)

webbrowser¶
Names in the BROWSER environment variable can now refer to already registered browsers for the webbrowser module, instead of always generating a new browser command. This makes it possible to set BROWSER to the value of one of the supported browsers on macOS.

zipfile¶
Added ZipInfo._for_archive, a method to resolve suitable defaults for a ZipInfo object as used by ZipFile.writestr. (Contributed by Bénédikt Tran in gh-123424.)

ZipFile.writestr() now respects the SOURCE_DATE_EPOCH environment variable in order to better support reproducible builds. (Contributed by Jiahao Li in gh-91279.)

Optimizations¶
The import time of several standard library modules has been improved, including annotationlib, ast, asyncio, base64, cmd, csv, gettext, importlib.util, locale, mimetypes, optparse, pickle, pprint, pstats, shlex, socket, string, subprocess, threading, tomllib, types, and zipfile. (Contributed by Adam Turner, Bénédikt Tran, Chris Markiewicz, Eli Schwartz, Hugo van Kemenade, Jelle Zijlstra, and others in gh-118761.)

The interpreter now avoids some reference count modifications internally when it's safe to do so. This can lead to different values being returned from sys.getrefcount() and Py_REFCNT() compared to previous versions of Python. See below for details.

asyncio¶
Standard benchmark results have improved by 10-20% following the implementation of a new per-thread doubly linked list for native tasks, which also reduces memory usage.
This enables external introspection tools such as python -m asyncio pstree to inspect the call graph of asyncio tasks running in all threads. (Contributed by Kumar Aditya in gh-107803.)

The module now has first-class support for free-threaded builds. This enables parallel execution of multiple event loops across different threads, scaling linearly with the number of threads. (Contributed by Kumar Aditya in gh-128002.)

base64¶
b16decode() is now up to six times faster. (Contributed by Bénédikt Tran, Chris Markiewicz, and Adam Turner in gh-118761.)

bdb¶
The basic debugger now has a sys.monitoring-based backend, which can be selected by passing 'monitoring' to the Bdb class's new backend parameter. (Contributed by Tian Gao in gh-124533.)

difflib¶
The IS_LINE_JUNK() function is now up to twice as fast. (Contributed by Adam Turner and Semyon Moroz in gh-130167.)

gc¶
The new incremental garbage collector means that maximum pause times are reduced by an order of magnitude or more for larger heaps.

Because of this optimization, the meaning of the results of get_threshold() and set_threshold() has changed, along with that of get_count() and get_stats().

For backwards compatibility, get_threshold() continues to return a three-item tuple. The first value is the threshold for young collections, as before; the second value determines the rate at which the old collection is scanned (the default is 10, and higher values mean that the old collection is scanned more slowly). The third value is now meaningless and is always zero. set_threshold() now ignores any items after the second.

get_count() and get_stats() continue to return the same format of results.
The only difference is that instead of the results referring to the young, aging and old generations, the results refer to the young generation and the aging and collecting spaces of the old generation.

In summary, code that attempted to manipulate the behavior of the cycle GC may not work exactly as intended, but it is very unlikely to be harmful. All other code will work just fine. (Contributed by Mark Shannon in gh-108362.)

io¶
pathlib¶
Path.read_bytes now opens files in unbuffered mode, which is between 9% and 17% faster when reading a file in full. (Contributed by Cody Maloney in gh-120754.)

pdb¶
pdb now supports two backends, based on either sys.settrace() or sys.monitoring. Using the pdb CLI or breakpoint() will always use the sys.monitoring backend. Explicitly instantiating pdb.Pdb and its derived classes will use the sys.settrace() backend by default, which is configurable. (Contributed by Tian Gao in gh-124533.)

textwrap¶
Optimize the dedent() function, improving performance by an average of 2.4x, with larger improvements for bigger inputs, and fix a bug with incomplete normalization of blank lines containing whitespace characters other than space and tab.

uuid¶
zlib¶
On Windows, zlib-ng is now used as the implementation of the zlib module in the default binaries. There are no known incompatibilities between zlib-ng and the previously used zlib implementation. This should result in better performance at all compression levels. It is worth noting that zlib.Z_BEST_SPEED (1) may result in significantly less compression than the previous implementation, whilst also significantly reducing the time taken to compress. (Contributed by Steve Dower in gh-91349.)

Removed¶
argparse¶
Remove the type, choices, and metavar parameters of BooleanOptionalAction. These have been deprecated since Python 3.12.
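The removed type, choices, and metavar parameters were never needed for the common case; the plain form below has worked since the action was added in Python 3.9, so it is shown here as an illustrative sketch rather than release-notes text:

```python
import argparse

parser = argparse.ArgumentParser()
# BooleanOptionalAction automatically provides both --verbose and
# --no-verbose; on 3.14 it no longer accepts type/choices/metavar.
parser.add_argument(
    "--verbose",
    action=argparse.BooleanOptionalAction,
    default=False,
)

assert parser.parse_args(["--verbose"]).verbose is True
assert parser.parse_args(["--no-verbose"]).verbose is False
```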
(Contributed by Nikita Sobolev in gh-118805.)

Calling add_argument_group() on an argument group now raises a ValueError. Similarly, calling add_argument_group() or add_mutually_exclusive_group() on a mutually exclusive group now also raises a ValueError. This 'nesting' was never supported, often failed to work correctly, and was unintentionally exposed through inheritance. This functionality has been deprecated since Python 3.11. (Contributed by Savannah Ostrowski in gh-127186.)

ast¶
Remove the following classes, which have been deprecated aliases of Constant since Python 3.8 and have emitted deprecation warnings since Python 3.12:

- Bytes
- Ellipsis
- NameConstant
- Num
- Str

As a consequence of these removals, user-defined visit_Num, visit_Str, visit_Bytes, visit_NameConstant and visit_Ellipsis methods on custom NodeVisitor subclasses will no longer be called when the NodeVisitor subclass is visiting an AST. Define a visit_Constant method instead. (Contributed by Alex Waygood in gh-119562.)

Remove the following deprecated properties on ast.Constant, which were present for compatibility with the now-removed AST classes:

- Constant.n
- Constant.s

Use Constant.value instead.
(Contributed by Alex Waygood in gh-119562.)

asyncio¶
Remove the following classes, methods, and functions, which have been deprecated since Python 3.12:

- AbstractChildWatcher
- FastChildWatcher
- MultiLoopChildWatcher
- PidfdChildWatcher
- SafeChildWatcher
- ThreadedChildWatcher
- AbstractEventLoopPolicy.get_child_watcher()
- AbstractEventLoopPolicy.set_child_watcher()
- get_child_watcher()
- set_child_watcher()

(Contributed by Kumar Aditya in gh-120804.)

asyncio.get_event_loop() now raises a RuntimeError if there is no current event loop, and no longer implicitly creates one. (Contributed by Kumar Aditya in gh-126353.)

There are a few patterns that use asyncio.get_event_loop(); most of them can be replaced with asyncio.run().

If you're running an async function, simply use asyncio.run().

Before:

async def main():
    ...

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(main())
finally:
    loop.close()

After:

async def main():
    ...

asyncio.run(main())

If you need to start something, for example, a server listening on a socket, and then run forever, use asyncio.run() and an asyncio.Event.

Before:

def start_server(loop):
    ...

loop = asyncio.get_event_loop()
try:
    start_server(loop)
    loop.run_forever()
finally:
    loop.close()

After:

def start_server(loop):
    ...

async def main():
    start_server(asyncio.get_running_loop())
    await asyncio.Event().wait()

asyncio.run(main())

If you need to run something in an event loop, then run some blocking code around it, use asyncio.Runner.

Before:

async def operation_one():
    ...

def blocking_code():
    ...

async def operation_two():
    ...

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(operation_one())
    blocking_code()
    loop.run_until_complete(operation_two())
finally:
    loop.close()

After:

async def operation_one():
    ...

def blocking_code():
    ...

async def operation_two():
    ...
with asyncio.Runner() as runner:
    runner.run(operation_one())
    blocking_code()
    runner.run(operation_two())

email¶
Remove email.utils.localtime()'s isdst parameter, which was deprecated and has been ignored since Python 3.12. (Contributed by Hugo van Kemenade in gh-118798.)

importlib.abc¶
Remove the deprecated importlib.abc classes:

- ResourceReader (use importlib.resources.abc.TraversableResources)
- Traversable (use importlib.resources.abc.Traversable)
- TraversableResources (use importlib.resources.abc.TraversableResources)

(Contributed by Jason R. Coombs and Hugo van Kemenade in gh-93963.)

itertools¶
Remove support for copy, deepcopy, and pickle operations on itertools iterators. These have emitted a DeprecationWarning since Python 3.12. (Contributed by Raymond Hettinger in gh-101588.)

pathlib¶
Remove support for passing additional keyword arguments to Path. In previous versions, any such arguments were ignored. (Contributed by Barney Gale in gh-74033.)

Remove support for passing additional positional arguments to PurePath.relative_to() and is_relative_to(). In previous versions, any such arguments were joined onto other. (Contributed by Barney Gale in gh-78707.)

pkgutil¶
Remove the get_loader() and find_loader() functions, which have been deprecated since Python 3.12. (Contributed by Bénédikt Tran in gh-97850.)

pty¶
Remove the master_open() and slave_open() functions, which have been deprecated since Python 3.12. Use pty.openpty() instead. (Contributed by Nikita Sobolev in gh-118824.)

sqlite3¶
Remove version and version_info from the sqlite3 module; use sqlite_version and sqlite_version_info for the actual version number of the runtime SQLite library. (Contributed by Hugo van Kemenade in gh-118924.)

Using a sequence of parameters with named placeholders now raises a ProgrammingError, having been deprecated since Python 3.12. (Contributed by Erlend E.
Aasland in gh-118928 and gh-101693.)

urllib¶
Remove the Quoter class from urllib.parse, which has been deprecated since Python 3.11. (Contributed by Nikita Sobolev in gh-118827.)

Remove the URLopener and FancyURLopener classes from urllib.request, which have been deprecated since Python 3.3. myopener.open() can be replaced with urlopen(), and myopener.retrieve() can be replaced with urlretrieve(). Customisations to the opener classes can be replaced by passing customized handlers to build_opener(). (Contributed by Barney Gale in gh-84850.)

Deprecated¶
New deprecations¶
Passing a complex number as the real or imag argument in the complex() constructor is now deprecated; complex numbers should only be passed as a single positional argument. (Contributed by Serhiy Storchaka in gh-109218.)

- Passing the undocumented keyword argument prefix_chars to the add_argument_group() method is now deprecated. (Contributed by Savannah Ostrowski in gh-125563.)

Deprecated the argparse.FileType type converter. Anything relating to resource management should be handled downstream, after the arguments have been parsed. (Contributed by Serhiy Storchaka in gh-58032.)

- asyncio.iscoroutinefunction() is now deprecated and will be removed in Python 3.16; use inspect.iscoroutinefunction() instead. (Contributed by Jiahao Li and Kumar Aditya in gh-122875.)

The asyncio policy system is deprecated and will be removed in Python 3.16. In particular, the following classes and functions are deprecated:

Users should use asyncio.run() or asyncio.Runner with the loop_factory argument to use the desired event loop implementation. For example, to use asyncio.SelectorEventLoop on Windows:

import asyncio

async def main():
    ...

asyncio.run(main(), loop_factory=asyncio.SelectorEventLoop)

(Contributed by Kumar Aditya in gh-127949.)

codecs: The codecs.open() function is now deprecated and will be removed in a future version of Python. Use open() instead.
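Migrating from codecs.open() is mechanical, since the built-in open() has accepted an encoding argument since Python 3. A small self-contained sketch (the temporary file name is this example's own):

```python
import os
import tempfile

# codecs.open(path, "w", "utf-8") becomes open(path, "w", encoding="utf-8");
# the built-in also handles universal newlines and buffering correctly.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "example.txt")
    with open(path, "w", encoding="utf-8") as f:
        f.write("caf\u00e9")
    with open(path, encoding="utf-8") as f:
        assert f.read() == "caf\u00e9"
```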
(Contributed by Inada Naoki in gh-133036.)

- On non-Windows platforms, setting Structure._pack_ to use an MSVC-compatible default memory layout is now deprecated in favor of setting Structure._layout_ to 'ms', and will be removed in Python 3.19. (Contributed by Petr Viktorin in gh-131747.)

Calling ctypes.POINTER() on a string is now deprecated. Use incomplete types for self-referential structures. Also, the internal ctypes._pointer_type_cache is deprecated. See ctypes.POINTER() for updated implementation details. (Contributed by Sergey Myrianov in gh-100926.)

functools: Calling the Python implementation of functools.reduce() with function or sequence as keyword arguments is now deprecated; the parameters will be made positional-only in Python 3.16. (Contributed by Kirill Podoprigora in gh-121676.)

logging: Support for custom logging handlers with the strm argument is now deprecated and scheduled for removal in Python 3.16. Define handlers with the stream argument instead. (Contributed by Mariusz Felisiak in gh-115032.)

mimetypes: Valid extensions are either empty or must start with '.' for mimetypes.MimeTypes.add_type(). Undotted extensions are deprecated and will raise a ValueError in Python 3.16. (Contributed by Hugo van Kemenade in gh-75223.)

nturl2path: This module is now deprecated. Call urllib.request.url2pathname() and pathname2url() instead. (Contributed by Barney Gale in gh-125866.)

os: The os.popen() and os.spawn* functions are now soft deprecated. They should no longer be used to write new code. The subprocess module is recommended instead. (Contributed by Victor Stinner in gh-120743.)

pathlib: pathlib.PurePath.as_uri() is now deprecated and scheduled for removal in Python 3.19. Use pathlib.Path.as_uri() instead. (Contributed by Barney Gale in gh-123599.)

pdb: The undocumented pdb.Pdb.curframe_locals attribute is now a deprecated read-only property, which will be removed in a future version of Python.
The low-overhead dynamic frame locals access added in Python 3.13 by PEP 667 means that the frame locals cache reference previously stored in this attribute is no longer needed. Derived debuggers should access pdb.Pdb.curframe.f_locals directly in Python 3.13 and later versions. (Contributed by Tian Gao in gh-124369 and gh-125951.)

symtable: Deprecate symtable.Class.get_methods() due to a lack of interest; scheduled for removal in Python 3.16. (Contributed by Bénédikt Tran in gh-119698.)

tkinter: The tkinter.Variable methods trace_variable(), trace_vdelete() and trace_vinfo() are now deprecated. Use trace_add(), trace_remove() and trace_info() instead. (Contributed by Serhiy Storchaka in gh-120220.)

urllib.parse: Accepting objects with false values (like 0 and []), other than empty strings, bytes-like objects and None, in parse_qsl() and parse_qs() is now deprecated. (Contributed by Serhiy Storchaka in gh-116897.)

Pending removal in Python 3.15¶
The import system:

Setting __cached__ on a module while failing to set __spec__.cached is deprecated. In Python 3.15, __cached__ will cease to be set or taken into consideration by the import system or standard library. (gh-97879)

Setting __package__ on a module while failing to set __spec__.parent is deprecated. In Python 3.15, __package__ will cease to be set or taken into consideration by the import system or standard library. (gh-97879)

- The undocumented ctypes.SetPointerType() function has been deprecated since Python 3.13.

- The obsolete and rarely used CGIHTTPRequestHandler has been deprecated since Python 3.13. No direct replacement exists. Anything is better than CGI to interface a web server with a request handler. The --cgi flag to the python -m http.server command-line interface has been deprecated since Python 3.13.

- load_module() method: use exec_module() instead.

- The getdefaultlocale() function has been deprecated since Python 3.11.
Its removal was originally planned for Python 3.13 (gh-90817), but has been postponed to Python 3.15. Use getlocale(), setlocale(), and getencoding() instead. (Contributed by Hugo van Kemenade in gh-111187.)

- PurePath.is_reserved() has been deprecated since Python 3.13. Use os.path.isreserved() to detect reserved paths on Windows.

- java_ver() has been deprecated since Python 3.13. This function is only useful for Jython support, has a confusing API, and is largely untested.

- The check_home argument of sysconfig.is_python_build() has been deprecated since Python 3.12.

- RLock() will take no arguments in Python 3.15. Passing any arguments has been deprecated since Python 3.14, as the Python version does not permit any arguments, while the C version allows any number of positional or keyword arguments, ignoring every argument.

- types.CodeType: Accessing co_lnotab was deprecated in PEP 626 since 3.10 and was planned to be removed in 3.12, but it only got a proper DeprecationWarning in 3.12. It may be removed in 3.15. (Contributed by Nikita Sobolev in gh-101866.)

- The undocumented keyword argument syntax for creating NamedTuple classes (for example, Point = NamedTuple("Point", x=int, y=int)) has been deprecated since Python 3.13. Use the class-based syntax or the functional syntax instead.

When using the functional syntax of TypedDicts, failing to pass a value to the fields parameter (TD = TypedDict("TD")) or passing None (TD = TypedDict("TD", None)) has been deprecated since Python 3.13. Use class TD(TypedDict): pass or TD = TypedDict("TD", {}) to create a TypedDict with zero fields.

The typing.no_type_check_decorator() decorator function has been deprecated since Python 3.13.
After eight years in the typing module, it has yet to be supported by any major type checker.

wave: The getmark(), setmark(), and getmarkers() methods of the Wave_read and Wave_write classes have been deprecated since Python 3.13.

- load_module() has been deprecated since Python 3.10. Use exec_module() instead. (Contributed by Jiahao Li in gh-125746.)

Pending removal in Python 3.16¶
The import system:

Setting __loader__ on a module while failing to set __spec__.loader is deprecated. In Python 3.16, __loader__ will cease to be set or taken into consideration by the import system or the standard library.

- The 'u' format code (wchar_t) has been deprecated in documentation since Python 3.3 and at runtime since Python 3.13. Use the 'w' format code (Py_UCS4) for Unicode characters instead.

- asyncio.iscoroutinefunction() is deprecated and will be removed in Python 3.16; use inspect.iscoroutinefunction() instead. (Contributed by Jiahao Li and Kumar Aditya in gh-122875.)

The asyncio policy system is deprecated and will be removed in Python 3.16. In particular, the following classes and functions are deprecated:

Users should use asyncio.run() or asyncio.Runner with loop_factory to use the desired event loop implementation. For example, to use asyncio.SelectorEventLoop on Windows:

import asyncio

async def main():
    ...

asyncio.run(main(), loop_factory=asyncio.SelectorEventLoop)

(Contributed by Kumar Aditya in gh-127949.)

- Bitwise inversion on boolean types, ~True or ~False, has been deprecated since Python 3.12, as it produces surprising and unintuitive results (-2 and -1). Use not x instead for the logical negation of a Boolean.
In the rare case that you need the bitwise inversion of the underlying integer, convert to int explicitly (~int(x)).

- Calling the Python implementation of functools.reduce() with function or sequence as keyword arguments has been deprecated since Python 3.14.

- Support for custom logging handlers with the strm argument is deprecated and scheduled for removal in Python 3.16. Define handlers with the stream argument instead. (Contributed by Mariusz Felisiak in gh-115032.)

- Valid extensions start with a '.' or are empty for mimetypes.MimeTypes.add_type(). Undotted extensions are deprecated and will raise a ValueError in Python 3.16. (Contributed by Hugo van Kemenade in gh-75223.)

- The ExecError exception has been deprecated since Python 3.14. It has not been used by any function in shutil since Python 3.4, and is now an alias of RuntimeError.

- The Class.get_methods method has been deprecated since Python 3.14.

sys: The _enablelegacywindowsfsencoding() function has been deprecated since Python 3.13. Use the PYTHONLEGACYWINDOWSFSENCODING environment variable instead.

- The sysconfig.expand_makefile_vars() function has been deprecated since Python 3.14. Use the vars argument of sysconfig.get_paths() instead.

- The undocumented and unused TarFile.tarfile attribute has been deprecated since Python 3.13.

Pending removal in Python 3.17¶
- collections.abc.ByteString is scheduled for removal in Python 3.17. Use isinstance(obj, collections.abc.Buffer) to test whether obj implements the buffer protocol at runtime. For use in type annotations, either use Buffer or a union that explicitly specifies the types your code supports (e.g., bytes | bytearray | memoryview). ByteString was originally intended to be an abstract class that would serve as a supertype of both bytes and bytearray.
However, since the ABC never had any methods, knowing that an object was an instance of ByteString never actually told you anything useful about the object. Other common buffer types such as memoryview were also never understood as subtypes of ByteString (either at runtime or by static type checkers). See PEP 688 for more details. (Contributed by Shantanu Jain in gh-91896.)

- Before Python 3.14, old-style unions were implemented using the private class typing._UnionGenericAlias. This class is no longer needed for the implementation, but it has been retained for backward compatibility, with removal scheduled for Python 3.17. Users should use documented introspection helpers like typing.get_origin() and typing.get_args() instead of relying on private implementation details.

typing.ByteString, deprecated since Python 3.9, is scheduled for removal in Python 3.17. Use isinstance(obj, collections.abc.Buffer) to test whether obj implements the buffer protocol at runtime. For use in type annotations, either use Buffer or a union that explicitly specifies the types your code supports (e.g., bytes | bytearray | memoryview). ByteString was originally intended to be an abstract class that would serve as a supertype of both bytes and bytearray. However, since the ABC never had any methods, knowing that an object was an instance of ByteString never actually told you anything useful about the object. Other common buffer types such as memoryview were also never understood as subtypes of ByteString (either at runtime or by static type checkers). See PEP 688 for more details.
(Contributed by Shantanu Jain in gh-91896.)

Pending removal in Python 3.18¶
Pending removal in Python 3.19¶
Pending removal in future versions¶
The following APIs will be removed in the future, although there is currently no date scheduled for their removal.

- Nesting argument groups and nesting mutually exclusive groups are deprecated.

Passing the undocumented keyword argument prefix_chars to add_argument_group() is now deprecated.

The argparse.FileType type converter is deprecated.

- Generators: the throw(type, exc, tb) and athrow(type, exc, tb) signatures are deprecated: use the single-argument signatures throw(exc) and athrow(exc) instead.

Currently Python accepts numeric literals immediately followed by keywords, for example 0in x, 1or x, 0if 1else 2. This allows confusing and ambiguous expressions like [0x1for x in y] (which can be interpreted as [0x1 for x in y] or [0x1f or x in y]). A syntax warning is raised if the numeric literal is immediately followed by one of the keywords and, else, for, if, in, is and or. In a future release it will be changed to a syntax error. (gh-87999)

Support for __index__() and __int__() methods returning a non-int type: these methods will be required to return an instance of a strict subclass of int.

Support for the __float__() method returning a strict subclass of float: these methods will be required to return an instance of float.

Support for the __complex__() method returning a strict subclass of complex: these methods will be required to return an instance of complex.

Passing a complex number as the real or imag argument in the complex() constructor is now deprecated; it should only be passed as a single positional argument. (Contributed by Serhiy Storchaka in gh-109218.)

calendar: The calendar.January and calendar.February constants are deprecated and replaced by calendar.JANUARY and calendar.FEBRUARY. (Contributed by Prince Roshan in gh-103636.)

codecs: use open() instead of codecs.open().
(gh-133038)

codeobject.co_lnotab: use the codeobject.co_lines() method instead.

- utcnow(): use datetime.datetime.now(tz=datetime.UTC).
- utcfromtimestamp(): use datetime.datetime.fromtimestamp(timestamp, tz=datetime.UTC).

gettext: Plural value must be an integer.

- The cache_from_source() debug_override parameter is deprecated: use the optimization parameter instead.

- The EntryPoints tuple interface.

Implicit None on return values.

logging: the warn() method has been deprecated since Python 3.3; use warning() instead.

mailbox: Use of StringIO input and text mode is deprecated; use BytesIO and binary mode instead.

os: Calling os.register_at_fork() in a multi-threaded process.

pydoc.ErrorDuringImport: A tuple value for the exc_info parameter is deprecated; use an exception instance.

re: More strict rules are now applied for numerical group references and group names in regular expressions. Only a sequence of ASCII digits is now accepted as a numerical reference. The group name in bytes patterns and replacement strings can now only contain ASCII letters, digits and underscores.
(Contributed by Serhiy Storchaka in gh-91760.)

- The sre_compile, sre_constants and sre_parse modules.

shutil: rmtree()'s onerror parameter is deprecated in Python 3.12; use the onexc parameter instead.

ssl options and protocols:
- ssl.SSLContext without a protocol argument is deprecated.
- ssl.SSLContext: set_npn_protocols() and selected_npn_protocol() are deprecated: use ALPN instead.
- ssl.OP_NO_SSL* options
- ssl.OP_NO_TLS* options
- ssl.PROTOCOL_SSLv3
- ssl.PROTOCOL_TLS
- ssl.PROTOCOL_TLSv1
- ssl.PROTOCOL_TLSv1_1
- ssl.PROTOCOL_TLSv1_2
- ssl.TLSVersion.SSLv3
- ssl.TLSVersion.TLSv1
- ssl.TLSVersion.TLSv1_1

threading methods:
- threading.Condition.notifyAll(): use notify_all().
- threading.Event.isSet(): use is_set().
- threading.Thread.isDaemon(), threading.Thread.setDaemon(): use the threading.Thread.daemon attribute.
- threading.Thread.getName(), threading.Thread.setName(): use the threading.Thread.name attribute.
- threading.currentThread(): use threading.current_thread().
- threading.activeCount(): use threading.active_count().

The internal class typing._UnionGenericAlias is no longer used to implement typing.Union. To preserve compatibility with users of this private class, a compatibility shim will be provided until at least Python 3.17. (Contributed by Jelle Zijlstra in gh-105499.)

unittest.IsolatedAsyncioTestCase: it is deprecated to return a value that is not None from a test case.

urllib.parse deprecated functions: use urlparse() instead of splitattr(), splithost(), splitnport(), splitpasswd(), splitport(), splitquery(), splittag(), splittype(), splituser(), splitvalue() and to_bytes().

wsgiref: SimpleHandler.stdout.write() should not do partial writes.

xml.etree.ElementTree: Testing the truth value of an Element is deprecated. In a future release it will always return True.
Prefer explicit len(elem) or elem is not None tests instead.

sys._clear_type_cache() is deprecated: use sys._clear_internal_caches() instead.

CPython bytecode changes¶
Replaced the BINARY_SUBSCR opcode by the BINARY_OP opcode with the NB_SUBSCR oparg. (Contributed by Irit Katriel in gh-100239.)

Add the BUILD_INTERPOLATION and BUILD_TEMPLATE opcodes to construct new Interpolation and Template instances, respectively. (Contributed by Lysandros Nikolaou and others in gh-132661; see also PEP 750: Template strings.)

Remove the BUILD_CONST_KEY_MAP opcode. Use BUILD_MAP instead. (Contributed by Mark Shannon in gh-122160.)

Replace the LOAD_ASSERTION_ERROR opcode with LOAD_COMMON_CONSTANT and add support for loading NotImplementedError.

Add the LOAD_FAST_BORROW and LOAD_FAST_BORROW_LOAD_FAST_BORROW opcodes to reduce reference counting overhead when the interpreter can prove that the reference in the frame outlives the reference loaded onto the stack. (Contributed by Matt Page in gh-130704.)

Add the LOAD_SMALL_INT opcode, which pushes a small integer equal to the oparg onto the stack. The RETURN_CONST opcode is removed as it is no longer used. (Contributed by Mark Shannon in gh-125837.)

Add the new LOAD_SPECIAL instruction, and generate code for with and async with statements using it. Removed the BEFORE_WITH and BEFORE_ASYNC_WITH instructions. (Contributed by Mark Shannon in gh-120507.)

Add the POP_ITER opcode to support 'virtual' iterators. (Contributed by Mark Shannon in gh-132554.)

Pseudo-instructions¶
Add the ANNOTATIONS_PLACEHOLDER pseudo-instruction to support partially executed module-level annotations with deferred evaluation of annotations. (Contributed by Jelle Zijlstra in gh-130907.)

Add the BINARY_OP_EXTEND pseudo-instruction, which executes a pair of functions (guard and specialization functions) accessed from the inline cache.
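The BINARY_SUBSCR replacement described above can be observed with dis on whichever interpreter runs the example; the opcode seen depends on the version (BINARY_SUBSCR before 3.14, BINARY_OP with the NB_SUBSCR oparg from 3.14 on):

```python
import dis

# Compile a simple subscription and collect the opcode names it uses.
code = compile("a[0]", "<example>", "eval")
ops = {ins.opname for ins in dis.get_instructions(code)}

# Older interpreters emit BINARY_SUBSCR; 3.14+ emits BINARY_OP instead.
assert "BINARY_SUBSCR" in ops or "BINARY_OP" in ops
print(sorted(ops))
```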
(Contributed by Irit Katriel in gh-100239.)

Add three specializations for CALL_KW: CALL_KW_PY for calls to Python functions, CALL_KW_BOUND_METHOD for calls to bound methods, and CALL_KW_NON_PY for all other calls. (Contributed by Mark Shannon in gh-118093.)

Add the JUMP_IF_TRUE and JUMP_IF_FALSE pseudo-instructions, conditional jumps which do not impact the stack. They are replaced by the sequence COPY 1, TO_BOOL, POP_JUMP_IF_TRUE/FALSE. (Contributed by Irit Katriel in gh-124285.)

Add the LOAD_CONST_MORTAL pseudo-instruction. (Contributed by Mark Shannon in gh-128685.)

Add the LOAD_CONST_IMMORTAL pseudo-instruction, which does the same as LOAD_CONST but is more efficient for immortal objects. (Contributed by Mark Shannon in gh-125837.)

Add the NOT_TAKEN pseudo-instruction, used by sys.monitoring to record branch events (such as BRANCH_LEFT). (Contributed by Mark Shannon in gh-122548.)

C API changes¶
Python configuration C API¶
Add a PyInitConfig C API to configure the Python initialization without relying on C structures, with the ability to make ABI-compatible changes in the future.

Complete the PEP 587 PyConfig C API by adding PyInitConfig_AddModule(), which can be used to add a built-in extension module; a feature previously referred to as the "inittab".

Add PyConfig_Get() and PyConfig_Set() functions to get and set the current runtime configuration.

PEP 587 'Python Initialization Configuration' unified all the ways to configure Python's initialization. This PEP also unifies the configuration of Python's preinitialization and initialization in a single API.
Moreover, this PEP only provides a single choice to embed Python, instead of having the two 'Python' and 'Isolated' choices (PEP 587), to further simplify the API.

The lower-level PEP 587 PyConfig API remains available for use cases with an intentionally higher level of coupling to CPython implementation details (such as emulating the full functionality of CPython's CLI, including its configuration mechanisms).

(Contributed by Victor Stinner in gh-107954.)

New features in the C API¶
Add Py_PACK_VERSION() and Py_PACK_FULL_VERSION(), two new macros for bit-packing Python version numbers. This is useful for comparisons with Py_Version or PY_VERSION_HEX. (Contributed by Petr Viktorin in gh-128629.)

Add the PyBytes_Join(sep, iterable) function, similar to sep.join(iterable) in Python. (Contributed by Victor Stinner in gh-121645.)

Add functions to manipulate the configuration of the current runtime Python interpreter (PEP 741: Python configuration C API):
(Contributed by Victor Stinner in gh-107954.)

Add functions to configure the Python initialization (PEP 741: Python configuration C API):
(Contributed by Victor Stinner in gh-107954.)

Add the Py_fopen() function to open a file. It works similarly to the standard C fopen() function, but accepts a Python object for the path parameter and sets an exception on error. The corresponding new Py_fclose() function should be used to close a file. (Contributed by Victor Stinner in gh-127350.)

Add Py_HashBuffer() to compute and return the hash value of a buffer. (Contributed by Antoine Pitrou and Victor Stinner in gh-122854.)

Add the PyImport_ImportModuleAttr() and PyImport_ImportModuleAttrString() helper functions to import a module and get an attribute of that module. (Contributed by Victor Stinner in gh-128911.)

Add PyIter_NextItem() to replace PyIter_Next(), which has an ambiguous return value.
(Contributed by Irit Katriel and Erlend Aasland in gh-105201.)\nAdd the PyLong_GetSign() function to get the sign of int objects. (Contributed by Sergey B Kirpichev in gh-116560.)\nAdd PyLong_IsPositive(), PyLong_IsNegative() and PyLong_IsZero() for checking if a PyLongObject is positive, negative, or zero, respectively. (Contributed by James Roy and Sergey B Kirpichev in gh-126061.)\nAdd new functions to convert C numbers to/from Python int objects:\n(Contributed by Victor Stinner in gh-120389.)\nAdd a new import and export API for Python int objects (PEP 757):\n(Contributed by Sergey B Kirpichev and Victor Stinner in gh-102471.)\nAdd PyMonitoring_FireBranchLeftEvent() and PyMonitoring_FireBranchRightEvent() for generating BRANCH_LEFT and BRANCH_RIGHT events, respectively. (Contributed by Mark Shannon in gh-122548.)\nAdd the PyType_Freeze() function to make a type immutable. (Contributed by Victor Stinner in gh-121654.)\nAdd PyType_GetBaseByToken() and the Py_tp_token slot for easier superclass identification, which attempts to resolve the type checking issue mentioned in PEP 630. (Contributed in gh-124153.)\nAdd a new PyUnicode_Equal() function to test if two strings are equal. The function is also added to the Limited C API. (Contributed by Victor Stinner in gh-124502.)\nAdd a new PyUnicodeWriter API to create a Python str object, with the following functions:\n(Contributed by Victor Stinner in gh-119182.)\nThe k and K formats in PyArg_ParseTuple() and similar functions now use __index__() if available, like all other integer formats. (Contributed by Serhiy Storchaka in gh-112068.)\nAdd support for a new p format unit in Py_BuildValue() that produces a Python bool object from a C integer. (Contributed by Pablo Galindo in bpo-45325.)\nAdd PyUnstable_IsImmortal() for determining if an object is immortal, for debugging purposes.
(Contributed by Peter Bierma in gh-128509.)\nAdd PyUnstable_Object_EnableDeferredRefcount() for enabling deferred reference counting, as outlined in PEP 703.\nAdd PyUnstable_Object_IsUniquelyReferenced() as a replacement for Py_REFCNT(op) == 1 on free-threaded builds. (Contributed by Peter Bierma in gh-133140.)\nAdd PyUnstable_Object_IsUniqueReferencedTemporary() to determine if an object is a unique temporary object on the interpreter\u2019s operand stack. This can be used in some cases as a replacement for checking if Py_REFCNT() is 1 for Python objects passed as arguments to C API functions. (Contributed by Sam Gross in gh-133164.)\nLimited C API changes\u00b6\nIn the limited C API version 3.14 and newer, Py_TYPE() and Py_REFCNT() are now implemented as an opaque function call to hide implementation details. (Contributed by Victor Stinner in gh-120600 and gh-124127.)\nRemove the PySequence_Fast_GET_SIZE, PySequence_Fast_GET_ITEM, and PySequence_Fast_ITEMS macros from the limited C API, since they have always been broken in the limited C API. (Contributed by Victor Stinner in gh-91417.)\nRemoved C APIs\u00b6\nCreating immutable types with mutable bases was deprecated in Python 3.12 and now raises a TypeError. (Contributed by Nikita Sobolev in gh-119775.)\nRemove the PyDictObject.ma_version_tag member, which was deprecated in Python 3.12. Use the PyDict_AddWatcher() API instead. (Contributed by Sam Gross in gh-124296.)\nRemove the private _Py_InitializeMain() function. It was a provisional API added to Python 3.8 by PEP 587. (Contributed by Victor Stinner in gh-129033.)\nRemove the undocumented APIs Py_C_RECURSION_LIMIT and PyThreadState.c_recursion_remaining. These were added in 3.13 and have been removed without deprecation. Use Py_EnterRecursiveCall() to guard against runaway recursion in C code. (Removed by Petr Viktorin in gh-133079; see also gh-130396.)\nDeprecated C APIs\u00b6\nThe Py_HUGE_VAL macro is now soft deprecated. Use Py_INFINITY instead.
(Contributed by Sergey B Kirpichev in gh-120026.)\nThe Py_IS_NAN, Py_IS_INFINITY, and Py_IS_FINITE macros are now soft deprecated. Use isnan, isinf and isfinite instead, available from math.h since C99. (Contributed by Sergey B Kirpichev in gh-119613.)\nNon-tuple sequences are now deprecated as arguments for the (items) format unit in PyArg_ParseTuple() and other argument parsing functions if items contains format units which store a borrowed buffer or a borrowed reference. (Contributed by Serhiy Storchaka in gh-50333.)\nThe _PyMonitoring_FireBranchEvent function is now deprecated and should be replaced with calls to PyMonitoring_FireBranchLeftEvent() and PyMonitoring_FireBranchRightEvent().\nThe previously undocumented function PySequence_In() is now soft deprecated. Use PySequence_Contains() instead. (Contributed by Yuki Kobayashi in gh-127896.)\nPending removal in Python 3.15\u00b6\nPyImport_ImportModuleNoBlock(): Use PyImport_ImportModule() instead.\nPyWeakref_GetObject() and PyWeakref_GET_OBJECT(): Use PyWeakref_GetRef() instead. The pythoncapi-compat project can be used to get PyWeakref_GetRef() on Python 3.12 and older.\nThe Py_UNICODE type and the Py_UNICODE_WIDE macro: Use wchar_t instead.\nPyUnicode_AsDecodedObject(): Use PyCodec_Decode() instead.\nPyUnicode_AsDecodedUnicode(): Use PyCodec_Decode() instead; note that some codecs (for example, \u201cbase64\u201d) may return a type other than str, such as bytes.\nPyUnicode_AsEncodedObject(): Use PyCodec_Encode() instead.\nPyUnicode_AsEncodedUnicode(): Use PyCodec_Encode() instead; note that some codecs (for example, \u201cbase64\u201d) may return a type other than bytes, such as str.\nPython initialization functions, deprecated in Python 3.13:\nPy_GetPath(): Use PyConfig_Get(\"module_search_paths\") (sys.path) instead.\nPy_GetPrefix(): Use PyConfig_Get(\"base_prefix\") (sys.base_prefix) instead.
Use PyConfig_Get(\"prefix\") (sys.prefix) if virtual environments need to be handled.\nPy_GetExecPrefix(): Use PyConfig_Get(\"base_exec_prefix\") (sys.base_exec_prefix) instead. Use PyConfig_Get(\"exec_prefix\") (sys.exec_prefix) if virtual environments need to be handled.\nPy_GetProgramFullPath(): Use PyConfig_Get(\"executable\") (sys.executable) instead.\nPy_GetProgramName(): Use PyConfig_Get(\"executable\") (sys.executable) instead.\nPy_GetPythonHome(): Use PyConfig_Get(\"home\") or the PYTHONHOME environment variable instead.\nThe pythoncapi-compat project can be used to get PyConfig_Get() on Python 3.13 and older.\nFunctions to configure Python\u2019s initialization, deprecated in Python 3.11:\nPySys_SetArgvEx(): Set PyConfig.argv instead.\nPySys_SetArgv(): Set PyConfig.argv instead.\nPy_SetProgramName(): Set PyConfig.program_name instead.\nPy_SetPythonHome(): Set PyConfig.home instead.\nPySys_ResetWarnOptions(): Clear sys.warnoptions and warnings.filters instead.\nThe Py_InitializeFromConfig() API should be used with PyConfig instead.\nGlobal configuration variables:\nPy_DebugFlag: Use PyConfig.parser_debug or PyConfig_Get(\"parser_debug\") instead.\nPy_VerboseFlag: Use PyConfig.verbose or PyConfig_Get(\"verbose\") instead.\nPy_QuietFlag: Use PyConfig.quiet or PyConfig_Get(\"quiet\") instead.\nPy_InteractiveFlag: Use PyConfig.interactive or PyConfig_Get(\"interactive\") instead.\nPy_InspectFlag: Use PyConfig.inspect or PyConfig_Get(\"inspect\") instead.\nPy_OptimizeFlag: Use PyConfig.optimization_level or PyConfig_Get(\"optimization_level\") instead.\nPy_NoSiteFlag: Use PyConfig.site_import or PyConfig_Get(\"site_import\") instead.\nPy_BytesWarningFlag: Use PyConfig.bytes_warning or PyConfig_Get(\"bytes_warning\") instead.\nPy_FrozenFlag: Use PyConfig.pathconfig_warnings or PyConfig_Get(\"pathconfig_warnings\") instead.\nPy_IgnoreEnvironmentFlag:
Use PyConfig.use_environment or PyConfig_Get(\"use_environment\") instead.\nPy_DontWriteBytecodeFlag: Use PyConfig.write_bytecode or PyConfig_Get(\"write_bytecode\") instead.\nPy_NoUserSiteDirectory: Use PyConfig.user_site_directory or PyConfig_Get(\"user_site_directory\") instead.\nPy_UnbufferedStdioFlag: Use PyConfig.buffered_stdio or PyConfig_Get(\"buffered_stdio\") instead.\nPy_HashRandomizationFlag: Use PyConfig.use_hash_seed and PyConfig.hash_seed, or PyConfig_Get(\"hash_seed\"), instead.\nPy_IsolatedFlag: Use PyConfig.isolated or PyConfig_Get(\"isolated\") instead.\nPy_LegacyWindowsFSEncodingFlag: Use PyPreConfig.legacy_windows_fs_encoding or PyConfig_Get(\"legacy_windows_fs_encoding\") instead.\nPy_LegacyWindowsStdioFlag: Use PyConfig.legacy_windows_stdio or PyConfig_Get(\"legacy_windows_stdio\") instead.\nPy_FileSystemDefaultEncoding, Py_HasFileSystemDefaultEncoding: Use PyConfig.filesystem_encoding or PyConfig_Get(\"filesystem_encoding\") instead.\nPy_FileSystemDefaultEncodeErrors: Use PyConfig.filesystem_errors or PyConfig_Get(\"filesystem_errors\") instead.\nPy_UTF8Mode: Use PyPreConfig.utf8_mode or PyConfig_Get(\"utf8_mode\") instead. (See Py_PreInitialize().)\nThe Py_InitializeFromConfig() API should be used with PyConfig to set these options.
Or PyConfig_Get() can be used to get these options at runtime.\nPending removal in Python 3.16\u00b6\nThe bundled copy of libmpdec.\nPending removal in Python 3.18\u00b6\nThe following private functions are deprecated and planned for removal in Python 3.18:\n_PyBytes_Join(): use PyBytes_Join().\n_PyDict_GetItemStringWithError(): use PyDict_GetItemStringRef().\n_PyDict_Pop(): use PyDict_Pop().\n_PyLong_Sign(): use PyLong_GetSign().\n_PyLong_FromDigits() and _PyLong_New(): use PyLongWriter_Create().\n_PyThreadState_UncheckedGet(): use PyThreadState_GetUnchecked().\n_PyUnicode_AsString(): use PyUnicode_AsUTF8().\n_PyUnicodeWriter_Init(): replace _PyUnicodeWriter_Init(&writer) with writer = PyUnicodeWriter_Create(0).\n_PyUnicodeWriter_Finish(): replace _PyUnicodeWriter_Finish(&writer) with PyUnicodeWriter_Finish(writer).\n_PyUnicodeWriter_Dealloc(): replace _PyUnicodeWriter_Dealloc(&writer) with PyUnicodeWriter_Discard(writer).\n_PyUnicodeWriter_WriteChar(): replace _PyUnicodeWriter_WriteChar(&writer, ch) with PyUnicodeWriter_WriteChar(writer, ch).\n_PyUnicodeWriter_WriteStr(): replace _PyUnicodeWriter_WriteStr(&writer, str) with PyUnicodeWriter_WriteStr(writer, str).\n_PyUnicodeWriter_WriteSubstring(): replace _PyUnicodeWriter_WriteSubstring(&writer, str, start, end) with PyUnicodeWriter_WriteSubstring(writer, str, start, end).\n_PyUnicodeWriter_WriteASCIIString(): replace _PyUnicodeWriter_WriteASCIIString(&writer, str) with PyUnicodeWriter_WriteASCII(writer, str).\n_PyUnicodeWriter_WriteLatin1String(): replace _PyUnicodeWriter_WriteLatin1String(&writer, str) with PyUnicodeWriter_WriteUTF8(writer, str).\n_PyUnicodeWriter_Prepare(): (no replacement).\n_PyUnicodeWriter_PrepareKind(): (no replacement).\n_Py_HashPointer(): use Py_HashPointer().\n_Py_fopen_obj(): use Py_fopen().\nThe pythoncapi-compat project can be used to get these new public functions on Python 3.13 and older.
(Contributed by Victor Stinner in gh-128863.)\nPending removal in future versions\u00b6\nThe following APIs are deprecated and will be removed, although there is currently no date scheduled for their removal.\nPy_TPFLAGS_HAVE_FINALIZE: Unneeded since Python 3.8.\nPyErr_Fetch(): Use PyErr_GetRaisedException() instead.\nPyErr_NormalizeException(): Use PyErr_GetRaisedException() instead.\nPyErr_Restore(): Use PyErr_SetRaisedException() instead.\nPyModule_GetFilename(): Use PyModule_GetFilenameObject() instead.\nPyOS_AfterFork(): Use PyOS_AfterFork_Child() instead.\nPySlice_GetIndicesEx(): Use PySlice_Unpack() and PySlice_AdjustIndices() instead.\nPyUnicode_READY(): Unneeded since Python 3.12.\nPyErr_Display(): Use PyErr_DisplayException() instead.\n_PyErr_ChainExceptions(): Use _PyErr_ChainExceptions1() instead.\nPyBytesObject.ob_shash member: call PyObject_Hash() instead.\nThread Local Storage (TLS) API:\nPyThread_create_key(): Use PyThread_tss_alloc() instead.\nPyThread_delete_key(): Use PyThread_tss_free() instead.\nPyThread_set_key_value(): Use PyThread_tss_set() instead.\nPyThread_get_key_value(): Use PyThread_tss_get() instead.\nPyThread_delete_key_value(): Use PyThread_tss_delete() instead.\nPyThread_ReInitTLS(): Unneeded since Python 3.7.\nBuild changes\u00b6\nPEP 776: Emscripten is now an officially supported platform at tier 3. As part of this effort, more than 25 bugs in Emscripten libc were fixed. Emscripten now includes support for ctypes, termios, and fcntl, as well as experimental support for the new default interactive shell. (Contributed by R. Hood Chatham in gh-127146, gh-127683, and gh-136931.)\nOfficial Android binary releases are now provided on python.org.\nGNU Autoconf 2.72 is now required to generate configure. (Contributed by Erlend Aasland in gh-115765.)\nwasm32-unknown-emscripten is now a PEP 11 tier 3 platform. (Contributed by R.
Hood Chatham in gh-127146, gh-127683, and gh-136931.)\n#pragma-based linking with python3*.lib can now be switched off with Py_NO_LINK_LIB. (Contributed by Jean-Christophe Fillion-Robin in gh-82909.)\nCPython now enables a set of recommended compiler options by default for improved security. Use the --disable-safety configure option to disable them, or the --enable-slower-safety option for a larger set of compiler options, albeit with a performance cost.\nThe WITH_FREELISTS macro and the --without-freelists configure option have been removed.\nThe new configure option --with-tail-call-interp may be used to enable the experimental tail-call interpreter. See A new type of interpreter for further details.\nTo disable the new remote debugging support, use the --without-remote-debug configure option. This may be useful for security reasons.\niOS and macOS apps can now be configured to redirect stdout and stderr content to the system log. (Contributed by Russell Keith-Magee in gh-127592.)\nThe iOS testbed is now able to stream test output while the test is running. The testbed can also be used to run the test suite of projects other than CPython itself. (Contributed by Russell Keith-Magee in gh-127592.)\nbuild-details.json\u00b6\nInstallations of Python now contain a new file, build-details.json. This is a static JSON document containing build details for CPython, to allow for introspection without needing to run code. This is helpful for use cases such as Python launchers, cross-compilation, and so on.\nbuild-details.json must be installed in the platform-independent standard library directory.
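As a minimal sketch of the kind of introspection this enables (assuming only that the file sits in the standard library directory named above; the existence check keeps the snippet working on builds that predate the file):

```python
import json
import os
import sysconfig

# build-details.json is installed in the platform-independent standard
# library directory, i.e. the 'stdlib' sysconfig path (Python 3.14+).
path = os.path.join(sysconfig.get_path("stdlib"), "build-details.json")

if os.path.exists(path):
    # A static JSON document: no code from the target build needs to run.
    with open(path, encoding="utf-8") as f:
        details = json.load(f)
    print(sorted(details))  # top-level keys of the build description
else:
    print("no build-details.json (older Python build)")
```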
This corresponds to the \u2018stdlib\u2019 sysconfig installation path, which can be found by running sysconfig.get_path('stdlib').\nSee also\nPEP 739 \u2013 build-details.json 1.0 \u2013 a static description file for Python build details\nDiscontinuation of PGP signatures\u00b6\nPGP (Pretty Good Privacy) signatures will not be provided for releases of Python 3.14 or future versions. To verify CPython artifacts, users must use Sigstore verification materials. Releases have been signed using Sigstore since Python 3.11.\nThis change in release process was specified in PEP 761.\nFree-threaded Python is officially supported\u00b6\nThe free-threaded build of Python is now supported and no longer experimental. This is the start of phase II, where free-threaded Python is officially supported but still optional.\nThe free-threading team are confident that the project is on the right path, and appreciate the continued dedication from everyone working to make free-threading ready for broader adoption across the Python community.\nWith these recommendations and the acceptance of this PEP, the Python developer community should broadly advertise that free-threading is a supported Python build option now and into the future, and that it will not be removed without a proper deprecation schedule.\nAny decision to transition to phase III, with free-threading as the default or sole build of Python, is still undecided, and dependent on many factors both within CPython itself and the community. This decision is for the future.\nBinary releases for the experimental just-in-time compiler\u00b6\nThe official macOS and Windows release binaries now include an experimental just-in-time (JIT) compiler. Although it is not recommended for production use, it can be tested by setting PYTHON_JIT=1 as an environment variable.
Downstream source builds and redistributors can use the --enable-experimental-jit=yes-off configuration option for similar behavior.\nThe JIT is at an early stage and still in active development. As such, the typical performance impact of enabling it can range from 10% slower to 20% faster, depending on workload. To aid in testing and evaluation, a set of introspection functions has been provided in the sys._jit namespace. sys._jit.is_available() can be used to determine if the current executable supports JIT compilation, while sys._jit.is_enabled() can be used to tell if JIT compilation has been enabled for the current process.\nCurrently, the most significant missing functionality is that native debuggers and profilers like gdb and perf are unable to unwind through JIT frames (Python debuggers and profilers, like pdb or profile, continue to work without modification). Free-threaded builds do not support JIT compilation.\nPlease report any bugs or major performance regressions that you encounter!\nSee also\nPorting to Python 3.14\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code.\nChanges in the Python API\u00b6\nOn Unix platforms other than macOS, forkserver is now the default start method for multiprocessing and ProcessPoolExecutor, instead of fork. If you encounter NameErrors or pickling errors coming out of multiprocessing or concurrent.futures, see the forkserver restrictions. This change does not affect Windows or macOS, where \u2018spawn\u2019 remains the default start method.\nfunctools.partial is now a method descriptor. Wrap it in staticmethod() if you want to preserve the old behavior.
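The staticmethod() workaround can be sketched as follows (the greet function and Greeter class are illustrative, not from the release notes). On 3.14, a bare partial in a class body would bind the instance as its first argument when called through an instance; the wrapped version keeps the old behavior on all versions:

```python
from functools import partial


def greet(greeting, name):
    return f"{greeting}, {name}!"


class Greeter:
    # Wrapping in staticmethod() preserves the pre-3.14 behavior:
    # the instance is never passed as the first positional argument.
    hello = staticmethod(partial(greet, "Hello"))


print(Greeter().hello("world"))  # Hello, world!
```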
(Contributed by Serhiy Storchaka and Dominykas Grigonis in gh-121027.)\nThe garbage collector is now incremental, which means that the behavior of gc.collect() changes slightly:\ngc.collect(1): Performs an increment of garbage collection, rather than collecting generation 1.\nOther calls to gc.collect() are unchanged.\nThe locale.nl_langinfo() function now temporarily sets the LC_CTYPE locale in some cases. This temporary change affects other threads. (Contributed by Serhiy Storchaka in gh-69998.)\ntypes.UnionType is now an alias for typing.Union, causing changes in some behaviors. See above for more details. (Contributed by Jelle Zijlstra in gh-105499.)\nThe runtime behavior of annotations has changed in various ways; see above for details. While most code that interacts with annotations should continue to work, some undocumented details may behave differently.\nAs part of making the mimetypes CLI public, it now exits with 1 on failure instead of 0, and with 2 on incorrect command-line parameters instead of 1.
Error messages are now printed to stderr.\nThe \B pattern in regular expressions now matches the empty string when given as the entire pattern, which may cause behavioural changes.\nOn FreeBSD, sys.platform no longer contains the major version number.\nChanges in annotations (PEP 649 and PEP 749)\u00b6\nThis section contains guidance on changes that may be needed to annotations, or to Python code that interacts with or introspects annotations, due to the changes related to deferred evaluation of annotations.\nIn the majority of cases, working code from older versions of Python will not require any changes.\nImplications for annotated code\u00b6\nIf you define annotations in your code (for example, for use with a static type checker), then this change probably does not affect you: you can keep writing annotations the same way you did with previous versions of Python.\nYou will likely be able to remove quoted strings in annotations, which are frequently used for forward references. Similarly, if you use from __future__ import annotations to avoid having to write strings in annotations, you may well be able to remove that import once you support only Python 3.14 and newer.\nHowever, if you rely on third-party libraries that read annotations, those libraries may need changes to support unquoted annotations before they work as expected.\nImplications for readers of __annotations__\u00b6\nIf your code reads the __annotations__ attribute on objects, you may want to make changes in order to support code that relies on deferred evaluation of annotations. For example, you may want to use annotationlib.get_annotations() with the FORWARDREF format, as the dataclasses module now does.\nThe external typing_extensions package provides partial backports of some of the functionality of the annotationlib module, such as the Format enum and the get_annotations() function. These can be used to write cross-version code that takes advantage of the new behavior in
Python 3.14.\nfrom __future__ import annotations\u00b6\nIn Python 3.7, PEP 563 introduced the from __future__ import annotations future statement, which turns all annotations into strings.\nHowever, this statement is now deprecated and is expected to be removed in a future version of Python. This removal will not happen until after Python 3.13, the last version of Python without support for deferred evaluation of annotations, reaches its end of life in 2029.\nIn Python 3.14, the behavior of code using from __future__ import annotations is unchanged.\nChanges in the C API\u00b6\nPy_Finalize() now deletes all interned strings. This is backwards incompatible for any C extension that holds onto an interned string after a call to Py_Finalize() and that is reused after a call to Py_Initialize(). Any issues arising from this behavior will normally result in crashes during the execution of the subsequent call to Py_Initialize(), from accessing uninitialized memory. To fix, use an address sanitizer to identify any use-after-free coming from an interned string, and deallocate it during module shutdown. (Contributed by Eddie Elizondo in gh-113601.)\nThe Unicode Exception Objects C API now raises a TypeError if its exception argument is not a UnicodeError object. (Contributed by B\u00e9n\u00e9dikt Tran in gh-127691.)\nThe interpreter internally avoids some reference count modifications when loading objects onto the operand stack by borrowing references when possible. This can lead to smaller reference count values compared to previous Python versions.
C API extensions that checked Py_REFCNT() of 1 to determine if a function argument is not referenced by any other code should instead use PyUnstable_Object_IsUniqueReferencedTemporary() as a safer replacement.\nPrivate functions promoted to public C APIs:\n_PyBytes_Join(): PyBytes_Join()\n_PyLong_IsNegative(): PyLong_IsNegative()\n_PyLong_IsPositive(): PyLong_IsPositive()\n_PyLong_IsZero(): PyLong_IsZero()\n_PyLong_Sign(): PyLong_GetSign()\n_PyUnicodeWriter_Dealloc(): PyUnicodeWriter_Discard()\n_PyUnicodeWriter_Finish(): PyUnicodeWriter_Finish()\n_PyUnicodeWriter_Init(): use PyUnicodeWriter_Create()\n_PyUnicodeWriter_Prepare(): (no replacement)\n_PyUnicodeWriter_PrepareKind(): (no replacement)\n_PyUnicodeWriter_WriteChar(): PyUnicodeWriter_WriteChar()\n_PyUnicodeWriter_WriteStr(): PyUnicodeWriter_WriteStr()\n_PyUnicodeWriter_WriteSubstring(): PyUnicodeWriter_WriteSubstring()\n_PyUnicode_EQ(): PyUnicode_Equal()\n_PyUnicode_Equal(): PyUnicode_Equal()\n_Py_GetConfig(): PyConfig_Get() and PyConfig_GetInt()\n_Py_HashBytes(): Py_HashBuffer()\n_Py_fopen_obj(): Py_fopen()\n_PyMutex_IsLocked(): PyMutex_IsLocked()\nThe pythoncapi-compat project can be used to get most of these new functions on Python 3.13 and older.\nNotable changes in 3.14.1\u00b6\nAdd PyUnstable_ThreadState_SetStackProtection() and PyUnstable_ThreadState_ResetStackProtection() functions to set the stack protection base address and stack protection size of a Python thread state. (Contributed by Victor Stinner in gh-139653.)", "code_snippets": ["\n\n", " ", "\n ", "\n\n", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 28747}
{"url": "https://docs.python.org/3/whatsnew/index.html", "title": "What\u2019s New in Python", "content": "What\u2019s New in Python\u00b6\nThe \u201cWhat\u2019s New in Python\u201d series of essays takes tours through the most important changes between major Python versions.
They are a \u201cmust read\u201d for anyone wishing to stay up-to-date after a new release.\n- What\u2019s new in Python 3.14\n- What\u2019s New In Python 3.13\n- What\u2019s New In Python 3.12\n- What\u2019s New In Python 3.11\n- Summary \u2013 Release highlights\n- New Features\n- New Features Related to Type Hints\n- Other Language Changes\n- Other CPython Implementation Changes\n- New Modules\n- Improved Modules\n- Optimizations\n- Faster CPython\n- CPython bytecode changes\n- Deprecated\n- Pending Removal in Python 3.12\n- Removed\n- Porting to Python 3.11\n- Build Changes\n- C API Changes\n- Notable changes in 3.11.4\n- Notable changes in 3.11.5\n- What\u2019s New In Python 3.10\n- Summary \u2013 Release highlights\n- New Features\n- New Features Related to Type Hints\n- Other Language Changes\n- New Modules\n- Improved Modules\n- Optimizations\n- Deprecated\n- Removed\n- Porting to Python 3.10\n- CPython bytecode changes\n- Build Changes\n- C API Changes\n- Notable security feature in 3.10.7\n- Notable security feature in 3.10.8\n- Notable changes in 3.10.12\n- What\u2019s New In Python 3.9\n- Summary \u2013 Release highlights\n- You should check for DeprecationWarning in your code\n- New Features\n- Other Language Changes\n- New Modules\n- Improved Modules\n- Optimizations\n- Deprecated\n- Removed\n- Porting to Python 3.9\n- Build Changes\n- C API Changes\n- Notable changes in Python 3.9.1\n- Notable changes in Python 3.9.2\n- Notable changes in Python 3.9.3\n- Notable changes in Python 3.9.5\n- Notable security feature in 3.9.14\n- Notable changes in 3.9.17\n- What\u2019s New In Python 3.8\n- Summary \u2013 Release highlights\n- New Features\n- Other Language Changes\n- New Modules\n- Improved Modules\n- Optimizations\n- Build and C API Changes\n- Deprecated\n- API and Feature Removals\n- Porting to Python 3.8\n- Notable changes in Python 3.8.1\n- Notable changes in Python 3.8.2\n- Notable changes in Python 3.8.3\n- Notable changes in Python 3.8.8\n- 
Notable changes in Python 3.8.9\n- Notable changes in Python 3.8.10\n- Notable changes in Python 3.8.10\n- Notable changes in Python 3.8.12\n- Notable security feature in 3.8.14\n- Notable changes in 3.8.17\n- What\u2019s New In Python 3.7\n- Summary \u2013 Release Highlights\n- New Features\n- Other Language Changes\n- New Modules\n- Improved Modules\n- C API Changes\n- Build Changes\n- Optimizations\n- Other CPython Implementation Changes\n- Deprecated Python Behavior\n- Deprecated Python modules, functions and methods\n- Deprecated functions and types of the C API\n- Platform Support Removals\n- API and Feature Removals\n- Module Removals\n- Windows-only Changes\n- Porting to Python 3.7\n- Notable changes in Python 3.7.1\n- Notable changes in Python 3.7.2\n- Notable changes in Python 3.7.6\n- Notable changes in Python 3.7.10\n- Notable changes in Python 3.7.11\n- Notable security feature in 3.7.14\n- What\u2019s New In Python 3.6\n- Summary \u2013 Release highlights\n- New Features\n- Other Language Changes\n- New Modules\n- Improved Modules\n- Optimizations\n- Build and C API Changes\n- Other Improvements\n- Deprecated\n- Removed\n- Porting to Python 3.6\n- Notable changes in Python 3.6.2\n- Notable changes in Python 3.6.4\n- Notable changes in Python 3.6.5\n- Notable changes in Python 3.6.7\n- Notable changes in Python 3.6.10\n- Notable changes in Python 3.6.13\n- Notable changes in Python 3.6.14\n- What\u2019s New In Python 3.5\n- What\u2019s New In Python 3.4\n- What\u2019s New In Python 3.3\n- Summary \u2013 Release highlights\n- PEP 405: Virtual Environments\n- PEP 420: Implicit Namespace Packages\n- PEP 3118: New memoryview implementation and buffer protocol documentation\n- PEP 393: Flexible String Representation\n- PEP 397: Python Launcher for Windows\n- PEP 3151: Reworking the OS and IO exception hierarchy\n- PEP 380: Syntax for Delegating to a Subgenerator\n- PEP 409: Suppressing exception context\n- PEP 414: Explicit Unicode literals\n- PEP 3155: 
Qualified name for classes and functions\n- PEP 412: Key-Sharing Dictionary\n- PEP 362: Function Signature Object\n- PEP 421: Adding sys.implementation\n- Using importlib as the Implementation of Import\n- Other Language Changes\n- A Finer-Grained Import Lock\n- Builtin functions and types\n- New Modules\n- Improved Modules\n- Optimizations\n- Build and C API Changes\n- Deprecated\n- Porting to Python 3.3\n- What\u2019s New In Python 3.2\n- PEP 384: Defining a Stable ABI\n- PEP 389: Argparse Command Line Parsing Module\n- PEP 391: Dictionary Based Configuration for Logging\n- PEP 3148: The\nconcurrent.futures\nmodule - PEP 3147: PYC Repository Directories\n- PEP 3149: ABI Version Tagged .so Files\n- PEP 3333: Python Web Server Gateway Interface v1.0.1\n- Other Language Changes\n- New, Improved, and Deprecated Modules\n- Multi-threading\n- Optimizations\n- Unicode\n- Codecs\n- Documentation\n- IDLE\n- Code Repository\n- Build and C API Changes\n- Porting to Python 3.2\n- What\u2019s New In Python 3.1\n- What\u2019s New In Python 3.0\n- What\u2019s New in Python 2.7\n- The Future for Python 2.x\n- Changes to the Handling of Deprecation Warnings\n- Python 3.1 Features\n- PEP 372: Adding an Ordered Dictionary to collections\n- PEP 378: Format Specifier for Thousands Separator\n- PEP 389: The argparse Module for Parsing Command Lines\n- PEP 391: Dictionary-Based Configuration For Logging\n- PEP 3106: Dictionary Views\n- PEP 3137: The memoryview Object\n- Other Language Changes\n- New and Improved Modules\n- Build and C API Changes\n- Other Changes and Fixes\n- Porting to Python 2.7\n- New Features Added to Python 2.7 Maintenance Releases\n- Acknowledgements\n- What\u2019s New in Python 2.6\n- Python 3.0\n- Changes to the Development Process\n- PEP 343: The \u2018with\u2019 statement\n- PEP 366: Explicit Relative Imports From a Main Module\n- PEP 370: Per-user\nsite-packages\nDirectory - PEP 371: The\nmultiprocessing\nPackage - PEP 3101: Advanced String Formatting\n- PEP 
3105:\nprint\nAs a Function - PEP 3110: Exception-Handling Changes\n- PEP 3112: Byte Literals\n- PEP 3116: New I/O Library\n- PEP 3118: Revised Buffer Protocol\n- PEP 3119: Abstract Base Classes\n- PEP 3127: Integer Literal Support and Syntax\n- PEP 3129: Class Decorators\n- PEP 3141: A Type Hierarchy for Numbers\n- Other Language Changes\n- New and Improved Modules\n- Deprecations and Removals\n- Build and C API Changes\n- Porting to Python 2.6\n- Acknowledgements\n- What\u2019s New in Python 2.5\n- PEP 308: Conditional Expressions\n- PEP 309: Partial Function Application\n- PEP 314: Metadata for Python Software Packages v1.1\n- PEP 328: Absolute and Relative Imports\n- PEP 338: Executing Modules as Scripts\n- PEP 341: Unified try/except/finally\n- PEP 342: New Generator Features\n- PEP 343: The \u2018with\u2019 statement\n- PEP 352: Exceptions as New-Style Classes\n- PEP 353: Using ssize_t as the index type\n- PEP 357: The \u2018__index__\u2019 method\n- Other Language Changes\n- New, Improved, and Removed Modules\n- Build and C API Changes\n- Porting to Python 2.5\n- Acknowledgements\n- What\u2019s New in Python 2.4\n- PEP 218: Built-In Set Objects\n- PEP 237: Unifying Long Integers and Integers\n- PEP 289: Generator Expressions\n- PEP 292: Simpler String Substitutions\n- PEP 318: Decorators for Functions and Methods\n- PEP 322: Reverse Iteration\n- PEP 324: New subprocess Module\n- PEP 327: Decimal Data Type\n- PEP 328: Multi-line Imports\n- PEP 331: Locale-Independent Float/String Conversions\n- Other Language Changes\n- New, Improved, and Deprecated Modules\n- Build and C API Changes\n- Porting to Python 2.4\n- Acknowledgements\n- What\u2019s New in Python 2.3\n- PEP 218: A Standard Set Datatype\n- PEP 255: Simple Generators\n- PEP 263: Source Code Encodings\n- PEP 273: Importing Modules from ZIP Archives\n- PEP 277: Unicode file name support for Windows NT\n- PEP 278: Universal Newline Support\n- PEP 279: enumerate()\n- PEP 282: The logging Package\n- PEP 
285: A Boolean Type\n- PEP 293: Codec Error Handling Callbacks\n- PEP 301: Package Index and Metadata for Distutils\n- PEP 302: New Import Hooks\n- PEP 305: Comma-separated Files\n- PEP 307: Pickle Enhancements\n- Extended Slices\n- Other Language Changes\n- New, Improved, and Deprecated Modules\n- Pymalloc: A Specialized Object Allocator\n- Build and C API Changes\n- Other Changes and Fixes\n- Porting to Python 2.3\n- Acknowledgements\n- What\u2019s New in Python 2.2\n- Introduction\n- PEPs 252 and 253: Type and Class Changes\n- PEP 234: Iterators\n- PEP 255: Simple Generators\n- PEP 237: Unifying Long Integers and Integers\n- PEP 238: Changing the Division Operator\n- Unicode Changes\n- PEP 227: Nested Scopes\n- New and Improved Modules\n- Interpreter Changes and Fixes\n- Other Changes and Fixes\n- Acknowledgements\n- What\u2019s New in Python 2.1\n- Introduction\n- PEP 227: Nested Scopes\n- PEP 236: __future__ Directives\n- PEP 207: Rich Comparisons\n- PEP 230: Warning Framework\n- PEP 229: New Build System\n- PEP 205: Weak References\n- PEP 232: Function Attributes\n- PEP 235: Importing Modules on Case-Insensitive Platforms\n- PEP 217: Interactive Display Hook\n- PEP 208: New Coercion Model\n- PEP 241: Metadata in Python Packages\n- New and Improved Modules\n- Other Changes and Fixes\n- Acknowledgements\n- What\u2019s New in Python 2.0\n- Introduction\n- What About Python 1.6?\n- New Development Process\n- Unicode\n- List Comprehensions\n- Augmented Assignment\n- String Methods\n- Garbage Collection of Cycles\n- Other Core Changes\n- Porting to 2.0\n- Extending/Embedding Changes\n- Distutils: Making Modules Easy to Install\n- XML Modules\n- Module changes\n- New modules\n- IDLE Improvements\n- Deleted and Deprecated Modules\n- Acknowledgements\nThe \u201cChangelog\u201d is an HTML version of the file built from the contents of the Misc/NEWS.d directory tree, which contains all nontrivial changes to Python for the current version.\n- Changelog\n- Python next\n- 
Python 3.14.3 final\n- Python 3.14.2 final\n- Python 3.14.1 final\n- Python 3.14.0 final\n- Python 3.14.0 release candidate 3\n- Python 3.14.0 release candidate 2\n- Python 3.14.0 release candidate 1\n- Python 3.14.0 beta 4\n- Python 3.14.0 beta 3\n- Python 3.14.0 beta 2\n- Python 3.14.0 beta 1\n- Python 3.14.0 alpha 7\n- Python 3.14.0 alpha 6\n- Python 3.14.0 alpha 5\n- Python 3.14.0 alpha 4\n- Python 3.14.0 alpha 3\n- Python 3.14.0 alpha 2\n- Python 3.14.0 alpha 1\n- Python 3.13.0 beta 1\n- Python 3.13.0 alpha 6\n- Python 3.13.0 alpha 5\n- Python 3.13.0 alpha 4\n- Python 3.13.0 alpha 3\n- Python 3.13.0 alpha 2\n- Python 3.13.0 alpha 1\n- Python 3.12.0 beta 1\n- Python 3.12.0 alpha 7\n- Python 3.12.0 alpha 6\n- Python 3.12.0 alpha 5\n- Python 3.12.0 alpha 4\n- Python 3.12.0 alpha 3\n- Python 3.12.0 alpha 2\n- Python 3.12.0 alpha 1\n- Python 3.11.0 beta 1\n- Python 3.11.0 alpha 7\n- Python 3.11.0 alpha 6\n- Python 3.11.0 alpha 5\n- Python 3.11.0 alpha 4\n- Python 3.11.0 alpha 3\n- Python 3.11.0 alpha 2\n- Python 3.11.0 alpha 1\n- Python 3.10.0 beta 1\n- Python 3.10.0 alpha 7\n- Python 3.10.0 alpha 6\n- Python 3.10.0 alpha 5\n- Python 3.10.0 alpha 4\n- Python 3.10.0 alpha 3\n- Python 3.10.0 alpha 2\n- Python 3.10.0 alpha 1\n- Python 3.9.0 beta 1\n- Python 3.9.0 alpha 6\n- Python 3.9.0 alpha 5\n- Python 3.9.0 alpha 4\n- Python 3.9.0 alpha 3\n- Python 3.9.0 alpha 2\n- Python 3.9.0 alpha 1\n- Python 3.8.0 beta 1\n- Python 3.8.0 alpha 4\n- Python 3.8.0 alpha 3\n- Python 3.8.0 alpha 2\n- Python 3.8.0 alpha 1\n- Python 3.7.0 final\n- Python 3.7.0 release candidate 1\n- Python 3.7.0 beta 5\n- Python 3.7.0 beta 4\n- Python 3.7.0 beta 3\n- Python 3.7.0 beta 2\n- Python 3.7.0 beta 1\n- Python 3.7.0 alpha 4\n- Python 3.7.0 alpha 3\n- Python 3.7.0 alpha 2\n- Python 3.7.0 alpha 1\n- Python 3.6.6 final\n- Python 3.6.6 release candidate 1\n- Python 3.6.5 final\n- Python 3.6.5 release candidate 1\n- Python 3.6.4 final\n- Python 3.6.4 release candidate 1\n- Python 3.6.3 final\n- 
Python 3.6.3 release candidate 1\n- Python 3.6.2 final\n- Python 3.6.2 release candidate 2\n- Python 3.6.2 release candidate 1\n- Python 3.6.1 final\n- Python 3.6.1 release candidate 1\n- Python 3.6.0 final\n- Python 3.6.0 release candidate 2\n- Python 3.6.0 release candidate 1\n- Python 3.6.0 beta 4\n- Python 3.6.0 beta 3\n- Python 3.6.0 beta 2\n- Python 3.6.0 beta 1\n- Python 3.6.0 alpha 4\n- Python 3.6.0 alpha 3\n- Python 3.6.0 alpha 2\n- Python 3.6.0 alpha 1\n- Python 3.5.5 final\n- Python 3.5.5 release candidate 1\n- Python 3.5.4 final\n- Python 3.5.4 release candidate 1\n- Python 3.5.3 final\n- Python 3.5.3 release candidate 1\n- Python 3.5.2 final\n- Python 3.5.2 release candidate 1\n- Python 3.5.1 final\n- Python 3.5.1 release candidate 1\n- Python 3.5.0 final\n- Python 3.5.0 release candidate 4\n- Python 3.5.0 release candidate 3\n- Python 3.5.0 release candidate 2\n- Python 3.5.0 release candidate 1\n- Python 3.5.0 beta 4\n- Python 3.5.0 beta 3\n- Python 3.5.0 beta 2\n- Python 3.5.0 beta 1\n- Python 3.5.0 alpha 4\n- Python 3.5.0 alpha 3\n- Python 3.5.0 alpha 2\n- Python 3.5.0 alpha 1", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3149}
+{"url": "https://docs.python.org/3/c-api/mapping.html", "title": "Mapping Protocol", "content": "Mapping Protocol\u00b6\nSee also PyObject_GetItem()\n, PyObject_SetItem()\nand\nPyObject_DelItem()\n.\n-\nint PyMapping_Check(PyObject *o)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nif the object provides the mapping protocol or supports slicing, and\n0\notherwise. Note that it returns\n1\nfor Python classes with a\n__getitem__()\nmethod, since in general it is impossible to determine what type of keys the class supports. This function always succeeds.\n-\nPy_ssize_t PyMapping_Size(PyObject *o)\u00b6\n-\nPy_ssize_t PyMapping_Length(PyObject *o)\u00b6\n- Part of the Stable ABI.\nReturns the number of keys in object o on success, and\n-1\non failure. 
This is equivalent to the Python expression\nlen(o)\n.\n-\nPyObject *PyMapping_GetItemString(PyObject *o, const char *key)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nThis is the same as\nPyObject_GetItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\n-\nint PyMapping_GetOptionalItem(PyObject *obj, PyObject *key, PyObject **result)\u00b6\n- Part of the Stable ABI since version 3.13.\nVariant of\nPyObject_GetItem()\nwhich doesn\u2019t raise\nKeyError\nif the key is not found.If the key is found, return\n1\nand set *result to a new strong reference to the corresponding value. If the key is not found, return\n0\nand set *result to\nNULL\n; the\nKeyError\nis silenced. If an error other than\nKeyError\nis raised, return\n-1\nand set *result to\nNULL\n.Added in version 3.13.\n-\nint PyMapping_GetOptionalItemString(PyObject *obj, const char *key, PyObject **result)\u00b6\n- Part of the Stable ABI since version 3.13.\nThis is the same as\nPyMapping_GetOptionalItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.Added in version 3.13.\n-\nint PyMapping_SetItemString(PyObject *o, const char *key, PyObject *v)\u00b6\n- Part of the Stable ABI.\nThis is the same as\nPyObject_SetItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\n-\nint PyMapping_DelItem(PyObject *o, PyObject *key)\u00b6\nThis is an alias of\nPyObject_DelItem()\n.\n-\nint PyMapping_DelItemString(PyObject *o, const char *key)\u00b6\nThis is the same as\nPyObject_DelItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\n-\nint PyMapping_HasKeyWithError(PyObject *o, PyObject *key)\u00b6\n- Part of the Stable ABI since version 3.13.\nReturn\n1\nif the mapping object has the key key and\n0\notherwise. This is equivalent to the Python expression\nkey in o\n. 
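The found/missing/error outcomes documented for PyMapping_GetOptionalItem() can be sketched at the Python level. This is only an illustration of the documented contract, not the C implementation; `get_optional_item` is a hypothetical helper name:

```python
def get_optional_item(obj, key):
    # Mirrors the documented contract: a missing key is silenced and
    # reported via the status flag, while any other exception propagates
    # (where the C function would instead return -1 with the error set).
    try:
        return 1, obj[key]
    except KeyError:
        return 0, None

print(get_optional_item({"spam": 42}, "spam"))  # (1, 42)
print(get_optional_item({"spam": 42}, "eggs"))  # (0, None)
```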
On failure, return\n-1\n.Added in version 3.13.\n-\nint PyMapping_HasKeyStringWithError(PyObject *o, const char *key)\u00b6\n- Part of the Stable ABI since version 3.13.\nThis is the same as\nPyMapping_HasKeyWithError()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.Added in version 3.13.\n-\nint PyMapping_HasKey(PyObject *o, PyObject *key)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nif the mapping object has the key key and\n0\notherwise. This is equivalent to the Python expression\nkey in o\n. This function always succeeds.Note\nExceptions which occur when this calls the\n__getitem__()\nmethod are silently ignored. For proper error handling, use\nPyMapping_HasKeyWithError()\n,\nPyMapping_GetOptionalItem()\nor\nPyObject_GetItem()\ninstead.\n-\nint PyMapping_HasKeyString(PyObject *o, const char *key)\u00b6\n- Part of the Stable ABI.\nThis is the same as\nPyMapping_HasKey()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.Note\nExceptions that occur when this calls the\n__getitem__()\nmethod or while creating the temporary\nstr\nobject are silently ignored. For proper error handling, use\nPyMapping_HasKeyStringWithError()\n,\nPyMapping_GetOptionalItemString()\nor\nPyMapping_GetItemString()\ninstead.\n-\nPyObject *PyMapping_Keys(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nOn success, return a list of the keys in object o. On failure, return\nNULL\n.Changed in version 3.7: Previously, the function returned a list or a tuple.\n-\nPyObject *PyMapping_Values(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nOn success, return a list of the values in object o. On failure, return\nNULL\n.Changed in version 3.7: Previously, the function returned a list or a tuple.\n-\nPyObject *PyMapping_Items(PyObject *o)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nOn success, return a list of the items in object o, where each item is a tuple containing a key-value pair. On failure, return\nNULL\n.Changed in version 3.7: Previously, the function returned a list or a tuple.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1124}
+{"url": "https://docs.python.org/3/reference/lexical_analysis.html", "title": "Lexical analysis", "content": "2. Lexical analysis\u00b6\nA Python program is read by a parser. Input to the parser is a stream of tokens, generated by the lexical analyzer (also known as the tokenizer). This chapter describes how the lexical analyzer produces these tokens.\nThe lexical analyzer determines the program text\u2019s encoding\n(UTF-8 by default), and decodes the text into\nsource characters.\nIf the text cannot be decoded, a SyntaxError\nis raised.\nNext, the lexical analyzer uses the source characters to generate a stream of tokens. The type of a generated token generally depends on the next source character to be processed. Similarly, other special behavior of the analyzer depends on the first source character that hasn\u2019t yet been processed. The following table gives a quick summary of these source characters, with links to sections that contain more information.\nCharacter |\nNext token (or other relevant documentation) |\n|---|---|\n2.1. Line structure\u00b6\nA Python program is divided into a number of logical lines.\n2.1.1. Logical lines\u00b6\nThe end of a logical line is represented by the token NEWLINE\n.\nStatements cannot cross logical line boundaries except where NEWLINE\nis allowed by the syntax (e.g., between statements in compound statements).\nA logical line is constructed from one or more physical lines by following\nthe explicit or implicit\nline joining rules.\n2.1.2. 
Physical lines\u00b6\nA physical line is a sequence of characters terminated by one of the following end-of-line sequences:\nthe Unix form using ASCII LF (linefeed),\nthe Windows form using the ASCII sequence CR LF (return followed by linefeed),\nthe \u2018Classic Mac OS\u2019 form using the ASCII CR (return) character.\nRegardless of platform, each of these sequences is replaced by a single ASCII LF (linefeed) character. (This is done even inside string literals.) Each line can use any of the sequences; they do not need to be consistent within a file.\nThe end of input also serves as an implicit terminator for the final physical line.\nFormally:\nnewline: <ASCII LF> | <ASCII CR> <ASCII LF> | <ASCII CR>\n2.1.4. Encoding declarations\u00b6\nIf a comment in the first or second line of the Python script matches the\nregular expression coding[=:]\\s*([-\\w.]+)\n, this comment is processed as an\nencoding declaration; the first group of this expression names the encoding of\nthe source code file. The encoding declaration must appear on a line of its\nown. If it is the second line, the first line must also be a comment-only line.\nThe recommended forms of an encoding expression are\n# -*- coding: <encoding-name> -*-\nwhich is recognized also by GNU Emacs, and\n# vim:fileencoding=<encoding-name>\nwhich is recognized by Bram Moolenaar\u2019s VIM.\nIf no encoding declaration is found, the default encoding is UTF-8. If the\nimplicit or explicit encoding of a file is UTF-8, an initial UTF-8 byte-order\nmark (b'\\xef\\xbb\\xbf'\n) is ignored rather than being a syntax error.\nIf an encoding is declared, the encoding name must be recognized by Python (see Standard Encodings). The encoding is used for all lexical analysis, including string literals, comments and identifiers.\nAll lexical analysis, including string literals, comments and identifiers, works on Unicode text decoded using the source encoding. Any Unicode code point, except the NUL control character, can appear in Python source.\nsource_character: <any Unicode code point, except NUL>\n2.1.5. 
Explicit line joining\u00b6\nTwo or more physical lines may be joined into logical lines using backslash\ncharacters (\\\n), as follows: when a physical line ends in a backslash that is\nnot part of a string literal or comment, it is joined with the following forming\na single logical line, deleting the backslash and the following end-of-line\ncharacter. For example:\nif 1900 < year < 2100 and 1 <= month <= 12 \\\n   and 1 <= day <= 31 and 0 <= hour < 24 \\\n   and 0 <= minute < 60 and 0 <= second < 60:   # Looks like a valid date\n    return 1\nA line ending in a backslash cannot carry a comment. A backslash does not continue a comment. A backslash does not continue a token except for string literals (i.e., tokens other than string literals cannot be split across physical lines using a backslash). A backslash is illegal elsewhere on a line outside a string literal.\n2.1.6. Implicit line joining\u00b6\nExpressions in parentheses, square brackets or curly braces can be split over more than one physical line without using backslashes. For example:\nmonth_names = ['Januari', 'Februari', 'Maart',    # These are the\n               'April', 'Mei', 'Juni',            # Dutch names\n               'Juli', 'Augustus', 'September',   # for the months\n               'Oktober', 'November', 'December'] # of the year\nImplicitly continued lines can carry comments. The indentation of the continuation lines is not important. Blank continuation lines are allowed. There is no NEWLINE token between implicit continuation lines. Implicitly continued lines can also occur within triple-quoted strings (see below); in that case they cannot carry comments.\n2.1.7. 
Blank lines\u00b6\nA logical line that contains only spaces, tabs, formfeeds and possibly a\ncomment, is ignored (i.e., no NEWLINE\ntoken is generated).\nDuring interactive input of statements, handling of a blank line may differ\ndepending on the implementation of the read-eval-print loop.\nIn the standard interactive interpreter, an entirely blank logical line (that\nis, one containing not even whitespace or a comment) terminates a multi-line\nstatement.\n2.1.8. Indentation\u00b6\nLeading whitespace (spaces and tabs) at the beginning of a logical line is used to compute the indentation level of the line, which in turn is used to determine the grouping of statements.\nTabs are replaced (from left to right) by one to eight spaces such that the total number of characters up to and including the replacement is a multiple of eight (this is intended to be the same rule as used by Unix). The total number of spaces preceding the first non-blank character then determines the line\u2019s indentation. Indentation cannot be split over multiple physical lines using backslashes; the whitespace up to the first backslash determines the indentation.\nIndentation is rejected as inconsistent if a source file mixes tabs and spaces\nin a way that makes the meaning dependent on the worth of a tab in spaces; a\nTabError\nis raised in that case.\nCross-platform compatibility note: because of the nature of text editors on non-UNIX platforms, it is unwise to use a mixture of spaces and tabs for the indentation in a single source file. It should also be noted that different platforms may explicitly limit the maximum indentation level.\nA formfeed character may be present at the start of the line; it will be ignored for the indentation calculations above. 
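The tab-expansion rule above (each tab advances the column to the next multiple of eight) can be sketched as follows; `indent_width` is an illustrative helper, not part of the tokenizer's API:

```python
def indent_width(line, tabsize=8):
    # Compute a line's indentation per the documented rule: tabs advance
    # to the next multiple of `tabsize` columns, spaces count as one,
    # and counting stops at the first non-blank character.
    width = 0
    for ch in line:
        if ch == ' ':
            width += 1
        elif ch == '\t':
            width += tabsize - (width % tabsize)
        else:
            break
    return width

print(indent_width('\tx'))    # 8
print(indent_width('  \tx'))  # 8 (two spaces, then the tab advances to column 8)
print(indent_width('    x'))  # 4
```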
Formfeed characters occurring elsewhere in the leading whitespace have an undefined effect (for instance, they may reset the space count to zero).\nThe indentation levels of consecutive lines are used to generate\nINDENT\nand DEDENT\ntokens, using a stack,\nas follows.\nBefore the first line of the file is read, a single zero is pushed on the stack;\nthis will never be popped off again. The numbers pushed on the stack will\nalways be strictly increasing from bottom to top. At the beginning of each\nlogical line, the line\u2019s indentation level is compared to the top of the stack.\nIf it is equal, nothing happens. If it is larger, it is pushed on the stack, and\none INDENT\ntoken is generated. If it is smaller, it must be one of the\nnumbers occurring on the stack; all numbers on the stack that are larger are\npopped off, and for each number popped off a DEDENT\ntoken is generated.\nAt the end of the file, a DEDENT\ntoken is generated for each number\nremaining on the stack that is larger than zero.\nHere is an example of a correctly (though confusingly) indented piece of Python code:\ndef perm(l):\n        # Compute the list of all permutations of l\n    if len(l) <= 1:\n        return [l]\n    r = []\n    for i in range(len(l)):\n        s = l[:i] + l[i+1:]\n        p = perm(s)\n        for x in p:\n            r.append(l[i:i+1] + x)\n    return r\nThe following example shows various indentation errors:\ndef perm(l):                       # error: first line indented\nfor i in range(len(l)):            # error: not indented\n    s = l[:i] + l[i+1:]\n        p = perm(l[:i] + l[i+1:])   # error: unexpected indent\n        for x in p:\n                r.append(l[i:i+1] + x)\n            return r                # error: inconsistent dedent\n(Actually, the first three errors are detected by the parser; only the last\nerror is found by the lexical analyzer \u2014 the indentation of return r\ndoes\nnot match a level popped off the stack.)\n2.1.9. 
Whitespace between tokens\u00b6\nExcept at the beginning of a logical line or in string literals, the whitespace characters space, tab and formfeed can be used interchangeably to separate tokens:\nwhitespace: ' ' | tab | formfeed\nWhitespace is needed between two tokens only if their concatenation\ncould otherwise be interpreted as a different token. For example, ab\nis one\ntoken, but a b\nis two tokens. However, +a\nand + a\nboth produce\ntwo tokens, +\nand a\n, as +a\nis not a valid token.\n2.1.10. End marker\u00b6\nAt the end of non-interactive input, the lexical analyzer generates an\nENDMARKER\ntoken.\n2.2. Other tokens\u00b6\nBesides NEWLINE\n, INDENT\nand DEDENT\n,\nthe following categories of tokens exist:\nidentifiers and keywords (NAME\n), literals (such as\nNUMBER\nand STRING\n), and other symbols\n(operators and delimiters, OP\n).\nWhitespace characters (other than logical line terminators, discussed earlier)\nare not tokens, but serve to delimit tokens.\nWhere ambiguity exists, a token comprises the longest possible string that\nforms a legal token, when read from left to right.\n2.3. Names (identifiers and keywords)\u00b6\nNAME\ntokens represent identifiers, keywords, and\nsoft keywords.\nNames are composed of the following characters:\nuppercase and lowercase letters (\nA-Z\nand\na-z\n),the underscore (\n_\n),digits (\n0\nthrough\n9\n), which cannot appear as the first character, andnon-ASCII characters. Valid names may only contain \u201cletter-like\u201d and \u201cdigit-like\u201d characters; see Non-ASCII characters in names for details.\nNames must contain at least one character, but have no upper length limit. 
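The name rules above can be checked at run time with str.isidentifier() and the keyword module; a brief illustration:

```python
import keyword

# str.isidentifier() applies the name grammar described in this section.
print('spam'.isidentifier())    # True
print('_x1'.isidentifier())     # True
print('蛇'.isidentifier())       # True: non-ASCII letter-like character
print('1spam'.isidentifier())   # False: a digit cannot appear first
print('a-b'.isidentifier())     # False: '-' is not a name character

# Keywords also match the name grammar; keyword.iskeyword() tells them apart.
print(keyword.iskeyword('lambda'))  # True
```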
Case is significant.\nFormally, names are described by the following lexical definitions:\nNAME:name_start\nname_continue\n* name_start: \"a\"...\"z\" | \"A\"...\"Z\" | \"_\" | <any character in xid_start> name_continue: name_start | \"0\"...\"9\" | <any character in xid_continue> identifier: NAME\nNote that not all names matched by this grammar are valid; see Non-ASCII characters in names for details.\n2.3.1. Keywords\u00b6\nThe following names are used as reserved words, or keywords of the language, and cannot be used as ordinary identifiers. They must be spelled exactly as written here:\nFalse await else import pass\nNone break except in raise\nTrue class finally is return\nand continue for lambda try\nas def from nonlocal while\nassert del global not with\nasync elif if or yield\n2.3.2. Soft Keywords\u00b6\nAdded in version 3.10.\nSome names are only reserved under specific contexts. These are known as soft keywords:\nmatch\ncase\ntype\n_\nThese syntactically act as keywords in their specific contexts, but this distinction is done at the parser level, not when tokenizing.\nAs soft keywords, their use in the grammar is possible while still preserving compatibility with existing code that uses these names as identifier names.\nChanged in version 3.12: type\nis now a soft keyword.\n2.3.3. Reserved classes of identifiers\u00b6\nCertain classes of identifiers (besides keywords) have special meanings. These classes are identified by the patterns of leading and trailing underscore characters:\n_*\nNot imported by\nfrom module import *\n._\nIn a\ncase\npattern within a\nmatch\nstatement,\n_\nis a soft keyword that denotes a wildcard.Separately, the interactive interpreter makes the result of the last evaluation available in the variable\n_\n. (It is stored in the\nbuiltins\nmodule, alongside built-in functions like\nprint\n.)Elsewhere,\n_\nis a regular identifier. 
It is often used to name \u201cspecial\u201d items, but it is not special to Python itself.Note\nThe name\n_\nis often used in conjunction with internationalization; refer to the documentation for the\ngettext\nmodule for more information on this convention.It is also commonly used for unused variables.\n__*__\nSystem-defined names, informally known as \u201cdunder\u201d names. These names are defined by the interpreter and its implementation (including the standard library). Current system names are discussed in the Special method names section and elsewhere. More will likely be defined in future versions of Python. Any use of\n__*__\nnames, in any context, that does not follow explicitly documented use, is subject to breakage without warning.\n__*\nClass-private names. Names in this category, when used within the context of a class definition, are re-written to use a mangled form to help avoid name clashes between \u201cprivate\u201d attributes of base and derived classes. See section Identifiers (Names).\n2.3.4. Non-ASCII characters in names\u00b6\nNames that contain non-ASCII characters need additional normalization\nand validation beyond the rules and grammar explained\nabove.\nFor example, \u0159_1\n, \u86c7\n, or \u0938\u093e\u0901\u092a\nare valid names, but r\u30302\n,\n\u20ac\n, or \ud83d\udc0d\nare not.\nThis section explains the exact rules.\nAll names are converted into the normalization form NFKC while parsing.\nThis means that, for example, some typographic variants of characters are\nconverted to their \u201cbasic\u201d form. 
For example, \ufb01\u207f\u2090\u02e1\u1d62\u1dbb\u2090\u1d57\u1d62\u1d52\u2099\nnormalizes to\nfinalization\n, so Python treats them as the same name:\n>>> \ufb01\u207f\u2090\u02e1\u1d62\u1dbb\u2090\u1d57\u1d62\u1d52\u2099 = 3\n>>> finalization\n3\nNote\nNormalization is done at the lexical level only.\nRun-time functions that take names as strings generally do not normalize\ntheir arguments.\nFor example, the variable defined above is accessible at run time in the\nglobals()\ndictionary as globals()[\"finalization\"]\nbut not\nglobals()[\"\ufb01\u207f\u2090\u02e1\u1d62\u1dbb\u2090\u1d57\u1d62\u1d52\u2099\"]\n.\nSimilarly to how ASCII-only names must contain only letters, digits and\nthe underscore, and cannot start with a digit, a valid name must\nstart with a character in the \u201cletter-like\u201d set xid_start\n,\nand the remaining characters must be in the \u201cletter- and digit-like\u201d set\nxid_continue\n.\nThese sets are based on the XID_Start and XID_Continue sets as defined by the\nUnicode standard annex UAX-31.\nPython\u2019s xid_start\nadditionally includes the underscore (_\n).\nNote that Python does not necessarily conform to UAX-31.\nA non-normative listing of characters in the XID_Start and XID_Continue\nsets as defined by Unicode is available in the DerivedCoreProperties.txt\nfile in the Unicode Character Database.\nFor reference, the construction rules for the xid_*\nsets are given below.\nThe set id_start\nis defined as the union of:\nUnicode category\nLu\n- uppercase letters (includes\nA\nto\nZ\n)\nUnicode category\nLl\n- lowercase letters (includes\na\nto\nz\n)\nUnicode category\nLt\n- titlecase letters\nUnicode category\nLm\n- modifier letters\nUnicode category\nLo\n- other letters\nUnicode category\nNl\n- letter numbers\n{\n\"_\"\n} - the underscore\n- an explicit set of characters in PropList.txt to support backwards compatibility\nThe set xid_start\nthen closes this set under NFKC normalization, by\nremoving all characters whose normalization is not of the form\nid_start 
id_continue*\n.\nThe set id_continue\nis defined as the union of:\nid_start\n(see above)\nUnicode category\nNd\n- decimal numbers (includes\n0\nto\n9\n)\nUnicode category\nPc\n- connector punctuations\nUnicode category\nMn\n- nonspacing marks\nUnicode category\nMc\n- spacing combining marks\n- another explicit set of characters in PropList.txt to support backwards compatibility\nAgain, xid_continue\ncloses this set under NFKC normalization.\nUnicode categories use the version of the Unicode Character Database as\nincluded in the unicodedata\nmodule.\n2.4. Literals\u00b6\nLiterals are notations for constant values of some built-in types.\nIn terms of lexical analysis, Python has string, bytes and numeric literals.\nOther \u201cliterals\u201d are lexically denoted using keywords\n(None\n, True\n, False\n) and the special\nellipsis token (...\n).\n2.5. String and Bytes literals\u00b6\nString literals are text enclosed in single quotes ('\n) or double\nquotes (\"\n). For example:\n\"spam\"\n'eggs'\nThe quote used to start the literal also terminates it, so a string literal can only contain the other quote (except with escape sequences, see below). For example:\n'Say \"Hello\", please.'\n\"Don't do that!\"\nExcept for this limitation, the choice of quote character ('\nor \"\n)\ndoes not affect how the literal is parsed.\nInside a string literal, the backslash (\\\n) character introduces an\nescape sequence, which has special meaning depending on the character\nafter the backslash.\nFor example, \\\"\ndenotes the double quote character, and does not end\nthe string:\n>>> print(\"Say \\\"Hello\\\" to everyone!\")\nSay \"Hello\" to everyone!\nSee escape sequences below for a full list of such sequences, and more details.\n2.5.1. Triple-quoted strings\u00b6\nStrings can also be enclosed in matching groups of three single or double quotes. 
These are generally referred to as triple-quoted strings:\n\"\"\"This is a triple-quoted string.\"\"\"\nIn triple-quoted literals, unescaped quotes are allowed (and are\nretained), except that three unescaped quotes in a row terminate the literal,\nif they are of the same kind ('\nor \"\n) used at the start:\n\"\"\"This string has \"quotes\" inside.\"\"\"\nUnescaped newlines are also allowed and retained:\n'''This triple-quoted string\ncontinues on the next line.'''\n2.5.2. String prefixes\u00b6\nString literals can have an optional prefix that influences how the content of the literal is parsed, for example:\nb\"data\"\nf'{result=}'\nThe allowed prefixes are:\nr\n: Raw stringf\n: Formatted string literal (\u201cf-string\u201d)t\n: Template string literal (\u201ct-string\u201d)u\n: No effect (allowed for backwards compatibility)\nSee the linked sections for details on each type.\nPrefixes are case-insensitive (for example, \u2018B\n\u2019 works the same as \u2018b\n\u2019).\nThe \u2018r\n\u2019 prefix can be combined with \u2018f\n\u2019, \u2018t\n\u2019 or \u2018b\n\u2019, so \u2018fr\n\u2019,\n\u2018rf\n\u2019, \u2018tr\n\u2019, \u2018rt\n\u2019, \u2018br\n\u2019, and \u2018rb\n\u2019 are also valid prefixes.\nAdded in version 3.3: The 'rb'\nprefix of raw bytes literals has been added as a synonym\nof 'br'\n.\nSupport for the unicode legacy literal (u'value'\n) was reintroduced\nto simplify the maintenance of dual Python 2.x and 3.x codebases.\nSee PEP 414 for more information.\n2.5.3. 
Formal grammar\u00b6\nString literals, except \u201cf-strings\u201d and \u201ct-strings\u201d, are described by the following lexical definitions.\nThese definitions use negative lookaheads (!\n)\nto indicate that an ending quote ends the literal.\nSTRING: [stringprefix\n] (stringcontent\n) stringprefix: <(\"r\" | \"u\" | \"b\" | \"br\" | \"rb\"), case-insensitive> stringcontent: | \"'''\" ( !\"'''\"longstringitem\n)* \"'''\" | '\"\"\"' ( !'\"\"\"'longstringitem\n)* '\"\"\"' | \"'\" ( !\"'\"stringitem\n)* \"'\" | '\"' ( !'\"'stringitem\n)* '\"' stringitem:\nstringchar\n|\nstringescapeseq\nstringchar: <any source_character, except \"\\\", newline or the quote> longstringitem:\nstringitem\n| newline stringescapeseq: \"\\\" <any source_character>\nNote that as in all lexical definitions, whitespace is significant. In particular, the prefix (if any) must be immediately followed by the starting quote.\n2.5.4. Escape sequences\u00b6\nUnless an \u2018r\n\u2019 or \u2018R\n\u2019 prefix is present, escape sequences in string and\nbytes literals are interpreted according to rules similar to those used by\nStandard C. The recognized escape sequences are:\nEscape Sequence |\nMeaning |\n|---|---|\n|\n\\newline |\nIgnored end of line |\n|\n\\\\ |\nBackslash (\\) |\n|\n\\' |\nSingle quote (') |\n|\n\\\" |\nDouble quote (\") |\n|\n\\a |\nASCII Bell (BEL) |\n|\n\\b |\nASCII Backspace (BS) |\n|\n\\f |\nASCII Formfeed (FF) |\n|\n\\n |\nASCII Linefeed (LF) |\n|\n\\r |\nASCII Carriage Return (CR) |\n|\n\\t |\nASCII Horizontal Tab (TAB) |\n|\n\\v |\nASCII Vertical Tab (VT) |\n|\n\\ooo |\nOctal character |\n|\n\\xhh |\nHexadecimal character |\n|\n\\N{name} |\nNamed Unicode character |\n|\n\\uxxxx |\nHexadecimal Unicode character (four digits) |\n|\n\\Uxxxxxxxx |\nHexadecimal Unicode character (eight digits) |\n2.5.4.1. Ignored end of line\u00b6\nA backslash can be added at the end of a line to ignore the newline:\n>>> 'This string will not include \\\n... backslashes or newline characters.'\n'This string will not include backslashes or newline characters.'\nThe same result can be achieved using triple-quoted strings, or parentheses and string literal concatenation.\n2.5.4.2. Escaped characters\u00b6\nTo include a backslash in a non-raw Python string\nliteral, it must be doubled. 
The \\\\\nescape sequence denotes a single\nbackslash character:\n>>> print('C:\\\\Program Files')\nC:\\Program Files\nSimilarly, the \\'\nand \\\"\nsequences denote the single and double\nquote character, respectively:\n>>> print('\\' and \\\"')\n' and \"\n2.5.4.3. Octal character\u00b6\nThe sequence \\ooo\ndenotes a character with the octal (base 8)\nvalue ooo:\n>>> '\\120'\n'P'\nUp to three octal digits (0 through 7) are accepted.\nIn a bytes literal, character means a byte with the given value. In a string literal, it means a Unicode character with the given value.\nChanged in version 3.11: Octal escapes with value larger than 0o377\n(255) produce a\nDeprecationWarning\n.\nChanged in version 3.12: Octal escapes with value larger than 0o377\n(255) produce a\nSyntaxWarning\n.\nIn a future Python version they will raise a SyntaxError\n.\n2.5.4.4. Hexadecimal character\u00b6\nThe sequence \\xhh\ndenotes a character with the hex (base 16)\nvalue hh:\n>>> '\\x50'\n'P'\nUnlike in Standard C, exactly two hex digits are required.\nIn a bytes literal, character means a byte with the given value. In a string literal, it means a Unicode character with the given value.\n2.5.4.5. Named Unicode character\u00b6\nThe sequence \\N{name}\ndenotes a Unicode character\nwith the given name:\n>>> '\\N{LATIN CAPITAL LETTER P}'\n'P'\n>>> '\\N{SNAKE}'\n'\ud83d\udc0d'\nThis sequence cannot appear in bytes literals.\nChanged in version 3.3: Support for name aliases has been added.\n2.5.4.6. Hexadecimal Unicode characters\u00b6\nThese sequences \\uxxxx\nand \\Uxxxxxxxx\ndenote the\nUnicode character with the given hex (base 16) value.\nExactly four digits are required for \\u\n; exactly eight digits are\nrequired for \\U\n.\nThe latter can encode any Unicode character.\n>>> '\\u1234'\n'\u1234'\n>>> '\\U0001f40d'\n'\ud83d\udc0d'\nThese sequences cannot appear in bytes literals.\n2.5.4.7. 
Unrecognized escape sequences\u00b6\nUnlike in Standard C, all unrecognized escape sequences are left in the string unchanged, that is, the backslash is left in the result:\n>>> print('\\q')\n\\q\n>>> list('\\q')\n['\\\\', 'q']\nNote that for bytes literals, the escape sequences only recognized in string\nliterals (\\N...\n, \\u...\n, \\U...\n) fall into the category of\nunrecognized escapes.\nChanged in version 3.6: Unrecognized escape sequences produce a DeprecationWarning\n.\nChanged in version 3.12: Unrecognized escape sequences produce a SyntaxWarning\n.\nIn a future Python version they will raise a SyntaxError\n.\n2.5.5. Bytes literals\u00b6\nBytes literals are always prefixed with \u2018b\n\u2019 or \u2018B\n\u2019; they produce an\ninstance of the bytes\ntype instead of the str\ntype.\nThey may only contain ASCII characters; bytes with a numeric value of 128\nor greater must be expressed with escape sequences (typically\nHexadecimal character or Octal character):\n>>> b'\\x89PNG\\r\\n\\x1a\\n'\nb'\\x89PNG\\r\\n\\x1a\\n'\n>>> list(b'\\x89PNG\\r\\n\\x1a\\n')\n[137, 80, 78, 71, 13, 10, 26, 10]\nSimilarly, a zero byte must be expressed using an escape sequence (typically\n\\0\nor \\x00\n).\n2.5.6. Raw string literals\u00b6\nBoth string and bytes literals may optionally be prefixed with a letter \u2018r\n\u2019\nor \u2018R\n\u2019; such constructs are called raw string literals\nand raw bytes literals respectively and treat backslashes as\nliteral characters.\nAs a result, in raw string literals, escape sequences\nare not treated specially:\n>>> r'\\d{4}-\\d{2}-\\d{2}'\n'\\\\d{4}-\\\\d{2}-\\\\d{2}'\nEven in a raw literal, quotes can be escaped with a backslash, but the\nbackslash remains in the result; for example, r\"\\\"\"\nis a valid string\nliteral consisting of two characters: a backslash and a double quote; r\"\\\"\nis not a valid string literal (even a raw string cannot end in an odd number of\nbackslashes). 
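A brief sketch checking the escape, bytes-literal, and raw-literal rules above (reusing the PNG-signature and date-pattern examples from the text):

```python
# Octal, hex, named, and \u escapes can all denote the same character.
assert '\120' == '\x50' == '\N{LATIN CAPITAL LETTER P}' == '\u0050' == 'P'

# In bytes literals, escapes denote byte values, not Unicode code points.
png_sig = b'\x89PNG\r\n\x1a\n'
assert list(png_sig) == [137, 80, 78, 71, 13, 10, 26, 10]
assert png_sig[0] == 0x89

# In raw literals the backslash stays in the result, so r'\d' is two chars.
pattern = r'\d{4}-\d{2}-\d{2}'
assert len(r'\d') == 2
assert pattern == '\\d{4}-\\d{2}-\\d{2}'
```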
Specifically, a raw literal cannot end in a single backslash\n(since the backslash would escape the following quote character). Note also\nthat a single backslash followed by a newline is interpreted as those two\ncharacters as part of the literal, not as a line continuation.\n2.5.7. f-strings\u00b6\nAdded in version 3.6.\nChanged in version 3.8: Added the debug specifier (=\n)\nChanged in version 3.12: Many restrictions on expressions within f-strings have been removed. Notably, nested strings, comments, and backslashes are now permitted.\nA formatted string literal or f-string is a string literal\nthat is prefixed with \u2018f\n\u2019 or \u2018F\n\u2019.\nUnlike other string literals, f-strings do not have a constant value.\nThey may contain replacement fields delimited by curly braces {}\n.\nReplacement fields contain expressions which are evaluated at run time.\nFor example:\n>>> who = 'nobody'\n>>> nationality = 'Spanish'\n>>> f'{who.title()} expects the {nationality} Inquisition!'\n'Nobody expects the Spanish Inquisition!'\nAny doubled curly braces ({{\nor }}\n) outside replacement fields\nare replaced with the corresponding single curly brace:\n>>> print(f'{{...}}')\n{...}\nOther characters outside replacement fields are treated like in ordinary string literals. This means that escape sequences are decoded (except when a literal is also marked as a raw string), and newlines are possible in triple-quoted f-strings:\n>>> name = 'Galahad'\n>>> favorite_color = 'blue'\n>>> print(f'{name}:\\t{favorite_color}')\nGalahad: blue\n>>> print(rf\"C:\\Users\\{name}\")\nC:\\Users\\Galahad\n>>> print(f'''Three shall be the number of the counting\n... 
and the number of the counting shall be three.''')\nThree shall be the number of the counting\nand the number of the counting shall be three.\nExpressions in formatted string literals are treated like regular\nPython expressions.\nEach expression is evaluated in the context where the formatted string literal\nappears, in order from left to right.\nAn empty expression is not allowed, and both lambda\nand\nassignment expressions :=\nmust be surrounded by explicit parentheses:\n>>> f'{(half := 1/2)}, {half * 42}'\n'0.5, 21.0'\nReusing the outer f-string quoting type inside a replacement field is permitted:\n>>> a = dict(x=2)\n>>> f\"abc {a[\"x\"]} def\"\n'abc 2 def'\nBackslashes are also allowed in replacement fields and are evaluated the same way as in any other context:\n>>> a = [\"a\", \"b\", \"c\"]\n>>> print(f\"List a contains:\\n{\"\\n\".join(a)}\")\nList a contains:\na\nb\nc\nIt is possible to nest f-strings:\n>>> name = 'world'\n>>> f'Repeated:{f' hello {name}' * 3}'\n'Repeated: hello world hello world hello world'\nPortable Python programs should not use more than 5 levels of nesting.\nCPython implementation detail: CPython does not limit nesting of f-strings.\nReplacement expressions can contain newlines in both single-quoted and\ntriple-quoted f-strings and they can contain comments.\nEverything that comes after a #\ninside a replacement field\nis a comment (even closing braces and quotes).\nThis means that replacement fields with comments must be closed in a\ndifferent line:\n>>> a = 2\n>>> f\"abc{a # This comment }\" continues until the end of the line\n... 
+ 3}\"\n'abc5'\nAfter the expression, replacement fields may optionally contain:\na debug specifier \u2013 an equal sign (\n=\n), optionally surrounded by whitespace on one or both sides;a conversion specifier \u2013\n!s\n,!r\nor!a\n; and/ora format specifier prefixed with a colon (\n:\n).\nSee the Standard Library section on f-strings for details on how these fields are evaluated.\nAs that section explains, format specifiers are passed as the second argument\nto the format()\nfunction to format a replacement field value.\nFor example, they can be used to specify a field width and padding characters\nusing the Format Specification Mini-Language:\n>>> number = 14.3\n>>> f'{number:20.7f}'\n' 14.3000000'\nTop-level format specifiers may include nested replacement fields:\n>>> field_size = 20\n>>> precision = 7\n>>> f'{number:{field_size}.{precision}f}'\n' 14.3000000'\nThese nested fields may include their own conversion fields and format specifiers:\n>>> number = 3\n>>> f'{number:{field_size}}'\n' 3'\n>>> f'{number:{field_size:05}}'\n'00000000000000000003'\nHowever, these nested fields may not include more deeply nested replacement fields.\nFormatted string literals cannot be used as docstrings, even if they do not include expressions:\n>>> def foo():\n... f\"Not a docstring\"\n...\n>>> print(foo.__doc__)\nNone\nSee also\nPEP 498 \u2013 Literal String Interpolation\nPEP 701 \u2013 Syntactic formalization of f-strings\nstr.format()\n, which uses a related format string mechanism.\n2.5.8. t-strings\u00b6\nAdded in version 3.14.\nA template string literal or t-string is a string literal\nthat is prefixed with \u2018t\n\u2019 or \u2018T\n\u2019.\nThese strings follow the same syntax rules as\nformatted string literals.\nFor differences in evaluation rules, see the\nStandard Library section on t-strings\n2.5.9. 
Formal grammar for f-strings\u00b6\nF-strings are handled partly by the lexical analyzer, which produces the\ntokens FSTRING_START\n, FSTRING_MIDDLE\nand FSTRING_END\n, and partly by the parser, which handles\nexpressions in the replacement field.\nThe exact way the work is split is a CPython implementation detail.\nCorrespondingly, the f-string grammar is a mix of lexical and syntactic definitions.\nWhitespace is significant in these situations:\nThere may be no whitespace in\nFSTRING_START\n(between the prefix and quote).Whitespace in\nFSTRING_MIDDLE\nis part of the literal string contents.In\nfstring_replacement_field\n, iff_debug_specifier\nis present, all whitespace after the opening brace until thef_debug_specifier\n, as well as whitespace immediately followingf_debug_specifier\n, is retained as part of the expression.CPython implementation detail: The expression is not handled in the tokenization phase; it is retrieved from the source code using locations of the\n{\ntoken and the token after=\n.\nThe FSTRING_MIDDLE\ndefinition uses\nnegative lookaheads (!\n)\nto indicate special characters (backslash, newline, {\n, }\n) and\nsequences (f_quote\n).\nfstring:FSTRING_START\nfstring_middle\n*FSTRING_END\nFSTRING_START:fstringprefix\n(\"'\" | '\"' | \"'''\" | '\"\"\"') FSTRING_END:f_quote\nfstringprefix: <(\"f\" | \"fr\" | \"rf\"), case-insensitive> f_debug_specifier: '=' f_quote: fstring_middle: |fstring_replacement_field\n|FSTRING_MIDDLE\nFSTRING_MIDDLE: | (!\"\\\" !newline\n!'{' !'}' !f_quote\n)source_character\n|stringescapeseq\n| \"{{\" | \"}}\" | fstring_replacement_field: | '{'f_expression\n[f_debug_specifier\n] [fstring_conversion\n] [fstring_full_format_spec\n] '}' fstring_conversion: | \"!\" (\"s\" | \"r\" | \"a\") fstring_full_format_spec: | ':'fstring_format_spec\n* fstring_format_spec: |FSTRING_MIDDLE\n|fstring_replacement_field\nf_expression: | ','.(conditional_expression\n| \"*\"or_expr\n)+ [\",\"] |yield_expression\nNote\nIn the above grammar 
snippet, the f_quote\nand FSTRING_MIDDLE\nrules\nare context-sensitive \u2013 they depend on the contents of FSTRING_START\nof the nearest enclosing fstring\n.\nConstructing a more traditional formal grammar from this template is left as an exercise for the reader.\nThe grammar for t-strings is identical to the one for f-strings, with t instead of f at the beginning of rule and token names and in the prefix.\ntstring: TSTRING_START tstring_middle* TSTRING_END \n2.6. Numeric literals\u00b6\nNUMBER\ntokens represent numeric literals, of which there are\nthree types: integers, floating-point numbers, and imaginary numbers.\nNUMBER:integer\n|floatnumber\n|imagnumber\nThe numeric value of a numeric literal is the same as if it were passed as a\nstring to the int\n, float\nor complex\nclass\nconstructor, respectively.\nNote that not all valid inputs for those constructors are also valid literals.\nNumeric literals do not include a sign; a phrase like -1\nis\nactually an expression composed of the unary operator \u2018-\n\u2019 and the literal\n1\n.\n2.6.1. Integer literals\u00b6\nInteger literals denote whole numbers. For example:\n7\n3\n2147483647\nThere is no limit for the length of integer literals apart from what can be stored in available memory:\n7922816251426433759354395033679228162514264337593543950336\nUnderscores can be used to group digits for enhanced readability, and are ignored for determining the numeric value of the literal. For example, the following literals are equivalent:\n100_000_000_000\n100000000000\n1_00_00_00_00_000\nUnderscores can only occur between digits.\nFor example, _123\n, 321_\n, and 123__321\nare not valid literals.\nIntegers can be specified in binary (base 2), octal (base 8), or hexadecimal\n(base 16) using the prefixes 0b\n, 0o\nand 0x\n, respectively.\nHexadecimal digits 10 through 15 are represented by letters A\n-F\n,\ncase-insensitive. 
For example:\n0b100110111\n0b_1110_0101\n0o177\n0o377\n0xdeadbeef\n0xDead_Beef\nAn underscore can follow the base specifier.\nFor example, 0x_1f\nis a valid literal, but 0_x1f\nand 0x__1f\nare\nnot.\nLeading zeros in a non-zero decimal number are not allowed.\nFor example, 0123\nis not a valid literal.\nThis is for disambiguation with C-style octal literals, which Python used\nbefore version 3.0.\nFormally, integer literals are described by the following lexical definitions:\ninteger:decinteger\n|bininteger\n|octinteger\n|hexinteger\n|zerointeger\ndecinteger:nonzerodigit\n([\"_\"]digit\n)* bininteger: \"0\" (\"b\" | \"B\") ([\"_\"]bindigit\n)+ octinteger: \"0\" (\"o\" | \"O\") ([\"_\"]octdigit\n)+ hexinteger: \"0\" (\"x\" | \"X\") ([\"_\"]hexdigit\n)+ zerointeger: \"0\"+ ([\"_\"] \"0\")* nonzerodigit: \"1\"...\"9\" digit: \"0\"...\"9\" bindigit: \"0\" | \"1\" octdigit: \"0\"...\"7\" hexdigit:digit\n| \"a\"...\"f\" | \"A\"...\"F\"\nChanged in version 3.6: Underscores are now allowed for grouping purposes in literals.\n2.6.2. Floating-point literals\u00b6\nFloating-point (float) literals, such as 3.14\nor 1.5\n, denote\napproximations of real numbers.\nThey consist of integer and fraction parts, each composed of decimal digits.\nThe parts are separated by a decimal point, .\n:\n2.71828\n4.0\nUnlike in integer literals, leading zeros are allowed.\nFor example, 077.010\nis legal, and denotes the same number as 77.01\n.\nAs in integer literals, single underscores may occur between digits to help readability:\n96_485.332_123\n3.14_15_93\nEither of these parts, but not both, can be empty. For example:\n10. 
# (equivalent to 10.0)\n.001 # (equivalent to 0.001)\nOptionally, the integer and fraction may be followed by an exponent:\nthe letter e\nor E\n, followed by an optional sign, +\nor -\n,\nand a number in the same format as the integer and fraction parts.\nThe e\nor E\nrepresents \u201ctimes ten raised to the power of\u201d:\n1.0e3 # (represents 1.0\u00d710\u00b3, or 1000.0)\n1.166e-5 # (represents 1.166\u00d710\u207b\u2075, or 0.00001166)\n6.02214076e+23 # (represents 6.02214076\u00d710\u00b2\u00b3, or 602214076000000000000000.)\nIn floats with only integer and exponent parts, the decimal point may be omitted:\n1e3 # (equivalent to 1.e3 and 1.0e3)\n0e0 # (equivalent to 0.)\nFormally, floating-point literals are described by the following lexical definitions:\nfloatnumber: |digitpart\n\".\" [digitpart\n] [exponent\n] | \".\"digitpart\n[exponent\n] |digitpart\nexponent\ndigitpart:digit\n([\"_\"]digit\n)* exponent: (\"e\" | \"E\") [\"+\" | \"-\"]digitpart\nChanged in version 3.6: Underscores are now allowed for grouping purposes in literals.\n2.6.3. Imaginary literals\u00b6\nPython has complex number objects, but no complex literals. 
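The integer and floating-point rules above can be spot-checked against the corresponding constructors; a short sketch:

```python
# Underscores group digits without changing the value.
assert 100_000_000_000 == 100000000000 == 1_00_00_00_00_000

# Base prefixes (an underscore may directly follow the prefix).
assert 0b100110111 == 311 and 0o377 == 255 and 0x_1f == 0x1f == 31

# Leading zeros are allowed in floats (but not in non-zero integers).
assert 077.010 == 77.01

# 'e'/'E' means "times ten raised to the power of".
assert 1e3 == 1.0e3 == 1000.0 and 1.166e-5 == 0.00001166

# Literals parse the same way the constructors parse strings.
assert int('0x_1f', 16) == 31 and float('1e3') == 1000.0
```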
Instead, imaginary literals denote complex numbers with a zero real part.\nFor example, in math, the complex number 3+4.2i is written\nas the real number 3 added to the imaginary number 4.2i.\nPython uses a similar syntax, except the imaginary unit is written as j\nrather than i:\n3+4.2j\nThis is an expression composed\nof the integer literal 3\n,\nthe operator \u2018+\n\u2019,\nand the imaginary literal 4.2j\n.\nSince these are three separate tokens, whitespace is allowed between them:\n3 + 4.2j\nNo whitespace is allowed within each token.\nIn particular, the j\nsuffix, may not be separated from the number\nbefore it.\nThe number before the j\nhas the same syntax as a floating-point literal.\nThus, the following are valid imaginary literals:\n4.2j\n3.14j\n10.j\n.001j\n1e100j\n3.14e-10j\n3.14_15_93j\nUnlike in a floating-point literal the decimal point can be omitted if the imaginary number only has an integer part. The number is still evaluated as a floating-point number, not an integer:\n10j\n0j\n1000000000000000000000000j # equivalent to 1e+24j\nThe j\nsuffix is case-insensitive.\nThat means you can use J\ninstead:\n3.14J # equivalent to 3.14j\nFormally, imaginary literals are described by the following lexical definition:\nimagnumber: (floatnumber\n|digitpart\n) (\"j\" | \"J\")\n2.7. 
Operators and delimiters\u00b6\nThe following grammar defines operator and delimiter tokens,\nthat is, the generic OP\ntoken type.\nA list of these tokens and their names\nis also available in the token\nmodule documentation.\nOP: | assignment_operator | bitwise_operator | comparison_operator | enclosing_delimiter | other_delimiter | arithmetic_operator | \"...\" | other_op assignment_operator: \"+=\" | \"-=\" | \"*=\" | \"**=\" | \"/=\" | \"//=\" | \"%=\" | \"&=\" | \"|=\" | \"^=\" | \"<<=\" | \">>=\" | \"@=\" | \":=\" bitwise_operator: \"&\" | \"|\" | \"^\" | \"~\" | \"<<\" | \">>\" comparison_operator: \"<=\" | \">=\" | \"<\" | \">\" | \"==\" | \"!=\" enclosing_delimiter: \"(\" | \")\" | \"[\" | \"]\" | \"{\" | \"}\" other_delimiter: \",\" | \":\" | \"!\" | \";\" | \"=\" | \"->\" arithmetic_operator: \"+\" | \"-\" | \"**\" | \"*\" | \"//\" | \"/\" | \"%\" other_op: \".\" | \"@\"\nNote\nGenerally, operators are used to combine expressions, while delimiters serve other purposes. However, there is no clear, formal distinction between the two categories.\nSome tokens can serve as either operators or delimiters, depending on usage.\nFor example, *\nis both the multiplication operator and a delimiter used\nfor sequence unpacking, and @\nis both the matrix multiplication and\na delimiter that introduces decorators.\nFor some tokens, the distinction is unclear.\nFor example, some people consider .\n, (\n, and )\nto be delimiters, while others\nsee the getattr()\noperator and the function call operator(s).\nSome of Python\u2019s operators, like and\n, or\n, and not in\n, use\nkeyword tokens rather than \u201csymbols\u201d (operator tokens).\nA sequence of three consecutive periods (...\n) has a special\nmeaning as an Ellipsis\nliteral.\n2.1.3. Comments\u00b6\nA comment starts with a hash character (\n#\n) that is not part of a string literal, and ends at the end of the physical line. 
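Since operators and delimiters all surface as the generic OP token, the stdlib tokenize and token modules can show the exact token names; a small sketch (the sample statement is illustrative):

```python
# Tokenize a small statement and pull out the OP tokens with their
# exact names, plus the comment token (ignored by the syntax itself).
import io
import token
import tokenize

toks = list(tokenize.tokenize(io.BytesIO(b'x += y * 2  # doubled\n').readline))
ops = [(token.tok_name[t.exact_type], t.string) for t in toks if t.type == token.OP]
assert ops == [('PLUSEQUAL', '+='), ('STAR', '*')]

# The comment is visible to the tokenizer even though the parser ignores it.
assert any(t.type == tokenize.COMMENT and t.string == '# doubled' for t in toks)
```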
A comment signifies the end of the logical line unless the implicit line joining rules are invoked. Comments are ignored by the syntax.", "code_snippets": ["\n", "\n", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " \\\n ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " \\\n ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n ", " ", "\n", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n", "\n ", "\n ", " ", " ", " ", "\n ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", "\n ", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", "\n", " ", "\n", " ", "\n", " ", "\n", " ", "\n", " ", "\n", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", "\n", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 9476} +{"url": "https://docs.python.org/3/library/asyncio-graph.html", "title": "Call Graph Introspection", "content": "Call Graph Introspection\u00b6\nSource code: Lib/asyncio/graph.py\nasyncio has powerful runtime call graph introspection utilities to trace the entire call 
graph of a running coroutine or task, or a suspended future. These utilities and the underlying machinery can be used from within a Python program or by external profilers and debuggers.\nAdded in version 3.14.\n- asyncio.print_call_graph(future=None, /, *, file=None, depth=1, limit=None)\u00b6\nPrint the async call graph for the current task or the provided\nTask\norFuture\n.This function prints entries starting from the top frame and going down towards the invocation point.\nThe function receives an optional future argument. If not passed, the current running task will be used.\nIf the function is called on the current task, the optional keyword-only depth argument can be used to skip the specified number of frames from top of the stack.\nIf the optional keyword-only limit argument is provided, each call stack in the resulting graph is truncated to include at most\nabs(limit)\nentries. If limit is positive, the entries left are the closest to the invocation point. If limit is negative, the topmost entries are left. If limit is omitted orNone\n, all entries are present. If limit is0\n, the call stack is not printed at all, only \u201cawaited by\u201d information is printed.If file is omitted or\nNone\n, the function will print tosys.stdout\n.Example:\nThe following Python code:\nimport asyncio async def test(): asyncio.print_call_graph() async def main(): async with asyncio.TaskGroup() as g: g.create_task(test(), name='test') asyncio.run(main())\nwill print:\n* Task(name='test', id=0x1039f0fe0) + Call stack: | File 't2.py', line 4, in async test() + Awaited by: * Task(name='Task-1', id=0x103a5e060) + Call stack: | File 'taskgroups.py', line 107, in async TaskGroup.__aexit__() | File 't2.py', line 7, in async main()\n- asyncio.format_call_graph(future=None, /, *, depth=1, limit=None)\u00b6\nLike\nprint_call_graph()\n, but returns a string. 
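A minimal sketch of capturing the formatted graph from inside a task, assuming Python 3.14+ (the version guard below makes it degrade to an empty string on older interpreters):

```python
# Hedged sketch: format the call graph of the current task.
import asyncio
import sys

async def leaf():
    # Called with no argument: uses the currently running task.
    return asyncio.format_call_graph()

async def main():
    async with asyncio.TaskGroup() as g:
        task = g.create_task(leaf(), name='leaf')
    return task.result()

graph = asyncio.run(main()) if sys.version_info >= (3, 14) else ''
if graph:
    assert 'leaf' in graph  # the task name appears in the printed graph
```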
If future isNone\nand there\u2019s no current task, the function returns an empty string.\n- asyncio.capture_call_graph(future=None, /, *, depth=1, limit=None)\u00b6\nCapture the async call graph for the current task or the provided\nTask\norFuture\n.The function receives an optional future argument. If not passed, the current running task will be used. If there\u2019s no current task, the function returns\nNone\n.If the function is called on the current task, the optional keyword-only depth argument can be used to skip the specified number of frames from top of the stack.\nReturns a\nFutureCallGraph\ndata class object:FutureCallGraph(future, call_stack, awaited_by)\nFrameCallGraphEntry(frame)\nWhere frame is a frame object of a regular Python function in the call stack.\nLow level utility functions\u00b6\nTo introspect an async call graph asyncio requires cooperation from\ncontrol flow structures, such as shield()\nor TaskGroup\n.\nAny time an intermediate Future\nobject with low-level APIs like\nFuture.add_done_callback()\nis\ninvolved, the following two functions should be used to inform asyncio\nabout how exactly such intermediate future objects are connected with\nthe tasks they wrap or control.\n- asyncio.future_add_to_awaited_by(future, waiter, /)\u00b6\nRecord that future is awaited on by waiter.\nBoth future and waiter must be instances of\nFuture\norTask\nor their subclasses, otherwise the call would have no effect.A call to\nfuture_add_to_awaited_by()\nmust be followed by an eventual call to thefuture_discard_from_awaited_by()\nfunction with the same arguments.", "code_snippets": [" ", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 879} +{"url": "https://docs.python.org/3/library/posix.html", "title": " \u2014 The most common POSIX system 
calls", "content": "posix\n\u2014 The most common POSIX system calls\u00b6\nThis module provides access to operating system functionality that is standardized by the C Standard and the POSIX standard (a thinly disguised Unix interface).\nAvailability: Unix.\nDo not import this module directly. Instead, import the module os\n,\nwhich provides a portable version of this interface. On Unix, the os\nmodule provides a superset of the posix\ninterface. On non-Unix operating\nsystems the posix\nmodule is not available, but a subset is always\navailable through the os\ninterface. Once os\nis imported, there is\nno performance penalty in using it instead of posix\n. In addition,\nos\nprovides some additional functionality, such as automatically calling\nputenv()\nwhen an entry in os.environ\nis changed.\nErrors are reported as exceptions; the usual exceptions are given for type\nerrors, while errors reported by the system calls raise OSError\n.\nLarge File Support\u00b6\nSeveral operating systems (including AIX and Solaris) provide support for files that are larger than 2 GiB from a C programming model where int and long are 32-bit values. This is typically accomplished by defining the relevant size and offset types as 64-bit values. Such files are sometimes referred to as large files.\nLarge file support is enabled in Python when the size of an off_t\nis\nlarger than a long and the long long is at least as large\nas an off_t\n.\nIt may be necessary to configure and compile Python with certain compiler flags\nto enable this mode. 
For example, with Solaris 2.6 and 2.7 you need to do\nsomething like:\nCFLAGS=\"`getconf LFS_CFLAGS`\" OPT=\"-g -O2 $CFLAGS\" \\\n./configure\nOn large-file-capable Linux systems, this might work:\nCFLAGS='-D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64' OPT=\"-g -O2 $CFLAGS\" \\\n./configure\nNotable Module Contents\u00b6\nIn addition to many functions described in the os\nmodule documentation,\nposix\ndefines the following data item:\n- posix.environ\u00b6\nA dictionary representing the string environment at the time the interpreter was started. Keys and values are bytes on Unix and str on Windows. For example,\nenviron[b'HOME']\n(environ['HOME']\non Windows) is the pathname of your home directory, equivalent togetenv(\"HOME\")\nin C.Modifying this dictionary does not affect the string environment passed on by\nexecv()\n,popen()\norsystem()\n; if you need to change the environment, passenviron\ntoexecve()\nor add variable assignments and export statements to the command string forsystem()\norpopen()\n.Changed in version 3.2: On Unix, keys and values are bytes.\nNote\nThe\nos\nmodule provides an alternate implementation ofenviron\nwhich updates the environment on modification. Note also that updatingos.environ\nwill render this dictionary obsolete. Use of theos\nmodule version of this is recommended over direct access to theposix\nmodule.", "code_snippets": [" ", " \\\n ", "\n", " ", " \\\n ", "\n"], "language": "Python", "source": "python.org", "token_count": 700} +{"url": "https://docs.python.org/3/c-api/sequence.html", "title": "Sequence Protocol", "content": "Sequence Protocol\u00b6\n-\nint PySequence_Check(PyObject *o)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nif the object provides the sequence protocol, and0\notherwise. Note that it returns1\nfor Python classes with a__getitem__()\nmethod, unless they aredict\nsubclasses, since in general it is impossible to determine what type of keys the class supports. 
This function always succeeds.\n-\nPy_ssize_t PySequence_Size(PyObject *o)\u00b6\n-\nPy_ssize_t PySequence_Length(PyObject *o)\u00b6\n- Part of the Stable ABI.\nReturns the number of objects in sequence o on success, and\n-1\non failure. This is equivalent to the Python expressionlen(o)\n.\n-\nPyObject *PySequence_Concat(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the concatenation of o1 and o2 on success, and\nNULL\non failure. This is the equivalent of the Python expressiono1 + o2\n.\n-\nPyObject *PySequence_Repeat(PyObject *o, Py_ssize_t count)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the result of repeating sequence object o count times, or\nNULL\non failure. This is the equivalent of the Python expressiono * count\n.\n-\nPyObject *PySequence_InPlaceConcat(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the concatenation of o1 and o2 on success, and\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python expressiono1 += o2\n.\n-\nPyObject *PySequence_InPlaceRepeat(PyObject *o, Py_ssize_t count)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the result of repeating sequence object o count times, or\nNULL\non failure. The operation is done in-place when o supports it. This is the equivalent of the Python expressiono *= count\n.\n-\nPyObject *PySequence_GetItem(PyObject *o, Py_ssize_t i)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the ith element of o, or\nNULL\non failure. This is the equivalent of the Python expressiono[i]\n.\n-\nPyObject *PySequence_GetSlice(PyObject *o, Py_ssize_t i1, Py_ssize_t i2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the slice of sequence object o between i1 and i2, or\nNULL\non failure. 
This is the equivalent of the Python expressiono[i1:i2]\n.\n-\nint PySequence_SetItem(PyObject *o, Py_ssize_t i, PyObject *v)\u00b6\n- Part of the Stable ABI.\nAssign object v to the ith element of o. Raise an exception and return\n-1\non failure; return0\non success. This is the equivalent of the Python statemento[i] = v\n. This function does not steal a reference to v.If v is\nNULL\n, the element is deleted, but this feature is deprecated in favour of usingPySequence_DelItem()\n.\n-\nint PySequence_DelItem(PyObject *o, Py_ssize_t i)\u00b6\n- Part of the Stable ABI.\nDelete the ith element of object o. Returns\n-1\non failure. This is the equivalent of the Python statementdel o[i]\n.\n-\nint PySequence_SetSlice(PyObject *o, Py_ssize_t i1, Py_ssize_t i2, PyObject *v)\u00b6\n- Part of the Stable ABI.\nAssign the sequence object v to the slice in sequence object o from i1 to i2. This is the equivalent of the Python statement\no[i1:i2] = v\n.\n-\nint PySequence_DelSlice(PyObject *o, Py_ssize_t i1, Py_ssize_t i2)\u00b6\n- Part of the Stable ABI.\nDelete the slice in sequence object o from i1 to i2. Returns\n-1\non failure. This is the equivalent of the Python statementdel o[i1:i2]\n.\n-\nPy_ssize_t PySequence_Count(PyObject *o, PyObject *value)\u00b6\n- Part of the Stable ABI.\nReturn the number of occurrences of value in o, that is, return the number of keys for which\no[key] == value\n. On failure, return-1\n. This is equivalent to the Python expressiono.count(value)\n.\n-\nint PySequence_Contains(PyObject *o, PyObject *value)\u00b6\n- Part of the Stable ABI.\nDetermine if o contains value. If an item in o is equal to value, return\n1\n, otherwise return0\n. On error, return-1\n. 
This is equivalent to the Python expressionvalue in o\n.\n-\nint PySequence_In(PyObject *o, PyObject *value)\u00b6\n- Part of the Stable ABI.\nAlias for\nPySequence_Contains()\n.Deprecated since version 3.14: The function is soft deprecated and should no longer be used to write new code.\n-\nPy_ssize_t PySequence_Index(PyObject *o, PyObject *value)\u00b6\n- Part of the Stable ABI.\nReturn the first index i for which\no[i] == value\n. On error, return-1\n. This is equivalent to the Python expressiono.index(value)\n.\n-\nPyObject *PySequence_List(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a list object with the same contents as the sequence or iterable o, or\nNULL\non failure. The returned list is guaranteed to be new. This is equivalent to the Python expressionlist(o)\n.\n-\nPyObject *PySequence_Tuple(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a tuple object with the same contents as the sequence or iterable o, or\nNULL\non failure. If o is a tuple, a new reference will be returned, otherwise a tuple will be constructed with the appropriate contents. This is equivalent to the Python expressiontuple(o)\n.\n-\nPyObject *PySequence_Fast(PyObject *o, const char *m)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the sequence or iterable o as an object usable by the other\nPySequence_Fast*\nfamily of functions. If the object is not a sequence or iterable, raisesTypeError\nwith m as the message text. ReturnsNULL\non failure.The\nPySequence_Fast*\nfunctions are thus named because they assume o is aPyTupleObject\nor aPyListObject\nand access the data fields of o directly.As a CPython implementation detail, if o is already a sequence or list, it will be returned.\n-\nPy_ssize_t PySequence_Fast_GET_SIZE(PyObject *o)\u00b6\nReturns the length of o, assuming that o was returned by\nPySequence_Fast()\nand that o is notNULL\n. 
The size can also be retrieved by callingPySequence_Size()\non o, butPySequence_Fast_GET_SIZE()\nis faster because it can assume o is a list or tuple.\n-\nPyObject *PySequence_Fast_GET_ITEM(PyObject *o, Py_ssize_t i)\u00b6\n- Return value: Borrowed reference.\nReturn the ith element of o, assuming that o was returned by\nPySequence_Fast()\n, o is notNULL\n, and that i is within bounds.\n-\nPyObject **PySequence_Fast_ITEMS(PyObject *o)\u00b6\nReturn the underlying array of PyObject pointers. Assumes that o was returned by\nPySequence_Fast()\nand o is notNULL\n.Note, if a list gets resized, the reallocation may relocate the items array. So, only use the underlying array pointer in contexts where the sequence cannot change.\n-\nPyObject *PySequence_ITEM(PyObject *o, Py_ssize_t i)\u00b6\n- Return value: New reference.\nReturn the ith element of o or\nNULL\non failure. Faster form ofPySequence_GetItem()\nbut without checking thatPySequence_Check()\non o is true and without adjustment for negative indices.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1657} +{"url": "https://docs.python.org/3/tutorial/inputoutput.html", "title": "Input and Output", "content": "7. Input and Output\u00b6\nThere are several ways to present the output of a program; data can be printed in a human-readable form, or written to a file for future use. This chapter will discuss some of the possibilities.\n7.1. Fancier Output Formatting\u00b6\nSo far we\u2019ve encountered two ways of writing values: expression statements and\nthe print()\nfunction. (A third way is using the write()\nmethod\nof file objects; the standard output file can be referenced as sys.stdout\n.\nSee the Library Reference for more information on this.)\nOften you\u2019ll want more control over the formatting of your output than simply printing space-separated values. 
There are several ways to format output.\nTo use formatted string literals, begin a string with\nf\nor F\nbefore the opening quotation mark or triple quotation mark. Inside this string, you can write a Python expression between {\nand }\ncharacters that can refer to variables or literal values. >>> year = 2016 >>> event = 'Referendum' >>> f'Results of the {year} {event}' 'Results of the 2016 Referendum'\nThe\nstr.format()\nmethod of strings requires more manual effort. You\u2019ll still use {\nand }\nto mark where a variable will be substituted and can provide detailed formatting directives, but you\u2019ll also need to provide the information to be formatted. In the following code block there are two examples of how to format variables: >>> yes_votes = 42_572_654 >>> total_votes = 85_705_149 >>> percentage = yes_votes / total_votes >>> '{:-9} YES votes {:2.2%}'.format(yes_votes, percentage) ' 42572654 YES votes 49.67%'\nNotice how the\nyes_votes\nare padded with spaces and a negative sign only for negative numbers. The example also prints percentage\nmultiplied by 100, with 2 decimal places and followed by a percent sign (see Format Specification Mini-Language for details). Finally, you can do all the string handling yourself by using string slicing and concatenation operations to create any layout you can imagine. The string type has some methods that perform useful operations for padding strings to a given column width.\nWhen you don\u2019t need fancy output but just want a quick display of some\nvariables for debugging purposes, you can convert any value to a string with\nthe repr()\nor str()\nfunctions.\nThe str()\nfunction is meant to return representations of values which are\nfairly human-readable, while repr()\nis meant to generate representations\nwhich can be read by the interpreter (or will force a SyntaxError\nif\nthere is no equivalent syntax). 
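That "read by the interpreter" property can be sketched with eval(): the repr() of a string parses back to an equal object, which is exactly what distinguishes it from str() for strings.

```python
s = 'hello, world\n'
r = repr(s)            # adds quotes and backslash escapes: "'hello, world\\n'"
assert eval(r) == s    # repr() output round-trips through the interpreter
print(str(s) == s)     # str() of a string is the string itself
```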
For objects which don\u2019t have a particular\nrepresentation for human consumption, str()\nwill return the same value as\nrepr()\n. Many values, such as numbers or structures like lists and\ndictionaries, have the same representation using either function. Strings, in\nparticular, have two distinct representations.\nSome examples:\n>>> s = 'Hello, world.'\n>>> str(s)\n'Hello, world.'\n>>> repr(s)\n\"'Hello, world.'\"\n>>> str(1/7)\n'0.14285714285714285'\n>>> x = 10 * 3.25\n>>> y = 200 * 200\n>>> s = 'The value of x is ' + repr(x) + ', and y is ' + repr(y) + '...'\n>>> print(s)\nThe value of x is 32.5, and y is 40000...\n>>> # The repr() of a string adds string quotes and backslashes:\n>>> hello = 'hello, world\\n'\n>>> hellos = repr(hello)\n>>> print(hellos)\n'hello, world\\n'\n>>> # The argument to repr() may be any Python object:\n>>> repr((x, y, ('spam', 'eggs')))\n\"(32.5, 40000, ('spam', 'eggs'))\"\nThe string\nmodule contains support for a simple templating approach\nbased upon regular expressions, via string.Template\n.\nThis offers yet another way to substitute values into strings,\nusing placeholders like $x\nand replacing them with values from a dictionary.\nThis syntax is easy to use, although it offers much less control for formatting.\n7.1.1. Formatted String Literals\u00b6\nFormatted string literals (also called f-strings for\nshort) let you include the value of Python expressions inside a string by\nprefixing the string with f\nor F\nand writing expressions as\n{expression}\n.\nAn optional format specifier can follow the expression. This allows greater control over how the value is formatted. The following example rounds pi to three places after the decimal:\n>>> import math\n>>> print(f'The value of pi is approximately {math.pi:.3f}.')\nThe value of pi is approximately 3.142.\nPassing an integer after the ':'\nwill cause that field to be a minimum\nnumber of characters wide. 
This is useful for making columns line up.\n>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 7678}\n>>> for name, phone in table.items():\n... print(f'{name:10} ==> {phone:10d}')\n...\nSjoerd ==> 4127\nJack ==> 4098\nDcab ==> 7678\nOther modifiers can be used to convert the value before it is formatted.\n'!a'\napplies ascii()\n, '!s'\napplies str()\n, and '!r'\napplies repr()\n:\n>>> animals = 'eels'\n>>> print(f'My hovercraft is full of {animals}.')\nMy hovercraft is full of eels.\n>>> print(f'My hovercraft is full of {animals!r}.')\nMy hovercraft is full of 'eels'.\nThe =\nspecifier can be used to expand an expression to the text of the\nexpression, an equal sign, then the representation of the evaluated expression:\n>>> bugs = 'roaches'\n>>> count = 13\n>>> area = 'living room'\n>>> print(f'Debugging {bugs=} {count=} {area=}')\nDebugging bugs='roaches' count=13 area='living room'\nSee self-documenting expressions for more information\non the =\nspecifier. For a reference on these format specifications, see\nthe reference guide for the Format Specification Mini-Language.\n7.1.2. The String format() Method\u00b6\nBasic usage of the str.format()\nmethod looks like this:\n>>> print('We are the {} who say \"{}!\"'.format('knights', 'Ni'))\nWe are the knights who say \"Ni!\"\nThe brackets and characters within them (called format fields) are replaced with\nthe objects passed into the str.format()\nmethod. A number in the\nbrackets can be used to refer to the position of the object passed into the\nstr.format()\nmethod.\n>>> print('{0} and {1}'.format('spam', 'eggs'))\nspam and eggs\n>>> print('{1} and {0}'.format('spam', 'eggs'))\neggs and spam\nIf keyword arguments are used in the str.format()\nmethod, their values\nare referred to by using the name of the argument.\n>>> print('This {food} is {adjective}.'.format(\n... 
food='spam', adjective='absolutely horrible'))\nThis spam is absolutely horrible.\nPositional and keyword arguments can be arbitrarily combined:\n>>> print('The story of {0}, {1}, and {other}.'.format('Bill', 'Manfred',\n... other='Georg'))\nThe story of Bill, Manfred, and Georg.\nIf you have a really long format string that you don\u2019t want to split up, it\nwould be nice if you could reference the variables to be formatted by name\ninstead of by position. This can be done by simply passing the dict and using\nsquare brackets '[]'\nto access the keys.\n>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 8637678}\n>>> print('Jack: {0[Jack]:d}; Sjoerd: {0[Sjoerd]:d}; '\n... 'Dcab: {0[Dcab]:d}'.format(table))\nJack: 4098; Sjoerd: 4127; Dcab: 8637678\nThis could also be done by passing the table\ndictionary as keyword arguments with the **\nnotation.\n>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 8637678}\n>>> print('Jack: {Jack:d}; Sjoerd: {Sjoerd:d}; Dcab: {Dcab:d}'.format(**table))\nJack: 4098; Sjoerd: 4127; Dcab: 8637678\nThis is particularly useful in combination with the built-in function\nvars()\n, which returns a dictionary containing all local variables:\n>>> table = {k: str(v) for k, v in vars().items()}\n>>> message = \" \".join([f'{k}: ' + '{' + k +'};' for k in table.keys()])\n>>> print(message.format(**table))\n__name__: __main__; __doc__: None; __package__: None; __loader__: ...\nAs an example, the following lines produce a tidily aligned set of columns giving integers and their squares and cubes:\n>>> for x in range(1, 11):\n... print('{0:2d} {1:3d} {2:4d}'.format(x, x*x, x*x*x))\n...\n1 1 1\n2 4 8\n3 9 27\n4 16 64\n5 25 125\n6 36 216\n7 49 343\n8 64 512\n9 81 729\n10 100 1000\nFor a complete overview of string formatting with str.format()\n, see\nFormat String Syntax.\n7.1.3. Manual String Formatting\u00b6\nHere\u2019s the same table of squares and cubes, formatted manually:\n>>> for x in range(1, 11):\n... 
print(repr(x).rjust(2), repr(x*x).rjust(3), end=' ')\n... # Note use of 'end' on previous line\n... print(repr(x*x*x).rjust(4))\n...\n1 1 1\n2 4 8\n3 9 27\n4 16 64\n5 25 125\n6 36 216\n7 49 343\n8 64 512\n9 81 729\n10 100 1000\n(Note that the one space between each column was added by the\nway print()\nworks: it always adds spaces between its arguments.)\nThe str.rjust()\nmethod of string objects right-justifies a string in a\nfield of a given width by padding it with spaces on the left. There are\nsimilar methods str.ljust()\nand str.center()\n. These methods do\nnot write anything, they just return a new string. If the input string is too\nlong, they don\u2019t truncate it, but return it unchanged; this will mess up your\ncolumn lay-out but that\u2019s usually better than the alternative, which would be\nlying about a value. (If you really want truncation you can always add a\nslice operation, as in x.ljust(n)[:n]\n.)\nThere is another method, str.zfill()\n, which pads a numeric string on the\nleft with zeros. It understands about plus and minus signs:\n>>> '12'.zfill(5)\n'00012'\n>>> '-3.14'.zfill(7)\n'-003.14'\n>>> '3.14159265359'.zfill(5)\n'3.14159265359'\n7.1.4. Old string formatting\u00b6\nThe % operator (modulo) can also be used for string formatting.\nGiven format % values\n(where format is a string),\n%\nconversion specifications in format are replaced with\nzero or more elements of values.\nThis operation is commonly known as string\ninterpolation. For example:\n>>> import math\n>>> print('The value of pi is approximately %5.3f.' % math.pi)\nThe value of pi is approximately 3.142.\nMore information can be found in the printf-style String Formatting section.\n7.2. Reading and Writing Files\u00b6\nopen()\nreturns a file object, and is most commonly used with\ntwo positional arguments and one keyword argument:\nopen(filename, mode, encoding=None)\n>>> f = open('workfile', 'w', encoding=\"utf-8\")\nThe first argument is a string containing the filename. 
The second argument is\nanother string containing a few characters describing the way in which the file\nwill be used. mode can be 'r'\nwhen the file will only be read, 'w'\nfor only writing (an existing file with the same name will be erased), and\n'a'\nopens the file for appending; any data written to the file is\nautomatically added to the end. 'r+'\nopens the file for both reading and\nwriting. The mode argument is optional; 'r'\nwill be assumed if it\u2019s\nomitted.\nNormally, files are opened in text mode, that means, you read and write\nstrings from and to the file, which are encoded in a specific encoding.\nIf encoding is not specified, the default is platform dependent\n(see open()\n).\nBecause UTF-8 is the modern de-facto standard, encoding=\"utf-8\"\nis\nrecommended unless you know that you need to use a different encoding.\nAppending a 'b'\nto the mode opens the file in binary mode.\nBinary mode data is read and written as bytes\nobjects.\nYou can not specify encoding when opening file in binary mode.\nIn text mode, the default when reading is to convert platform-specific line\nendings (\\n\non Unix, \\r\\n\non Windows) to just \\n\n. When writing in\ntext mode, the default is to convert occurrences of \\n\nback to\nplatform-specific line endings. This behind-the-scenes modification\nto file data is fine for text files, but will corrupt binary data like that in\nJPEG\nor EXE\nfiles. Be very careful to use binary mode when\nreading and writing such files.\nIt is good practice to use the with\nkeyword when dealing\nwith file objects. The advantage is that the file is properly closed\nafter its suite finishes, even if an exception is raised at some\npoint. Using with\nis also much shorter than writing\nequivalent try\n-finally\nblocks:\n>>> with open('workfile', encoding=\"utf-8\") as f:\n... 
read_data = f.read()\n>>> # We can check that the file has been automatically closed.\n>>> f.closed\nTrue\nIf you\u2019re not using the with\nkeyword, then you should call\nf.close()\nto close the file and immediately free up any system\nresources used by it.\nWarning\nCalling f.write()\nwithout using the with\nkeyword or calling\nf.close()\nmight result in the arguments\nof f.write()\nnot being completely written to the disk, even if the\nprogram exits successfully.\nAfter a file object is closed, either by a with\nstatement\nor by calling f.close()\n, attempts to use the file object will\nautomatically fail.\n>>> f.close()\n>>> f.read()\nTraceback (most recent call last):\nFile \"\", line 1, in \nValueError: I/O operation on closed file.\n7.2.1. Methods of File Objects\u00b6\nThe rest of the examples in this section will assume that a file object called\nf\nhas already been created.\nTo read a file\u2019s contents, call f.read(size)\n, which reads some quantity of\ndata and returns it as a string (in text mode) or bytes object (in binary mode).\nsize is an optional numeric argument. When size is omitted or negative, the\nentire contents of the file will be read and returned; it\u2019s your problem if the\nfile is twice as large as your machine\u2019s memory. Otherwise, at most size\ncharacters (in text mode) or size bytes (in binary mode) are read and returned.\nIf the end of the file has been reached, f.read()\nwill return an empty\nstring (''\n).\n>>> f.read()\n'This is the entire file.\\n'\n>>> f.read()\n''\nf.readline()\nreads a single line from the file; a newline character (\\n\n)\nis left at the end of the string, and is only omitted on the last line of the\nfile if the file doesn\u2019t end in a newline. 
This makes the return value\nunambiguous; if f.readline()\nreturns an empty string, the end of the file\nhas been reached, while a blank line is represented by '\\n'\n, a string\ncontaining only a single newline.\n>>> f.readline()\n'This is the first line of the file.\\n'\n>>> f.readline()\n'Second line of the file\\n'\n>>> f.readline()\n''\nFor reading lines from a file, you can loop over the file object. This is memory efficient, fast, and leads to simple code:\n>>> for line in f:\n... print(line, end='')\n...\nThis is the first line of the file.\nSecond line of the file\nIf you want to read all the lines of a file in a list you can also use\nlist(f)\nor f.readlines()\n.\nf.write(string)\nwrites the contents of string to the file, returning\nthe number of characters written.\n>>> f.write('This is a test\\n')\n15\nOther types of objects need to be converted \u2013 either to a string (in text mode) or a bytes object (in binary mode) \u2013 before writing them:\n>>> value = ('the answer', 42)\n>>> s = str(value) # convert the tuple to string\n>>> f.write(s)\n18\nf.tell()\nreturns an integer giving the file object\u2019s current position in the file\nrepresented as number of bytes from the beginning of the file when in binary mode and\nan opaque number when in text mode.\nTo change the file object\u2019s position, use f.seek(offset, whence)\n. The position is computed\nfrom adding offset to a reference point; the reference point is selected by\nthe whence argument. A whence value of 0 measures from the beginning\nof the file, 1 uses the current file position, and 2 uses the end of the file as\nthe reference point. 
whence can be omitted and defaults to 0, using the\nbeginning of the file as the reference point.\n>>> f = open('workfile', 'rb+')\n>>> f.write(b'0123456789abcdef')\n16\n>>> f.seek(5) # Go to the 6th byte in the file\n5\n>>> f.read(1)\nb'5'\n>>> f.seek(-3, 2) # Go to the 3rd byte before the end\n13\n>>> f.read(1)\nb'd'\nIn text files (those opened without a b\nin the mode string), only seeks\nrelative to the beginning of the file are allowed (the exception being seeking\nto the very file end with seek(0, 2)\n) and the only valid offset values are\nthose returned from the f.tell()\n, or zero. Any other offset value produces\nundefined behaviour.\nFile objects have some additional methods, such as isatty()\nand\ntruncate()\nwhich are less frequently used; consult the Library\nReference for a complete guide to file objects.\n7.2.2. Saving structured data with json\n\u00b6\nStrings can easily be written to and read from a file. Numbers take a bit more\neffort, since the read()\nmethod only returns strings, which will have to\nbe passed to a function like int()\n, which takes a string like '123'\nand returns its numeric value 123. When you want to save more complex data\ntypes like nested lists and dictionaries, parsing and serializing by hand\nbecomes complicated.\nRather than having users constantly writing and debugging code to save\ncomplicated data types to files, Python allows you to use the popular data\ninterchange format called JSON (JavaScript Object Notation). The standard module called json\ncan take Python\ndata hierarchies, and convert them to string representations; this process is\ncalled serializing. Reconstructing the data from the string representation\nis called deserializing. Between serializing and deserializing, the\nstring representing the object may have been stored in a file or data, or\nsent over a network connection to some distant machine.\nNote\nThe JSON format is commonly used by modern applications to allow for data exchange. 
Many programmers are already familiar with it, which makes it a good choice for interoperability.\nIf you have an object x\n, you can view its JSON string representation with a\nsimple line of code:\n>>> import json\n>>> x = [1, 'simple', 'list']\n>>> json.dumps(x)\n'[1, \"simple\", \"list\"]'\nAnother variant of the dumps()\nfunction, called dump()\n,\nsimply serializes the object to a text file. So if f\nis a\ntext file object opened for writing, we can do this:\njson.dump(x, f)\nTo decode the object again, if f\nis a binary file or\ntext file object which has been opened for reading:\nx = json.load(f)\nNote\nJSON files must be encoded in UTF-8. Use encoding=\"utf-8\"\nwhen opening\nJSON file as a text file for both of reading and writing.\nThis simple serialization technique can handle lists and dictionaries, but\nserializing arbitrary class instances in JSON requires a bit of extra effort.\nThe reference for the json\nmodule contains an explanation of this.\nSee also\npickle\n- the pickle module\nContrary to JSON, pickle is a protocol which allows the serialization of arbitrarily complex Python objects. As such, it is specific to Python and cannot be used to communicate with applications written in other languages. 
It is also insecure by default: deserializing pickle data coming from an untrusted source can execute arbitrary code, if the data was crafted by a skilled attacker.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 4570} +{"url": 
"https://docs.python.org/3/library/chunk.html", "title": " \u2014 Read IFF chunked data", "content": "chunk\n\u2014 Read IFF chunked data\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nThe last version of Python that provided the chunk\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 82} +{"url": "https://docs.python.org/3/library/nntplib.html", "title": " \u2014 NNTP protocol client", "content": "nntplib\n\u2014 NNTP protocol client\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nThe last version of Python that provided the nntplib\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 83} +{"url": "https://docs.python.org/3/whatsnew/2.7.html", "title": "What\u2019s New in Python 2.7", "content": "What\u2019s New in Python 2.7\u00b6\n- Author:\nA.M. Kuchling (amk at amk.ca)\nThis article explains the new features in Python 2.7. Python 2.7 was released on July 3, 2010.\nNumeric handling has been improved in many ways, for both\nfloating-point numbers and for the Decimal\nclass.\nThere are some useful additions to the standard library, such as a\ngreatly enhanced unittest\nmodule, the argparse\nmodule\nfor parsing command-line options, convenient OrderedDict\nand Counter\nclasses in the collections\nmodule,\nand many other improvements.\nPython 2.7 is planned to be the last of the 2.x releases, so we worked on making it a good release for the long term. 
To help with porting to Python 3, several new features from the Python 3.x series have been included in 2.7.\nThis article doesn\u2019t attempt to provide a complete specification of the new features, but instead provides a convenient overview. For full details, you should refer to the documentation for Python 2.7 at https://docs.python.org. If you want to understand the rationale for the design and implementation, refer to the PEP for a particular new feature or the issue on https://bugs.python.org in which a change was discussed. Whenever possible, \u201cWhat\u2019s New in Python\u201d links to the bug/patch item for each change.\nThe Future for Python 2.x\u00b6\nPython 2.7 is the last major release in the 2.x series, as the Python maintainers have shifted the focus of their new feature development efforts to the Python 3.x series. This means that while Python 2 continues to receive bug fixes, and to be updated to build correctly on new hardware and versions of supported operating systems, there will be no new full feature releases for the language or standard library.\nHowever, while there is a large common subset between Python 2.7 and Python 3, and many of the changes involved in migrating to that common subset, or directly to Python 3, can be safely automated, some other changes (notably those associated with Unicode handling) may require careful consideration, and preferably robust automated regression test suites, to migrate effectively.\nThis means that Python 2.7 will remain in place for a long time, providing a stable and supported base platform for production systems that have not yet been ported to Python 3. The full expected lifecycle of the Python 2.7 series is detailed in PEP 373.\nSome key consequences of the long-term significance of 2.7 are:\nAs noted above, the 2.7 release has a much longer period of maintenance when compared to earlier 2.x versions. 
Python 2.7 is currently expected to remain supported by the core development team (receiving security updates and other bug fixes) until at least 2020 (10 years after its initial release, compared to the more typical support period of 18\u201324 months).\nAs the Python 2.7 standard library ages, making effective use of the Python Package Index (either directly or via a redistributor) becomes more important for Python 2 users. In addition to a wide variety of third party packages for various tasks, the available packages include backports of new modules and features from the Python 3 standard library that are compatible with Python 2, as well as various tools and libraries that can make it easier to migrate to Python 3. The Python Packaging User Guide provides guidance on downloading and installing software from the Python Package Index.\nWhile the preferred approach to enhancing Python 2 is now the publication of new packages on the Python Package Index, this approach doesn\u2019t necessarily work in all cases, especially those related to network security. In exceptional cases that cannot be handled adequately by publishing new or updated packages on PyPI, the Python Enhancement Proposal process may be used to make the case for adding new features directly to the Python 2 standard library. Any such additions, and the maintenance releases where they were added, will be noted in the New Features Added to Python 2.7 Maintenance Releases section below.\nFor projects wishing to migrate from Python 2 to Python 3, or for library and framework developers wishing to support users on both Python 2 and Python 3, there are a variety of tools and guides available to help decide on a suitable approach and manage some of the technical details involved. 
The recommended starting point is the How to port Python 2 Code to Python 3 HOWTO guide.\nChanges to the Handling of Deprecation Warnings\u00b6\nFor Python 2.7, a policy decision was made to silence warnings only of\ninterest to developers by default. DeprecationWarning\nand its\ndescendants are now ignored unless otherwise requested, preventing\nusers from seeing warnings triggered by an application. This change\nwas also made in the branch that became Python 3.2. (Discussed\non stdlib-sig and carried out in bpo-7319.)\nIn previous releases, DeprecationWarning\nmessages were\nenabled by default, providing Python developers with a clear\nindication of where their code may break in a future major version\nof Python.\nHowever, there are increasingly many users of Python-based\napplications who are not directly involved in the development of\nthose applications. DeprecationWarning\nmessages are\nirrelevant to such users, making them worry about an application\nthat\u2019s actually working correctly and burdening application developers\nwith responding to these concerns.\nYou can re-enable display of DeprecationWarning\nmessages by\nrunning Python with the -Wdefault\n(short form:\n-Wd\n) switch, or by setting the PYTHONWARNINGS\nenvironment variable to \"default\"\n(or \"d\"\n) before running\nPython. Python code can also re-enable them\nby calling warnings.simplefilter('default')\n.\nThe unittest\nmodule also automatically reenables deprecation warnings\nwhen running tests.\nPython 3.1 Features\u00b6\nMuch as Python 2.6 incorporated features from Python 3.0, version 2.7 incorporates some of the new features in Python 3.1. 
The 2.x series continues to provide tools for migrating to the 3.x series.\nA partial list of 3.1 features that were backported to 2.7:\nThe syntax for set literals (\n{1,2,3}\nis a mutable set).Dictionary and set comprehensions (\n{i: i*2 for i in range(3)}\n).Multiple context managers in a single\nwith\nstatement.A new version of the\nio\nlibrary, rewritten in C for performance.The ordered-dictionary type described in PEP 372: Adding an Ordered Dictionary to collections.\nThe new\n\",\"\nformat specifier described in PEP 378: Format Specifier for Thousands Separator.The\nmemoryview\nobject.A small subset of the\nimportlib\nmodule, described below.The\nrepr()\nof a floatx\nis shorter in many cases: it\u2019s now based on the shortest decimal string that\u2019s guaranteed to round back tox\n. As in previous versions of Python, it\u2019s guaranteed thatfloat(repr(x))\nrecoversx\n.Float-to-string and string-to-float conversions are correctly rounded. The\nround()\nfunction is also now correctly rounded.The\nPyCapsule\ntype, used to provide a C API for extension modules.The\nPyLong_AsLongAndOverflow()\nC API function.\nOther new Python3-mode warnings include:\noperator.isCallable()\nandoperator.sequenceIncludes()\n, which are not supported in 3.x, now trigger warnings.The\n-3\nswitch now automatically enables the-Qwarn\nswitch that causes warnings about using classic division with integers and long integers.\nPEP 372: Adding an Ordered Dictionary to collections\u00b6\nRegular Python dictionaries iterate over key/value pairs in arbitrary order.\nOver the years, a number of authors have written alternative implementations\nthat remember the order that the keys were originally inserted. 
Based on\nthe experiences from those implementations, 2.7 introduces a new\nOrderedDict\nclass in the collections\nmodule.\nThe OrderedDict\nAPI provides the same interface as regular\ndictionaries but iterates over keys and values in a guaranteed order\ndepending on when a key was first inserted:\n>>> from collections import OrderedDict\n>>> d = OrderedDict([('first', 1),\n... ('second', 2),\n... ('third', 3)])\n>>> d.items()\n[('first', 1), ('second', 2), ('third', 3)]\nIf a new entry overwrites an existing entry, the original insertion position is left unchanged:\n>>> d['second'] = 4\n>>> d.items()\n[('first', 1), ('second', 4), ('third', 3)]\nDeleting an entry and reinserting it will move it to the end:\n>>> del d['second']\n>>> d['second'] = 5\n>>> d.items()\n[('first', 1), ('third', 3), ('second', 5)]\nThe popitem()\nmethod has an optional last\nargument that defaults to True\n. If last is true, the most recently\nadded key is returned and removed; if it\u2019s false, the\noldest key is selected:\n>>> od = OrderedDict([(x,0) for x in range(20)])\n>>> od.popitem()\n(19, 0)\n>>> od.popitem()\n(18, 0)\n>>> od.popitem(last=False)\n(0, 0)\n>>> od.popitem(last=False)\n(1, 0)\nComparing two ordered dictionaries checks both the keys and values, and requires that the insertion order was the same:\n>>> od1 = OrderedDict([('first', 1),\n... ('second', 2),\n... ('third', 3)])\n>>> od2 = OrderedDict([('third', 3),\n... ('first', 1),\n... ('second', 2)])\n>>> od1 == od2\nFalse\n>>> # Move 'third' key to the end\n>>> del od2['third']; od2['third'] = 3\n>>> od1 == od2\nTrue\nComparing an OrderedDict\nwith a regular dictionary\nignores the insertion order and just compares the keys and values.\nHow does the OrderedDict\nwork? 
It maintains a\ndoubly linked list of keys, appending new keys to the list as they\u2019re inserted.\nA secondary dictionary maps keys to their corresponding list node, so\ndeletion doesn\u2019t have to traverse the entire linked list and therefore\nremains O(1).\nThe standard library now supports use of ordered dictionaries in several modules.\nThe\nConfigParser\nmodule uses them by default, meaning that configuration files can now be read, modified, and then written back in their original order.The\n_asdict()\nmethod forcollections.namedtuple()\nnow returns an ordered dictionary with the values appearing in the same order as the underlying tuple indices.The\njson\nmodule\u2019sJSONDecoder\nclass constructor was extended with an object_pairs_hook parameter to allowOrderedDict\ninstances to be built by the decoder. Support was also added for third-party tools like PyYAML.\nSee also\n- PEP 372 - Adding an ordered dictionary to collections\nPEP written by Armin Ronacher and Raymond Hettinger; implemented by Raymond Hettinger.\nPEP 378: Format Specifier for Thousands Separator\u00b6\nTo make program output more readable, it can be useful to add separators to large numbers, rendering them as 18,446,744,073,709,551,616 instead of 18446744073709551616.\nThe fully general solution for doing this is the locale\nmodule,\nwhich can use different separators (\u201c,\u201d in North America, \u201c.\u201d in\nEurope) and different grouping sizes, but locale\nis complicated\nto use and unsuitable for multi-threaded applications where different\nthreads are producing output for different locales.\nTherefore, a simple comma-grouping mechanism has been added to the\nmini-language used by the str.format()\nmethod. 
When formatting a floating-point number, simply include a comma between the width and the precision:

>>> '{:20,.2f}'.format(18446744073709551616.0)
'18,446,744,073,709,551,616.00'

When formatting an integer, include the comma after the width:

>>> '{:20,d}'.format(18446744073709551616)
'18,446,744,073,709,551,616'

This mechanism is not adaptable at all; commas are always used as the separator and the grouping is always into three-digit groups. The comma-formatting mechanism isn't as general as the locale module, but it's easier to use.

See also

PEP 378 - Format Specifier for Thousands Separator
PEP written by Raymond Hettinger; implemented by Eric Smith.

PEP 389: The argparse Module for Parsing Command Lines¶

The argparse module for parsing command-line arguments was added as a more powerful replacement for the optparse module.

This means Python now supports three different modules for parsing command-line arguments: getopt, optparse, and argparse. The getopt module closely resembles the C library's getopt() function, so it remains useful if you're writing a Python prototype that will eventually be rewritten in C. optparse becomes redundant, but there are no plans to remove it because there are many scripts still using it, and there's no automated way to update these scripts.
(Making the argparse API consistent with optparse's interface was discussed but rejected as too messy and difficult.)

In short, if you're writing a new script and don't need to worry about compatibility with earlier versions of Python, use argparse instead of optparse.

Here's an example:

import argparse

parser = argparse.ArgumentParser(description='Command-line example.')

# Add optional switches
parser.add_argument('-v', action='store_true', dest='is_verbose',
                    help='produce verbose output')
parser.add_argument('-o', action='store', dest='output',
                    metavar='FILE',
                    help='direct output to FILE instead of stdout')
parser.add_argument('-C', action='store', type=int, dest='context',
                    metavar='NUM', default=0,
                    help='display NUM lines of added context')

# Allow any number of additional arguments.
parser.add_argument(nargs='*', action='store', dest='inputs',
                    help='input filenames (default is stdin)')

args = parser.parse_args()
print args.__dict__

Unless you override it, -h and --help switches are automatically added, and produce neatly formatted output:

-> ./python.exe argparse-example.py --help
usage: argparse-example.py [-h] [-v] [-o FILE] [-C NUM] [inputs [inputs ...]]

Command-line example.

positional arguments:
  inputs      input filenames (default is stdin)

optional arguments:
  -h, --help  show this help message and exit
  -v          produce verbose output
  -o FILE     direct output to FILE instead of stdout
  -C NUM      display NUM lines of added context

As with optparse, the command-line switches and arguments are returned as an object with attributes named by the dest parameters:

-> ./python.exe argparse-example.py -v
{'output': None,
 'is_verbose': True,
 'context': 0,
 'inputs': []}

-> ./python.exe argparse-example.py -v -o /tmp/output -C 4 file1 file2
{'output': '/tmp/output',
 'is_verbose': True,
 'context': 4,
 'inputs': ['file1', 'file2']}

argparse has much fancier validation than optparse; you can specify an exact
number of arguments as an integer, 0 or more arguments by passing '*', 1 or more by passing '+', or an optional argument with '?'. A top-level parser can contain sub-parsers to define subcommands that have different sets of switches, as in svn commit, svn checkout, etc. You can specify an argument's type as FileType, which will automatically open files for you and understands that '-' means standard input or output.

See also

argparse documentation
The documentation page of the argparse module.

Migrating optparse code to argparse
Part of the Python documentation, describing how to convert code that uses optparse.

PEP 389 - argparse - New Command Line Parsing Module
PEP written and implemented by Steven Bethard.

PEP 391: Dictionary-Based Configuration For Logging¶

The logging module is very flexible; applications can define a tree of logging subsystems, and each logger in this tree can filter out certain messages, format them differently, and direct messages to a varying number of handlers.

All this flexibility can require a lot of configuration. You can write Python statements to create objects and set their properties, but a complex set-up requires verbose but boring code. logging also supports a fileConfig() function that parses a file, but the file format doesn't support configuring filters, and it's messier to generate programmatically.

Python 2.7 adds a dictConfig() function that uses a dictionary to configure logging. There are many ways to produce a dictionary from different sources: construct one with code; parse a file containing JSON; or use a YAML parsing library if one is installed. For more information see Configuration functions.

The following example configures two loggers, the root logger and a logger named "network".
Messages sent to the root logger will be sent to the system log using the syslog protocol, and messages to the "network" logger will be written to a network.log file that will be rotated once the log reaches 1MB.

import logging
import logging.config

configdict = {
    'version': 1,    # Configuration schema in use; must be 1 for now
    'formatters': {
        'standard': {
            'format': ('%(asctime)s %(name)-15s '
                       '%(levelname)-8s %(message)s')}},

    'handlers': {'netlog': {'backupCount': 10,
                            'class': 'logging.handlers.RotatingFileHandler',
                            'filename': '/logs/network.log',
                            'formatter': 'standard',
                            'level': 'INFO',
                            'maxBytes': 1000000},
                 'syslog': {'class': 'logging.handlers.SysLogHandler',
                            'formatter': 'standard',
                            'level': 'ERROR'}},

    # Specify all the subordinate loggers
    'loggers': {
        'network': {
            'handlers': ['netlog']
        }
    },
    # Specify properties of the root logger
    'root': {
        'handlers': ['syslog']
    },
}

# Set up configuration
logging.config.dictConfig(configdict)

# As an example, log two error messages
logger = logging.getLogger('/')
logger.error('Database not found')

netlogger = logging.getLogger('network')
netlogger.error('Connection failed')

Three smaller enhancements to the logging module, all implemented by Vinay Sajip, are:

- The SysLogHandler class now supports syslogging over TCP. The constructor has a socktype parameter giving the type of socket to use, either socket.SOCK_DGRAM for UDP or socket.SOCK_STREAM for TCP. The default protocol remains UDP.
- Logger instances gained a getChild() method that retrieves a descendant logger using a relative path.
For example, once you retrieve a logger by doing log = getLogger('app'), calling log.getChild('network.listen') is equivalent to getLogger('app.network.listen').
- The LoggerAdapter class gained an isEnabledFor() method that takes a level and returns whether the underlying logger would process a message of that level of importance.

See also

PEP 391 - Dictionary-Based Configuration For Logging
PEP written and implemented by Vinay Sajip.

PEP 3106: Dictionary Views¶

The dictionary methods keys(), values(), and items() are different in Python 3.x. They return an object called a view instead of a fully materialized list.

It's not possible to change the return values of keys(), values(), and items() in Python 2.7 because too much code would break. Instead the 3.x versions were added under the new names viewkeys(), viewvalues(), and viewitems().

>>> d = dict((i*10, chr(65+i)) for i in range(26))
>>> d
{0: 'A', 130: 'N', 10: 'B', 140: 'O', 20: ..., 250: 'Z'}
>>> d.viewkeys()
dict_keys([0, 130, 10, 140, 20, 150, 30, ..., 250])

Views can be iterated over, but the key and item views also behave like sets. The & operator performs intersection, and | performs a union:

>>> d1 = dict((i*10, chr(65+i)) for i in range(26))
>>> d2 = dict((i**.5, i) for i in range(1000))
>>> d1.viewkeys() & d2.viewkeys()
set([0.0, 10.0, 20.0, 30.0])
>>> d1.viewkeys() | range(0, 30)
set([0, 1, 130, 3, 4, 5, 6, ..., 120, 250])

The view keeps track of the dictionary and its contents change as the dictionary is modified:

>>> vk = d.viewkeys()
>>> vk
dict_keys([0, 130, 10, ..., 250])
>>> d[260] = '&'
>>> vk
dict_keys([0, 130, 260, 10, ..., 250])

However, note that you can't add or remove keys while you're iterating over the view:

>>> for k in vk:
...
d[k*2] = k
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: dictionary changed size during iteration

You can use the view methods in Python 2.x code, and the 2to3 converter will change them to the standard keys(), values(), and items() methods.

PEP 3137: The memoryview Object¶

The memoryview object provides a view of another object's memory content that matches the bytes type's interface.

>>> import string
>>> m = memoryview(string.letters)
>>> m
<memory at 0x...>
>>> len(m)              # Returns length of underlying object
52
>>> m[0], m[25], m[26]  # Indexing returns one byte
('a', 'z', 'A')
>>> m2 = m[0:26]        # Slicing returns another memoryview
>>> m2
<memory at 0x...>

The content of the view can be converted to a string of bytes or a list of integers:

>>> m2.tobytes()
'abcdefghijklmnopqrstuvwxyz'
>>> m2.tolist()
[97, 98, 99, 100, 101, 102, 103, ... 121, 122]

memoryview objects allow modifying the underlying object if it's a mutable object.

>>> m2[0] = 75
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: cannot modify read-only memory

>>> b = bytearray(string.letters)   # Creating a mutable object
>>> b
bytearray(b'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ')
>>> mb = memoryview(b)
>>> mb[0] = '*'         # Assign to view, changing the bytearray.
>>> b[0:5]              # The bytearray has been changed.
bytearray(b'*bcde')

Other Language Changes¶

Some smaller changes made to the core Python language are:

- The syntax for set literals has been backported from Python 3.x.
Curly brackets are used to surround the contents of the resulting mutable set; set literals are distinguished from dictionaries by not containing colons and values. {} continues to represent an empty dictionary; use set() for an empty set.

>>> {1, 2, 3, 4, 5}
set([1, 2, 3, 4, 5])
>>> set()   # empty set
set([])
>>> {}      # empty dict
{}

Backported by Alexandre Vassalotti; bpo-2335.

- Dictionary and set comprehensions are another feature backported from 3.x, generalizing list/generator comprehensions to use the literal syntax for sets and dictionaries.

>>> {x: x*x for x in range(6)}
{0: 0, 1: 1, 2: 4, 3: 9, 4: 16, 5: 25}
>>> {('a'*x) for x in range(6)}
set(['', 'a', 'aa', 'aaa', 'aaaa', 'aaaaa'])

Backported by Alexandre Vassalotti; bpo-2333.

- The with statement can now use multiple context managers in one statement. Context managers are processed from left to right and each one is treated as beginning a new with statement. This means that:

with A() as a, B() as b:
    ... suite of statements ...

is equivalent to:

with A() as a:
    with B() as b:
        ... suite of statements ...

The contextlib.nested() function provides a very similar capability, so it's no longer necessary and has been deprecated.

(Proposed in https://codereview.appspot.com/53094; implemented by Georg Brandl.)

- Conversions between floating-point numbers and strings are now correctly rounded on most platforms. These conversions occur in many different places: str() on floats and complex numbers; the float and complex constructors; numeric formatting; serializing and deserializing floats and complex numbers using the marshal, pickle and json modules; parsing of float and imaginary literals in Python code; and Decimal-to-float conversion.

Related to this, the repr() of a floating-point number x now returns a result based on the shortest decimal string that's guaranteed to round back to x under correct rounding (with round-half-to-even rounding mode).
Previously it gave a string based on rounding x to 17 decimal digits.

The rounding library responsible for this improvement works on Windows and on Unix platforms using the gcc, icc, or suncc compilers. There may be a small number of platforms where correct operation of this code cannot be guaranteed, so the code is not used on such systems. You can find out which code is being used by checking sys.float_repr_style, which will be short if the new code is in use and legacy if it isn't.

Implemented by Eric Smith and Mark Dickinson, using David Gay's dtoa.c library; bpo-7117.

- Conversions from long integers and regular integers to floating point now round differently, returning the floating-point number closest to the number. This doesn't matter for small integers that can be converted exactly, but for large numbers that will unavoidably lose precision, Python 2.7 now approximates more closely. For example, Python 2.6 computed the following:

>>> n = 295147905179352891391
>>> float(n)
2.9514790517935283e+20
>>> n - long(float(n))
65535L

Python 2.7's floating-point result is larger, but much closer to the true value:

>>> n = 295147905179352891391
>>> float(n)
2.9514790517935289e+20
>>> n - long(float(n))
-1L

(Implemented by Mark Dickinson; bpo-3166.)

- Integer division is also more accurate in its rounding behaviours. (Also implemented by Mark Dickinson; bpo-1811.)

- Implicit coercion for complex numbers has been removed; the interpreter will no longer ever attempt to call a __coerce__() method on complex objects. (Removed by Meador Inge and Mark Dickinson; bpo-5211.)

- The str.format() method now supports automatic numbering of the replacement fields.
This makes using str.format() more closely resemble using %s formatting:

>>> '{}:{}:{}'.format(2009, 04, 'Sunday')
'2009:4:Sunday'
>>> '{}:{}:{day}'.format(2009, 4, day='Sunday')
'2009:4:Sunday'

The auto-numbering takes the fields from left to right, so the first {...} specifier will use the first argument to str.format(), the next specifier will use the next argument, and so on. You can't mix auto-numbering and explicit numbering -- either number all of your specifier fields or none of them -- but you can mix auto-numbering and named fields, as in the second example above. (Contributed by Eric Smith; bpo-5237.)

- Complex numbers now correctly support usage with format(), and default to being right-aligned. Specifying a precision or comma-separation applies to both the real and imaginary parts of the number, but a specified field width and alignment is applied to the whole of the resulting 1.5+3j output. (Contributed by Eric Smith; bpo-1588 and bpo-7988.)

- The 'F' format code now always formats its output using uppercase characters, so it will now produce 'INF' and 'NAN'. (Contributed by Eric Smith; bpo-3382.)

- A low-level change: the object.__format__() method now triggers a PendingDeprecationWarning if it's passed a format string, because the __format__() method for object converts the object to a string representation and formats that. Previously the method silently applied the format string to the string representation, but that could hide mistakes in Python code. If you're supplying formatting information such as an alignment or precision, presumably you're expecting the formatting to be applied in some object-specific way.
(Fixed by Eric Smith; bpo-7994.)

- The int() and long() types gained a bit_length() method that returns the number of bits necessary to represent its argument in binary:

>>> n = 37
>>> bin(n)
'0b100101'
>>> n.bit_length()
6
>>> n = 2**123-1
>>> n.bit_length()
123
>>> (n+1).bit_length()
124

(Contributed by Fredrik Johansson and Victor Stinner; bpo-3439.)

- The import statement will no longer try an absolute import if a relative import (e.g. from .os import sep) fails. This fixes a bug, but could possibly break certain import statements that were only working by accident. (Fixed by Meador Inge; bpo-7902.)

- It's now possible for a subclass of the built-in unicode type to override the __unicode__() method. (Implemented by Victor Stinner; bpo-1583863.)

- The bytearray type's translate() method now accepts None as its first argument. (Fixed by Georg Brandl; bpo-4759.)

- When using @classmethod and @staticmethod to wrap methods as class or static methods, the wrapper object now exposes the wrapped function as its __func__ attribute. (Contributed by Amaury Forgeot d'Arc, after a suggestion by George Sakkis; bpo-5982.)

- When a restricted set of attributes were set using __slots__, deleting an unset attribute would not raise AttributeError as you would expect. (Fixed by Benjamin Peterson; bpo-7604.)

- Two new encodings are now supported: "cp720", used primarily for Arabic text; and "cp858", a variant of CP 850 that adds the euro symbol.
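The cp858 codec can be exercised directly. A small hedged check, in Python 3 syntax (cp858 is CP 850 with the euro sign taking over code point 0xD5):

```python
text = u"price: 100\u20ac"          # U+20AC is the euro sign

raw = text.encode("cp858")          # cp858 can encode the euro sign...
assert raw.endswith(b"\xd5")        # ...mapped to the repurposed 0xD5 slot
assert raw.decode("cp858") == text  # and the text round-trips losslessly

try:
    text.encode("cp850")            # plain cp850 has no euro sign
except UnicodeEncodeError:
    print("cp850 cannot encode the euro sign")
```

The only difference from cp850 that matters here is that one legacy character was replaced by the euro sign, which is why the cp850 attempt fails.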
(CP720 contributed by Alexander Belchenko and Amaury Forgeot d'Arc in bpo-1616979; CP858 contributed by Tim Hatch in bpo-8016.)

- The file object will now set the filename attribute on the IOError exception when trying to open a directory on POSIX platforms (noted by Jan Kaliszewski; bpo-4764), and now explicitly checks for and forbids writing to read-only file objects instead of trusting the C library to catch and report the error (fixed by Stefan Krah; bpo-5677).

- The Python tokenizer now translates line endings itself, so the compile() built-in function now accepts code using any line-ending convention. Additionally, it no longer requires that the code end in a newline.

- Extra parentheses in function definitions are illegal in Python 3.x, meaning that you get a syntax error from def f((x)): pass. In Python3-warning mode, Python 2.7 will now warn about this odd usage. (Noted by James Lingard; bpo-7362.)

- It's now possible to create weak references to old-style class objects. New-style classes were always weak-referenceable. (Fixed by Antoine Pitrou; bpo-8268.)

- When a module object is garbage-collected, the module's dictionary is now only cleared if no one else is holding a reference to the dictionary (bpo-7140).

Interpreter Changes¶

A new environment variable, PYTHONWARNINGS, allows controlling warnings. It should be set to a string containing warning settings, equivalent to those used with the -W switch, separated by commas. (Contributed by Brian Curtin; bpo-7301.)

For example, the following setting will print warnings every time they occur, but turn warnings from the Cookie module into an error.
(The exact syntax for setting an environment variable varies across operating systems and shells.)

export PYTHONWARNINGS=all,error:::Cookie:0

Optimizations¶

Several performance enhancements have been added:

- A new opcode was added to perform the initial setup for with statements, looking up the __enter__() and __exit__() methods. (Contributed by Benjamin Peterson.)

- The garbage collector now performs better for one common usage pattern: when many objects are being allocated without deallocating any of them. This would previously take quadratic time for garbage collection, but now the number of full garbage collections is reduced as the number of objects on the heap grows. The new logic only performs a full garbage collection pass when the middle generation has been collected 10 times and when the number of survivor objects from the middle generation exceeds 10% of the number of objects in the oldest generation. (Suggested by Martin von Löwis and implemented by Antoine Pitrou; bpo-4074.)

- The garbage collector tries to avoid tracking simple containers which can't be part of a cycle. In Python 2.7, this is now true for tuples and dicts containing atomic types (such as ints, strings, etc.). Transitively, a dict containing tuples of atomic types won't be tracked either. This helps reduce the cost of each garbage collection by decreasing the number of objects to be considered and traversed by the collector. (Contributed by Antoine Pitrou; bpo-4688.)

- Long integers are now stored internally either in base 2**15 or in base 2**30, the base being determined at build time. Previously, they were always stored in base 2**15. Using base 2**30 gives significant performance improvements on 64-bit machines, but benchmark results on 32-bit machines have been mixed.
Therefore, the default is to use base 2**30 on 64-bit machines and base 2**15 on 32-bit machines; on Unix, there's a new configure option --enable-big-digits that can be used to override this default.

Apart from the performance improvements this change should be invisible to end users, with one exception: for testing and debugging purposes there's a new structseq sys.long_info that provides information about the internal format, giving the number of bits per digit and the size in bytes of the C type used to store each digit:

>>> import sys
>>> sys.long_info
sys.long_info(bits_per_digit=30, sizeof_digit=4)

(Contributed by Mark Dickinson; bpo-4258.)

- Another set of changes made long objects a few bytes smaller: 2 bytes smaller on 32-bit systems and 6 bytes on 64-bit. (Contributed by Mark Dickinson; bpo-5260.)

- The division algorithm for long integers has been made faster by tightening the inner loop, doing shifts instead of multiplications, and fixing an unnecessary extra iteration. Various benchmarks show speedups of between 50% and 150% for long integer divisions and modulo operations. (Contributed by Mark Dickinson; bpo-5512.) Bitwise operations are also significantly faster (initial patch by Gregory Smith; bpo-1087418).

- The implementation of % checks for the left-side operand being a Python string and special-cases it; this results in a 1-3% performance increase for applications that frequently use % with strings, such as templating libraries. (Implemented by Collin Winter; bpo-5176.)

- List comprehensions with an if condition are compiled into faster bytecode. (Patch by Antoine Pitrou, back-ported to 2.7 by Jeffrey Yasskin; bpo-4715.)

- Converting an integer or long integer to a decimal string was made faster by special-casing base 10 instead of using a generalized conversion function that supports arbitrary bases.
(Patch by Gawain Bolton; bpo-6713.)

- The split(), replace(), rindex(), rpartition(), and rsplit() methods of string-like types (strings, Unicode strings, and bytearray objects) now use a fast reverse-search algorithm instead of a character-by-character scan. This is sometimes faster by a factor of 10. (Added by Florent Xicluna; bpo-7462 and bpo-7622.)

- The pickle and cPickle modules now automatically intern the strings used for attribute names, reducing memory usage of the objects resulting from unpickling. (Contributed by Jake McGuire; bpo-5084.)

- The cPickle module now special-cases dictionaries, nearly halving the time required to pickle them. (Contributed by Collin Winter; bpo-5670.)

New and Improved Modules¶

As in every release, Python's standard library received a number of enhancements and bug fixes. Here's a partial list of the most notable changes, sorted alphabetically by module name. Consult the Misc/NEWS file in the source tree for a more complete list of changes, or look through the Subversion logs for all the details.

- The bdb module's base debugging class Bdb gained a feature for skipping modules. The constructor now takes an iterable containing glob-style patterns such as django.*; the debugger will not step into stack frames from a module that matches one of these patterns. (Contributed by Maru Newby after a suggestion by Senthil Kumaran; bpo-5142.)

- The binascii module now supports the buffer API, so it can be used with memoryview instances and other similar buffer objects. (Backported from 3.x by Florent Xicluna; bpo-7703.)

- Updated module: the bsddb module has been updated from 4.7.2devel9 to version 4.8.4 of the pybsddb package. The new version features better Python 3.x compatibility, various bug fixes, and adds several new BerkeleyDB flags and methods. (Updated by Jesús Cea Avión; bpo-8156.
The pybsddb changelog can be read at https://hg.jcea.es/pybsddb/file/tip/ChangeLog.)

- The bz2 module's BZ2File now supports the context management protocol, so you can write with bz2.BZ2File(...) as f:. (Contributed by Hagen Fürstenau; bpo-3860.)

- New class: the Counter class in the collections module is useful for tallying data. Counter instances behave mostly like dictionaries but return zero for missing keys instead of raising a KeyError:

>>> from collections import Counter
>>> c = Counter()
>>> for letter in 'here is a sample of english text':
...     c[letter] += 1
...
>>> c
Counter({' ': 6, 'e': 5, 's': 3, 'a': 2, 'i': 2, 'h': 2,
         'l': 2, 't': 2, 'g': 1, 'f': 1, 'm': 1, 'o': 1,
         'n': 1, 'p': 1, 'r': 1, 'x': 1})
>>> c['e']
5
>>> c['z']
0

There are three additional Counter methods. most_common() returns the N most common elements and their counts. elements() returns an iterator over the contained elements, repeating each element as many times as its count. subtract() takes an iterable and subtracts one for each element instead of adding; if the argument is a dictionary or another Counter, the counts are subtracted.

>>> c.most_common(5)
[(' ', 6), ('e', 5), ('s', 3), ('a', 2), ('i', 2)]
>>> c.elements() ->
   'a', 'a', ' ', ' ', ' ', ' ', ' ', ' ', 'e', 'e', 'e', 'e',
   'e', 'g', 'f', 'i', 'i', 'h', 'h', 'm', 'l', 'l', 'o', 'n',
   'p', 's', 's', 's', 'r', 't', 't', 'x'
>>> c['e']
5
>>> c.subtract('very heavy on the letter e')
>>> c['e']    # Count is now lower
-1

Contributed by Raymond Hettinger; bpo-1696199.

- New class: OrderedDict is described in the earlier section PEP 372: Adding an Ordered Dictionary to collections.

- New method: The deque data type now has a count() method that returns the number of contained elements equal to the supplied argument x, and a reverse() method that reverses the elements of the deque in-place. deque also exposes its maximum length as the read-only maxlen attribute.
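These deque additions can be illustrated briefly (shown in Python 3 syntax; the API is the same as the one added in 2.7):

```python
from collections import deque

d = deque("abracadabra", maxlen=15)
print(d.count("a"))     # occurrences of "a" -> 5
d.reverse()             # reverses the deque in place, returns None
print("".join(d))       # -> "arbadacarba"
print(d.maxlen)         # read-only maximum length -> 15
```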
(Both features added by Raymond Hettinger.)

- The namedtuple class now has an optional rename parameter. If rename is true, field names that are invalid because they've been repeated or aren't legal Python identifiers will be renamed to legal names that are derived from the field's position within the list of fields:

>>> from collections import namedtuple
>>> T = namedtuple('T', ['field1', '$illegal', 'for', 'field2'], rename=True)
>>> T._fields
('field1', '_1', '_2', 'field2')

(Added by Raymond Hettinger; bpo-1818.)

- Finally, the Mapping abstract base class now returns NotImplemented if a mapping is compared to another type that isn't a Mapping. (Fixed by Daniel Stutzbach; bpo-8729.)

- Constructors for the parsing classes in the ConfigParser module now take an allow_no_value parameter, defaulting to false; if true, options without values will be allowed. For example:

>>> import ConfigParser, StringIO
>>> sample_config = """
... [mysqld]
...   user = mysql
...   pid-file = /var/run/mysqld/mysqld.pid
...   skip-bdb
... """
>>> config = ConfigParser.RawConfigParser(allow_no_value=True)
>>> config.readfp(StringIO.StringIO(sample_config))
>>> config.get('mysqld', 'user')
'mysql'
>>> print config.get('mysqld', 'skip-bdb')
None
>>> print config.get('mysqld', 'unknown')
Traceback (most recent call last):
  ...
NoOptionError: No option 'unknown' in section: 'mysqld'

(Contributed by Mats Kindahl; bpo-7005.)

- Deprecated function: contextlib.nested(), which allows handling more than one context manager with a single with statement, has been deprecated, because the with statement now supports multiple context managers.

- The cookielib module now ignores cookies that have an invalid version field, one that doesn't contain an integer value. (Fixed by John J. Lee; bpo-3924.)

- The copy module's deepcopy() function will now correctly copy bound instance methods.
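A brief hedged check of the bound-method fix, in Python 3 syntax (the Greeter class is a made-up example; the same deepcopy() call used to fail):

```python
import copy

class Greeter(object):
    def __init__(self, name):
        self.name = name

    def greet(self):
        return "hello, " + self.name

g = Greeter("world")
bound = g.greet                  # a bound instance method
clone = copy.deepcopy(bound)     # previously raised an error

print(clone())                   # -> "hello, world"
print(clone.__self__ is g)       # -> False: the receiver was copied too
```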
(Implemented by Robert Collins; bpo-1515.)

- The ctypes module now always converts None to a C NULL pointer for arguments declared as pointers. (Changed by Thomas Heller; bpo-4606.) The underlying libffi library has been updated to version 3.0.9, containing various fixes for different platforms. (Updated by Matthias Klose; bpo-8142.)

- New method: the datetime module's timedelta class gained a total_seconds() method that returns the number of seconds in the duration. (Contributed by Brian Quinlan; bpo-5788.)

- New method: the Decimal class gained a from_float() class method that performs an exact conversion of a floating-point number to a Decimal. This exact conversion strives for the closest decimal approximation to the floating-point representation's value; the resulting decimal value will therefore still include the inaccuracy, if any. For example, Decimal.from_float(0.1) returns Decimal('0.1000000000000000055511151231257827021181583404541015625'). (Implemented by Raymond Hettinger; bpo-4796.)

- Comparing instances of Decimal with floating-point numbers now produces sensible results based on the numeric values of the operands. Previously such comparisons would fall back to Python's default rules for comparing objects, which produced arbitrary results based on their type. Note that you still cannot combine Decimal and floating point in other operations such as addition, since you should be explicitly choosing how to convert between float and Decimal. (Fixed by Mark Dickinson; bpo-2531.)

- The constructor for Decimal now accepts floating-point numbers (added by Raymond Hettinger; bpo-8257) and non-European Unicode characters such as Arabic-Indic digits (contributed by Mark Dickinson; bpo-6595).

- Most of the methods of the Context class now accept integers as well as Decimal instances; the only exceptions are the canonical() and is_canonical() methods.
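A short hedged illustration of Context methods accepting plain integers (Python 3 syntax; the behaviour matches the change described above):

```python
from decimal import Context

ctx = Context(prec=6)   # six significant digits

# Plain ints are accepted where Decimal instances used to be required.
print(ctx.sqrt(2))        # -> Decimal('1.41421')
print(ctx.divide(1, 3))   # -> Decimal('0.333333')
print(ctx.power(2, 64))   # 2**64 rounded to six significant digits
```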
(Patch by Juan José Conti; bpo-7633.)

- When using Decimal instances with a string's format() method, the default alignment was previously left-alignment. This has been changed to right-alignment, which is more sensible for numeric types. (Changed by Mark Dickinson; bpo-6857.)

- Comparisons involving a signaling NaN value (or sNAN) now signal InvalidOperation instead of silently returning a true or false value depending on the comparison operator. Quiet NaN values (or NaN) are now hashable. (Fixed by Mark Dickinson; bpo-7279.)

- The difflib module now produces output that is more compatible with modern diff/patch tools through one small change, using a tab character instead of spaces as a separator in the header giving the filename. (Fixed by Anatoly Techtonik; bpo-7585.)

- The Distutils sdist command now always regenerates the MANIFEST file, since even if the MANIFEST.in or setup.py files haven't been modified, the user might have created some new files that should be included. (Fixed by Tarek Ziadé; bpo-8688.)

- The doctest module's IGNORE_EXCEPTION_DETAIL flag will now ignore the name of the module containing the exception being tested. (Patch by Lennart Regebro; bpo-7490.)

- The email module's Message class will now accept a Unicode-valued payload, automatically converting the payload to the encoding specified by output_charset. (Added by R. David Murray; bpo-1368247.)

- The Fraction class now accepts a single float or Decimal instance, or two rational numbers, as arguments to its constructor. (Implemented by Mark Dickinson; rationals added in bpo-5812, and float/decimal in bpo-8294.)

- Ordering comparisons (<, <=, >, >=) between fractions and complex numbers now raise a TypeError. This fixes an oversight, making the Fraction match the other numeric types.

- New class: FTP_TLS in the ftplib module provides secure FTP connections using TLS encapsulation of authentication as well as subsequent control and data transfers.
(Contributed by Giampaolo Rodola; bpo-2054.)

The storbinary() method for binary uploads can now restart uploads thanks to an added rest parameter. (Patch by Pablo Mouzo; bpo-6845.)

New class decorator: total_ordering() in the functools module takes a class that defines an __eq__() method and one of __lt__(), __le__(), __gt__(), or __ge__(), and generates the missing comparison methods. Since the __cmp__() method is being deprecated in Python 3.x, this decorator makes it easier to define ordered classes. (Added by Raymond Hettinger; bpo-5479.)

New function: cmp_to_key() takes an old-style comparison function that expects two arguments and returns a new callable that can be used as the key parameter to functions such as sorted(), min(), and max(). The primary intended use is to help with making code compatible with Python 3.x. (Added by Raymond Hettinger.)

New function: the gc module's is_tracked() returns true if a given instance is tracked by the garbage collector, false otherwise. (Contributed by Antoine Pitrou; bpo-4688.)

The gzip module's GzipFile now supports the context management protocol, so you can write with gzip.GzipFile(...) as f: (contributed by Hagen Fürstenau; bpo-3860), and it now implements the io.BufferedIOBase ABC, so you can wrap it with io.BufferedReader for faster processing (contributed by Nir Aides; bpo-7471). It's also now possible to override the modification time recorded in a gzipped file by providing an optional timestamp to the constructor. (Contributed by Jacques Frechet; bpo-4272.)

Files in gzip format can be padded with trailing zero bytes; the gzip module will now consume these trailing bytes. (Fixed by Tadek Pietraszek and Brian Curtin; bpo-2846.)

New attribute: the hashlib module now has an algorithms attribute containing a tuple naming the supported algorithms. In Python 2.7, hashlib.algorithms contains ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512').
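The two functools additions described above, total_ordering() and cmp_to_key(), can be sketched together (the Version class and the length comparison are illustrative inventions, not from the original document):

```python
from functools import total_ordering, cmp_to_key

# total_ordering generates the missing rich comparisons from
# __eq__ plus the one ordering method we supply.
@total_ordering
class Version:
    def __init__(self, num):
        self.num = num
    def __eq__(self, other):
        return self.num == other.num
    def __lt__(self, other):
        return self.num < other.num

assert Version(1) <= Version(2)    # __le__ was generated for us
assert Version(3) >= Version(2)    # so was __ge__

# cmp_to_key wraps an old-style two-argument comparison function
# into a key callable usable with sorted(), min(), and max().
by_length = cmp_to_key(lambda a, b: len(a) - len(b))
assert sorted(['ccc', 'a', 'bb'], key=by_length) == ['a', 'bb', 'ccc']
```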
(Contributed by Carl Chenet; bpo-7418.)

The default HTTPResponse class used by the httplib module now supports buffering, resulting in much faster reading of HTTP responses. (Contributed by Kristján Valur Jónsson; bpo-4879.)

The HTTPConnection and HTTPSConnection classes now support a source_address parameter, a (host, port) 2-tuple giving the source address that will be used for the connection. (Contributed by Eldon Ziegler; bpo-3972.)

The ihooks module now supports relative imports. Note that ihooks is an older module for customizing imports, superseded by the imputil module added in Python 2.0. (Relative import support added by Neil Schemenauer.)

The imaplib module now supports IPv6 addresses. (Contributed by Derek Morr; bpo-1655.)

New function: the inspect module's getcallargs() takes a callable and its positional and keyword arguments, and figures out which of the callable's parameters will receive each argument, returning a dictionary mapping argument names to their values. For example:

>>> from inspect import getcallargs
>>> def f(a, b=1, *pos, **named):
...     pass
...
>>> getcallargs(f, 1, 2, 3)
{'a': 1, 'b': 2, 'pos': (3,), 'named': {}}
>>> getcallargs(f, a=2, x=4)
{'a': 2, 'b': 1, 'pos': (), 'named': {'x': 4}}
>>> getcallargs(f)
Traceback (most recent call last):
...
TypeError: f() takes at least 1 argument (0 given)

(Contributed by George Sakkis; bpo-3135.)

Updated module: The io library has been upgraded to the version shipped with Python 3.1. For 3.1, the I/O library was entirely rewritten in C and is 2 to 20 times faster depending on the task being performed. The original Python version was renamed to the _pyio module.

One minor resulting change: the io.TextIOBase class now has an errors attribute giving the error setting used for encoding and decoding errors (one of 'strict', 'replace', 'ignore').

The io.FileIO class now raises an OSError when passed an invalid file descriptor.
(Implemented by Benjamin Peterson; bpo-4991.) The truncate() method now preserves the file position; previously it would change the file position to the end of the new file. (Fixed by Pascal Chambon; bpo-6939.)

New function: itertools.compress(data, selectors) takes two iterators. Elements of data are returned if the corresponding value in selectors is true:

itertools.compress('ABCDEF', [1,0,1,0,1,1]) =>
  A, C, E, F

New function: itertools.combinations_with_replacement(iter, r) returns all the possible r-length combinations of elements from the iterable iter. Unlike combinations(), individual elements can be repeated in the generated combinations:

itertools.combinations_with_replacement('abc', 2) =>
  ('a', 'a'), ('a', 'b'), ('a', 'c'),
  ('b', 'b'), ('b', 'c'), ('c', 'c')

Note that elements are treated as unique depending on their position in the input, not their actual values.

The itertools.count() function now has a step argument that allows incrementing by values other than 1. count() also now allows keyword arguments, and using non-integer values such as floats or Decimal instances. (Implemented by Raymond Hettinger; bpo-5032.)

itertools.combinations() and itertools.product() previously raised ValueError for values of r larger than the input iterable. This was deemed a specification error, so they now return an empty iterator. (Fixed by Raymond Hettinger; bpo-4816.)

Updated module: The json module was upgraded to version 2.0.9 of the simplejson package, which includes a C extension that makes encoding and decoding faster. (Contributed by Bob Ippolito; bpo-4136.)

To support the new collections.OrderedDict type, json.load() now has an optional object_pairs_hook parameter that will be called with any object literal that decodes to a list of pairs. (Contributed by Raymond Hettinger; bpo-5381.)

The mailbox module's Maildir class now records the timestamp on the directories it reads, and only re-reads them if the modification time has subsequently changed.
This improves performance by avoiding unneeded directory scans. (Fixed by A.M. Kuchling and Antoine Pitrou; bpo-1607951, bpo-6896.)

New functions: the math module gained erf() and erfc() for the error function and the complementary error function, expm1() which computes e**x - 1 with more precision than using exp() and subtracting 1, gamma() for the Gamma function, and lgamma() for the natural log of the Gamma function. (Contributed by Mark Dickinson and nirinA raseliarison; bpo-3366.)

The multiprocessing module's Manager* classes can now be passed a callable that will be called whenever a subprocess is started, along with a set of arguments that will be passed to the callable. (Contributed by lekma; bpo-5585.)

The Pool class, which controls a pool of worker processes, now has an optional maxtasksperchild parameter. Worker processes will perform the specified number of tasks and then exit, causing the Pool to start a new worker. This is useful if tasks may leak memory or other resources, or if some tasks will cause the worker to become very large. (Contributed by Charles Cazabon; bpo-6963.)

The nntplib module now supports IPv6 addresses. (Contributed by Derek Morr; bpo-1664.)

New functions: the os module wraps the following POSIX system calls: getresgid() and getresuid(), which return the real, effective, and saved GIDs and UIDs; setresgid() and setresuid(), which set real, effective, and saved GIDs and UIDs to new values; and initgroups(), which initializes the group access list for the current process. (GID/UID functions contributed by Travis H.; bpo-6508. Support for initgroups added by Jean-Paul Calderone; bpo-7333.)

The os.fork() function now re-initializes the import lock in the child process; this fixes problems on Solaris when fork() is called from a thread.
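The itertools additions from the preceding entries and the new math functions above can be exercised together in a minimal sketch (modern Python 3 syntax; the behaviour is as introduced in 2.7):

```python
import math
from itertools import compress, combinations_with_replacement, count, islice

# compress() filters data by a parallel iterable of selectors.
assert list(compress('ABCDEF', [1, 0, 1, 0, 1, 1])) == ['A', 'C', 'E', 'F']

# combinations_with_replacement() allows elements to repeat.
assert list(combinations_with_replacement('abc', 2)) == [
    ('a', 'a'), ('a', 'b'), ('a', 'c'),
    ('b', 'b'), ('b', 'c'), ('c', 'c')]

# count() accepts a step, including non-integer values.
assert list(islice(count(10, 2.5), 3)) == [10, 12.5, 15.0]

# The new math functions.
assert math.gamma(5) == 24.0       # gamma(n) == (n-1)! for integer n
assert math.erf(0.0) == 0.0
assert math.erfc(0.0) == 1.0
assert math.expm1(1e-12) != 0.0    # exp(1e-12) - 1 would lose precision
```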
(Fixed by Zsolt Cserna; bpo-7242.)

In the os.path module, the normpath() and abspath() functions now preserve Unicode; if their input path is a Unicode string, the return value is also a Unicode string. (normpath() fixed by Matt Giuca in bpo-5827; abspath() fixed by Ezio Melotti in bpo-3426.)

The pydoc module now has help for the various symbols that Python uses. You can now do help('<<') or help('@'), for example. (Contributed by David Laban; bpo-4739.)

The re module's split(), sub(), and subn() now accept an optional flags argument, for consistency with the other functions in the module. (Added by Gregory P. Smith.)

New function: run_path() in the runpy module will execute the code at a provided path argument. path can be the path of a Python source file (example.py), a compiled bytecode file (example.pyc), a directory (./package/), or a zip archive (example.zip). If a directory or zip path is provided, it will be added to the front of sys.path and the module __main__ will be imported. It's expected that the directory or zip contains a __main__.py; if it doesn't, some other __main__.py might be imported from a location later in sys.path. This makes more of the machinery of runpy available to scripts that want to mimic the way Python's command line processes an explicit path name. (Added by Nick Coghlan; bpo-6816.)

New function: in the shutil module, make_archive() takes a filename, archive type (zip or tar-format), and a directory path, and creates an archive containing the directory's contents. (Added by Tarek Ziadé.)

shutil's copyfile() and copytree() functions now raise a SpecialFileError exception when asked to copy a named pipe. Previously the code would treat named pipes like a regular file by opening them for reading, and this would block indefinitely.
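The flags argument on the module-level re functions described above removes the need to pre-compile a pattern or embed inline flags just to get case-insensitive behaviour; a small illustration (the patterns are made up for this sketch):

```python
import re

# Previously, flags could only be supplied via re.compile() or an
# inline (?i) marker; now the module-level functions take them directly.
assert re.split('[a-c]', '0aA1bB2', flags=re.IGNORECASE) == ['0', '', '1', '', '2']
assert re.sub('a', '-', 'AbaB', flags=re.IGNORECASE) == '-b-B'
assert re.subn('x', '.', 'XxX', flags=re.IGNORECASE) == ('...', 3)
```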
(Fixed by Antoine Pitrou; bpo-3002.)

The signal module no longer re-installs the signal handler unless this is truly necessary, which fixes a bug that could make it impossible to catch the EINTR signal robustly. (Fixed by Charles-Francois Natali; bpo-8354.)

New functions: in the site module, three new functions return various site- and user-specific paths. getsitepackages() returns a list containing all global site-packages directories, getusersitepackages() returns the path of the user's site-packages directory, and getuserbase() returns the value of the USER_BASE environment variable, giving the path to a directory that can be used to store data. (Contributed by Tarek Ziadé; bpo-6693.)

The site module now reports exceptions occurring when the sitecustomize module is imported, and will no longer catch and swallow the KeyboardInterrupt exception. (Fixed by Victor Stinner; bpo-3137.)

The socket module's create_connection() function gained a source_address parameter, a (host, port) 2-tuple giving the source address that will be used for the connection. (Contributed by Eldon Ziegler; bpo-3972.)

The recv_into() and recvfrom_into() methods will now write into objects that support the buffer API, most usefully the bytearray and memoryview objects. (Implemented by Antoine Pitrou; bpo-8104.)

The SocketServer module's TCPServer class now supports socket timeouts and disabling the Nagle algorithm. The disable_nagle_algorithm class attribute defaults to False; if overridden to be true, new request connections will have the TCP_NODELAY option set to prevent buffering many small sends into a single TCP packet. The timeout class attribute can hold a timeout in seconds that will be applied to the request socket; if no request is received within that time, handle_timeout() will be called and handle_request() will return.
(Contributed by Kristján Valur Jónsson; bpo-6192 and bpo-6267.)

Updated module: the sqlite3 module has been updated to version 2.6.0 of the pysqlite package. Version 2.6.0 includes a number of bugfixes, and adds the ability to load SQLite extensions from shared libraries. Call the enable_load_extension(True) method to enable extensions, and then call load_extension() to load a particular shared library. (Updated by Gerhard Häring.)

The ssl module's SSLSocket objects now support the buffer API, which fixed a test suite failure (fix by Antoine Pitrou; bpo-7133), and automatically set OpenSSL's SSL_MODE_AUTO_RETRY, which will prevent an error code being returned from recv() operations that trigger an SSL renegotiation (fix by Antoine Pitrou; bpo-8222).

The wrap_socket() constructor function now takes a ciphers argument that's a string listing the encryption algorithms to be allowed; the format of the string is described in the OpenSSL documentation. (Added by Antoine Pitrou; bpo-8322.)

Another change makes the extension load all of OpenSSL's ciphers and digest algorithms so that they're all available. Previously, some SSL certificates couldn't be verified, reporting an "unknown algorithm" error. (Reported by Beda Kosata, and fixed by Antoine Pitrou; bpo-8484.)

The version of OpenSSL being used is now available as the module attributes ssl.OPENSSL_VERSION (a string), ssl.OPENSSL_VERSION_INFO (a 5-tuple), and ssl.OPENSSL_VERSION_NUMBER (an integer). (Added by Antoine Pitrou; bpo-8321.)

The struct module will no longer silently ignore overflow errors when a value is too large for a particular integer format code (one of bBhHiIlLqQ); it now always raises a struct.error exception. (Changed by Mark Dickinson; bpo-1523.) The pack() function will also attempt to use __index__() to convert and pack non-integers before trying the __int__() method or reporting an error.
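The struct behaviour just described — overflow always raising struct.error, and pack() consulting __index__() — can be sketched briefly (the MyInt class is an invented example; modern Python 3 syntax):

```python
import struct

# Overflow is now always an error, never silently truncated.
assert struct.pack('B', 255) == b'\xff'
overflowed = False
try:
    struct.pack('B', 256)            # too large for an unsigned byte
except struct.error:
    overflowed = True
assert overflowed

# pack() consults __index__() for integer-like objects.
class MyInt:
    def __init__(self, n):
        self.n = n
    def __index__(self):
        return self.n

assert struct.pack('B', MyInt(7)) == b'\x07'
```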
(Changed by Mark Dickinson; bpo-8300.)

New function: the subprocess module's check_output() runs a command with a specified set of arguments and returns the command's output as a string when the command runs without error, or raises a CalledProcessError exception otherwise.

>>> subprocess.check_output(['df', '-h', '.'])
'Filesystem  Size  Used  Avail Capacity  Mounted on\n/dev/disk0s2  52G  49G  3.0G  94%  /\n'
>>> subprocess.check_output(['df', '-h', '/bogus'])
...
subprocess.CalledProcessError: Command '['df', '-h', '/bogus']' returned non-zero exit status 1

(Contributed by Gregory P. Smith.)

The subprocess module will now retry its internal system calls on receiving an EINTR signal. (Reported by several people; final patch by Gregory P. Smith in bpo-1068268.)

New function: is_declared_global() in the symtable module returns true for variables that are explicitly declared to be global, false for ones that are implicitly global. (Contributed by Jeremy Hylton.)

The syslog module will now use the value of sys.argv[0] as the identifier instead of the previous default value of 'python'. (Changed by Sean Reifschneider; bpo-8451.)

The sys.version_info value is now a named tuple, with attributes named major, minor, micro, releaselevel, and serial. (Contributed by Ross Light; bpo-4285.)

sys.getwindowsversion() also returns a named tuple, with attributes named major, minor, build, platform, service_pack, service_pack_major, service_pack_minor, suite_mask, and product_type. (Contributed by Brian Curtin; bpo-7766.)

The tarfile module's default error handling has changed, to no longer suppress fatal errors. The default error level was previously 0, which meant that errors would only result in a message being written to the debug log; but because the debug log is not activated by default, these errors went unnoticed. The default error level is now 1, which raises an exception if there's an error.
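The check_output() function and the named sys.version_info tuple above can be demonstrated with a portable snippet (using sys.executable rather than df so it runs anywhere; note that on Python 3 the returned output is bytes):

```python
import subprocess
import sys

# Named-tuple access and positional access are interchangeable.
assert sys.version_info.major == sys.version_info[0]
assert sys.version_info.releaselevel in ('alpha', 'beta', 'candidate', 'final')

# check_output() returns the command's output, raising
# CalledProcessError on a non-zero exit status.
out = subprocess.check_output([sys.executable, '-c', 'print("hi")'])
assert out.strip() == b'hi'

failed = False
try:
    subprocess.check_output([sys.executable, '-c', 'raise SystemExit(1)'])
except subprocess.CalledProcessError:
    failed = True
assert failed
```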
(Changed by Lars Gustäbel; bpo-7357.)

tarfile now supports filtering the TarInfo objects being added to a tar file. When you call add(), you may supply an optional filter argument that's a callable. The filter callable will be passed the TarInfo for every file being added, and can modify and return it. If the callable returns None, the file will be excluded from the resulting archive. This is more powerful than the existing exclude argument, which has therefore been deprecated. (Added by Lars Gustäbel; bpo-6856.) The TarFile class also now supports the context management protocol. (Added by Lars Gustäbel; bpo-7232.)

The wait() method of the threading.Event class now returns the internal flag on exit. This means the method will usually return true because wait() is supposed to block until the internal flag becomes true. The return value will only be false if a timeout was provided and the operation timed out. (Contributed by Tim Lesher; bpo-1674032.)

The Unicode database provided by the unicodedata module is now used internally to determine which characters are numeric, whitespace, or represent line breaks. The database also includes information from the Unihan.txt data file (patch by Anders Chrigström and Amaury Forgeot d'Arc; bpo-1571184) and has been updated to version 5.2.0 (updated by Florent Xicluna; bpo-8024).

The urlparse module's urlsplit() now handles unknown URL schemes in a fashion compliant with RFC 3986: if the URL is of the form "<something>://...", the text before the :// is treated as the scheme, even if it's a made-up scheme that the module doesn't know about. This change may break code that worked around the old behaviour.
For example, Python 2.6.4 or 2.5 will return the following:

>>> import urlparse
>>> urlparse.urlsplit('invented://host/filename?query')
('invented', '', '//host/filename?query', '', '')

Python 2.7 (and Python 2.6.5) will return:

>>> import urlparse
>>> urlparse.urlsplit('invented://host/filename?query')
('invented', 'host', '/filename?query', '', '')

(Python 2.7 actually produces slightly different output, since it returns a named tuple instead of a standard tuple.)

The urlparse module also supports IPv6 literal addresses as defined by RFC 2732 (contributed by Senthil Kumaran; bpo-2987).

>>> urlparse.urlparse('http://[1080::8:800:200C:417A]/foo')
ParseResult(scheme='http', netloc='[1080::8:800:200C:417A]', path='/foo', params='', query='', fragment='')

New class: the WeakSet class in the weakref module is a set that only holds weak references to its elements; elements will be removed once there are no references pointing to them. (Originally implemented in Python 3.x by Raymond Hettinger, and backported to 2.7 by Michael Foord.)

The xml.etree.ElementTree library no longer escapes ampersands and angle brackets when outputting an XML processing instruction or comment. (Patch by Neil Muller; bpo-2746.)

The XML-RPC client and server, provided by the xmlrpclib and SimpleXMLRPCServer modules, have improved performance by supporting HTTP/1.1 keep-alive and by optionally using gzip encoding to compress the XML being exchanged. The gzip compression is controlled by the encode_threshold attribute of SimpleXMLRPCRequestHandler, which contains a size in bytes; responses larger than this will be compressed. (Contributed by Kristján Valur Jónsson; bpo-6267.)

The zipfile module's ZipFile now supports the context management protocol, so you can write with zipfile.ZipFile(...) as f:. (Contributed by Brian Curtin; bpo-5511.)

zipfile now also supports archiving empty directories and extracts them correctly.
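The threading.Event.wait() return value and the WeakSet class described above can be sketched together (the Token class is an invented example; the immediate removal after del relies on CPython's reference counting, as noted in the comment):

```python
import threading
import weakref

# Event.wait() now returns the internal flag.
evt = threading.Event()
assert evt.wait(0.01) is False     # timed out: flag was never set
evt.set()
assert evt.wait() is True          # flag already set: returns at once

# WeakSet drops members once nothing else references them.
class Token:
    pass

live = Token()
pool = weakref.WeakSet([live])
assert len(pool) == 1
del live          # on CPython, refcounting removes the entry immediately
assert len(pool) == 0
```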
(Fixed by Kuba Wieczorek; bpo-4710.) Reading files out of an archive is faster, and interleaving read() and readline() now works correctly. (Contributed by Nir Aides; bpo-7610.)

The is_zipfile() function now accepts a file object, in addition to the path names accepted in earlier versions. (Contributed by Gabriel Genellina; bpo-4756.)

The writestr() method now has an optional compress_type parameter that lets you override the default compression method specified in the ZipFile constructor. (Contributed by Ronald Oussoren; bpo-6003.)

New module: importlib¶

Python 3.1 includes the importlib package, a re-implementation of the logic underlying Python's import statement. importlib is useful for implementers of Python interpreters and for users who wish to write new importers that can participate in the import process. Python 2.7 doesn't contain the complete importlib package, but instead has a tiny subset that contains a single function, import_module().

import_module(name, package=None) imports a module. name is a string containing the module or package's name. It's possible to do relative imports by providing a string that begins with a . character, such as ..utils.errors. For relative imports, the package argument must be provided and is the name of the package that will be used as the anchor for the relative import.
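A quick sketch of import_module() as just described, using modules that exist in current Pythons rather than the 2.7-era ones from the original examples:

```python
from importlib import import_module

# An absolute import by dotted name.
json_mod = import_module('json')
assert json_mod.dumps({'a': 1}) == '{"a": 1}'

# A relative import: the leading dot is resolved against the
# package named by the second argument.
tree = import_module('.ElementTree', 'xml.etree')
assert tree.__name__ == 'xml.etree.ElementTree'
```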
import_module() both inserts the imported module into sys.modules and returns the module object.

Here are some examples:

>>> from importlib import import_module
>>> anydbm = import_module('anydbm')   # Standard absolute import
>>> anydbm
<module 'anydbm' from ...>
>>> # Relative import
>>> file_util = import_module('..file_util', 'distutils.command')
>>> file_util
<module 'distutils.file_util' from ...>

importlib was implemented by Brett Cannon and introduced in Python 3.1.

New module: sysconfig¶

The sysconfig module has been pulled out of the Distutils package, becoming a new top-level module in its own right. sysconfig provides functions for getting information about Python's build process: compiler switches, installation paths, the platform name, and whether Python is running from its source directory.

Some of the functions in the module are:

- get_config_var() returns variables from Python's Makefile and the pyconfig.h file.
- get_config_vars() returns a dictionary containing all of the configuration variables.
- get_path() returns the configured path for a particular type of module: the standard library, site-specific modules, platform-specific modules, etc.
- is_python_build() returns true if you're running a binary from a Python source tree, and false otherwise.

Consult the sysconfig documentation for more details and for a complete list of functions.

The Distutils package and sysconfig are now maintained by Tarek Ziadé, who has also started a Distutils2 package (source repository at https://hg.python.org/distutils2/) for developing a next-generation version of Distutils.

ttk: Themed Widgets for Tk¶

Tcl/Tk 8.5 includes a set of themed widgets that re-implement basic Tk widgets but have a more customizable appearance and can therefore more closely resemble the native platform's widgets.
This widget set was originally called Tile, but was renamed to Ttk (for "themed Tk") on being added to Tcl/Tk release 8.5.

To learn more, read the ttk module documentation. You may also wish to read the Tcl/Tk manual page describing the Ttk theme engine, available at https://www.tcl.tk/man/tcl8.5/TkCmd/ttk_intro.html. Some screenshots of the Python/Ttk code in use are at https://code.google.com/archive/p/python-ttk/wikis/Screenshots.wiki.

The tkinter.ttk module was written by Guilherme Polo and added in bpo-2983. An alternate version called Tile.py, written by Martin Franklin and maintained by Kevin Walzer, was proposed for inclusion in bpo-2618, but the authors argued that Guilherme Polo's work was more comprehensive.

Updated module: unittest¶

The unittest module was greatly enhanced; many new features were added. Most of these features were implemented by Michael Foord, unless otherwise noted. The enhanced version of the module is downloadable separately for use with Python versions 2.4 to 2.6, packaged as the unittest2 package.

When used from the command line, the module can automatically discover tests. It's not as fancy as py.test or nose, but provides a simple way to run tests kept within a set of package directories. For example, the following command will search the test/ subdirectory for any importable test files named test*.py:

python -m unittest discover -s test

Consult the unittest module documentation for more details. (Developed in bpo-6001.)

The main() function supports some other new options:

- -b or --buffer will buffer the standard output and standard error streams during each test. If the test passes, any resulting output will be discarded; on failure, the buffered output will be displayed.
- -c or --catch will cause the control-C interrupt to be handled more gracefully.
Instead of interrupting the test process immediately, the currently running test will be completed and then the partial results up to the interruption will be reported. If you're impatient, a second press of control-C will cause an immediate interruption.

This control-C handler tries to avoid causing problems when the code being tested or the tests being run have defined a signal handler of their own, by noticing that a signal handler was already set and calling it. If this doesn't work for you, there's a removeHandler() decorator that can be used to mark tests that should have the control-C handling disabled.

- -f or --failfast makes test execution stop immediately when a test fails instead of continuing to execute further tests. (Suggested by Cliff Dyer and implemented by Michael Foord; bpo-8074.)

The progress messages now show 'x' for expected failures and 'u' for unexpected successes when run in verbose mode. (Contributed by Benjamin Peterson.)

Test cases can raise the SkipTest exception to skip a test (bpo-1034053).

The error messages for assertEqual(), assertTrue(), and assertFalse() failures now provide more information. If you set the longMessage attribute of your TestCase classes to true, both the standard error message and any additional message you provide will be printed for failures. (Added by Michael Foord; bpo-5663.)

The assertRaises() method now returns a context handler when called without providing a callable object to run. For example, you can write this:

with self.assertRaises(KeyError):
    {}['foo']

(Implemented by Antoine Pitrou; bpo-4444.)

Module- and class-level setup and teardown fixtures are now supported. Modules can contain setUpModule() and tearDownModule() functions. Classes can have setUpClass() and tearDownClass() methods that must be defined as class methods (using @classmethod or equivalent).
These functions and methods are invoked when the test runner switches to a test case in a different module or class.

The methods addCleanup() and doCleanups() were added. addCleanup() lets you add cleanup functions that will be called unconditionally (after setUp() if setUp() fails, otherwise after tearDown()). This allows for much simpler resource allocation and deallocation during tests (bpo-5679).

A number of new methods were added that provide more specialized tests. Many of these methods were written by Google engineers for use in their test suites; Gregory P. Smith, Michael Foord, and GvR worked on merging them into Python's version of unittest.

- assertIsNone() and assertIsNotNone() take one expression and verify that the result is or is not None.
- assertIs() and assertIsNot() take two values and check whether the two values evaluate to the same object or not. (Added by Michael Foord; bpo-2578.)
- assertIsInstance() and assertNotIsInstance() check whether the resulting object is an instance of a particular class, or of one of a tuple of classes. (Added by Georg Brandl; bpo-7031.)
- assertGreater(), assertGreaterEqual(), assertLess(), and assertLessEqual() compare two quantities.
- assertMultiLineEqual() compares two strings, and if they're not equal, displays a helpful comparison that highlights the differences in the two strings.
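A minimal sketch exercising several of the new assertions, including assertRaises() in its context-manager form (the DemoTest class is invented for illustration; run here through a TextTestRunner so it works outside the command line):

```python
import unittest

class DemoTest(unittest.TestCase):
    def test_new_assertions(self):
        self.assertIsNone(None)
        self.assertIsNotNone(0)          # 0 is false, but it is not None
        x = object()
        self.assertIs(x, x)
        self.assertIsInstance(3, (int, float))
        self.assertGreater(2, 1)
        self.assertIn('a', 'cat')
        # assertRaises as a context manager:
        with self.assertRaises(KeyError):
            {}['foo']

suite = unittest.TestLoader().loadTestsFromTestCase(DemoTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```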
This comparison is now used by default when Unicode strings are compared with assertEqual().

- assertRegexpMatches() and assertNotRegexpMatches() check whether the first argument is a string matching or not matching the regular expression provided as the second argument (bpo-8038).
- assertRaisesRegexp() checks whether a particular exception is raised, and then also checks that the string representation of the exception matches the provided regular expression.
- assertIn() and assertNotIn() test whether first is or is not in second.
- assertItemsEqual() tests whether two provided sequences contain the same elements.
- assertSetEqual() compares whether two sets are equal, and only reports the differences between the sets in case of error.
- Similarly, assertListEqual() and assertTupleEqual() compare the specified types and explain any differences without necessarily printing their full values; these methods are now used by default when comparing lists and tuples using assertEqual(). More generally, assertSequenceEqual() compares two sequences and can optionally check whether both sequences are of a particular type.
- assertDictEqual() compares two dictionaries and reports the differences; it's now used by default when you compare two dictionaries using assertEqual().
- assertDictContainsSubset() checks whether all of the key/value pairs in first are found in second.
- assertAlmostEqual() and assertNotAlmostEqual() test whether first and second are approximately equal. These methods can either round the difference to an optionally specified number of places (the default is 7) and compare it to zero, or require the difference to be smaller than a supplied delta value.
- loadTestsFromName() properly honors the suiteClass attribute of the TestLoader. (Fixed by Mark Roddy; bpo-6866.)
- A new hook lets you extend the assertEqual() method to handle new data types. The addTypeEqualityFunc() method takes a type object and a function.
The function will be used when both of the objects being compared are of the specified type. This function should compare the two objects and raise an exception if they don't match; it's a good idea for the function to provide additional information about why the two objects aren't matching, much as the new sequence comparison methods do.

unittest.main() now takes an optional exit argument. If false, main() doesn't call sys.exit(), allowing main() to be used from the interactive interpreter. (Contributed by J. Pablo Fernández; bpo-3379.)

TestResult has new startTestRun() and stopTestRun() methods that are called immediately before and after a test run. (Contributed by Robert Collins; bpo-5728.)

With all these changes, the unittest.py module was becoming awkwardly large, so the module was turned into a package and the code split into several files (by Benjamin Peterson). This doesn't affect how the module is imported or used.

See also

https://web.archive.org/web/20210619163128/http://www.voidspace.org.uk/python/articles/unittest2.shtml
    Describes the new features, how to use them, and the rationale for various design decisions. (By Michael Foord.)

Updated module: ElementTree 1.3¶

The version of the ElementTree library included with Python was updated to version 1.3. Some of the new features are:

The various parsing functions now take a parser keyword argument giving an XMLParser instance that will be used. This makes it possible to override the file's internal encoding:

p = ET.XMLParser(encoding='utf-8')
t = ET.XML("""<root><child/></root>""", parser=p)

Errors in parsing XML now raise a ParseError exception, whose instances have a position attribute containing a (line, column) tuple giving the location of the problem.

ElementTree's code for converting trees to a string has been significantly reworked, making it roughly twice as fast in many cases.
The ElementTree.write() and Element.write() methods now have a method parameter that can be "xml" (the default), "html", or "text". HTML mode will output empty elements as <empty></empty> instead of <empty />, and text mode will skip over elements and only output the text chunks. If you set the tag attribute of an element to None but leave its children in place, the element will be omitted when the tree is written out, so you don't need to do more extensive rearrangement to remove a single element.

Namespace handling has also been improved. All xmlns: declarations are now output on the root element, not scattered throughout the resulting XML. You can set the default namespace for a tree by setting the default_namespace attribute and can register new prefixes with register_namespace(). In XML mode, you can use the true/false xml_declaration parameter to suppress the XML declaration.

New Element method: extend() appends the items from a sequence to the element's children. Elements themselves behave like sequences, so it's easy to move children from one element to another:

from xml.etree import ElementTree as ET
t = ET.XML("""<list>
  <item>1</item> <item>2</item> <item>3</item>
</list>""")
new = ET.XML('<root/>')
new.extend(t)
# Outputs <root><item>1</item>...
print ET.tostring(new)

New Element method: iter() yields the children of the element as a generator. It's also possible to write for child in elem: to loop over an element's children. The existing method getiterator() is now deprecated, as is getchildren() which constructs and returns a list of children.

New Element method: itertext() yields all chunks of text that are descendants of the element. For example:

t = ET.XML("""<list>
  <item>1</item> <item>2</item> <item>3</item>
</list>""")
# Outputs ['\n  ', '1', ' ', '2', ' ', '3', '\n']
print list(t.itertext())

Deprecated: using an element as a Boolean (i.e., if elem:) would return true if the element had any children, or false if there were no children. This behaviour is confusing – None is false, but so is a childless element?
\u2013 so it will now trigger a FutureWarning\n. In your code, you should be explicit: write len(elem) != 0\nif you\u2019re interested in the number of children, or elem is not None\n.\nFredrik Lundh develops ElementTree and produced the 1.3 version; you can read his article describing 1.3 at https://web.archive.org/web/20200703234532/http://effbot.org/zone/elementtree-13-intro.htm. Florent Xicluna updated the version included with Python, after discussions on python-dev and in bpo-6472.\nBuild and C API Changes\u00b6\nChanges to Python\u2019s build process and to the C API include:\nThe latest release of the GNU Debugger, GDB 7, can be scripted using Python. When you begin debugging an executable program P, GDB will look for a file named\nP-gdb.py\nand automatically read it. Dave Malcolm contributed a python-gdb.py\nthat adds a number of commands useful when debugging Python itself. For example, py-up\nand py-down\ngo up or down one Python stack frame, which usually corresponds to several C stack frames. py-print\nprints the value of a Python variable, and py-bt\nprints the Python stack trace. (Added as a result of bpo-8032.) If you use the\n.gdbinit\nfile provided with Python, the \u201cpyo\u201d macro in the 2.7 version now works correctly when the thread being debugged doesn\u2019t hold the GIL; the macro now acquires it before printing. (Contributed by Victor Stinner; bpo-3632.) Py_AddPendingCall()\nis now thread-safe, letting any worker thread submit notifications to the main Python thread. This is particularly useful for asynchronous IO operations. (Contributed by Kristj\u00e1n Valur J\u00f3nsson; bpo-4293.) New function:\nPyCode_NewEmpty()\ncreates an empty code object; only the filename, function name, and first line number are required. This is useful for extension modules that are attempting to construct a more useful traceback stack. Previously such extensions needed to call PyCode_New()\n, which had many more arguments. 
(Added by Jeffrey Yasskin.) New function:\nPyErr_NewExceptionWithDoc()\ncreates a new exception class, just as the existing PyErr_NewException()\ndoes, but takes an extra char *\nargument containing the docstring for the new exception class. (Added by \u2018lekma\u2019 on the Python bug tracker; bpo-7033.) New function:\nPyFrame_GetLineNumber()\ntakes a frame object and returns the line number that the frame is currently executing. Previously code would need to get the index of the bytecode instruction currently executing, and then look up the line number corresponding to that address. (Added by Jeffrey Yasskin.) New functions:\nPyLong_AsLongAndOverflow()\nand PyLong_AsLongLongAndOverflow()\napproximate a Python long integer as a C long or long long. If the number is too large to fit into the output type, an overflow flag is set and returned to the caller. (Contributed by Case Van Horsen; bpo-7528 and bpo-7767.) New function: stemming from the rewrite of string-to-float conversion, a new\nPyOS_string_to_double()\nfunction was added. The old PyOS_ascii_strtod()\nand PyOS_ascii_atof()\nfunctions are now deprecated. New function:\nPySys_SetArgvEx()\nsets the value of sys.argv\nand can optionally update sys.path\nto include the directory containing the script named by sys.argv[0]\n, depending on the value of an updatepath parameter. This function was added to close a security hole for applications that embed Python. The old function,\nPySys_SetArgv()\n, would always update sys.path\n, and sometimes it would add the current directory. 
This meant that, if you ran an application embedding Python in a directory controlled by someone else, attackers could put a Trojan-horse module in the directory (say, a file namedos.py\n) that your application would then import and run.If you maintain a C/C++ application that embeds Python, check whether you\u2019re calling\nPySys_SetArgv()\nand carefully consider whether the application should be usingPySys_SetArgvEx()\nwith updatepath set to false.Security issue reported as CVE 2008-5983; discussed in bpo-5753, and fixed by Antoine Pitrou.\nNew macros: the Python header files now define the following macros:\nPy_ISALNUM\n,Py_ISALPHA\n,Py_ISDIGIT\n,Py_ISLOWER\n,Py_ISSPACE\n,Py_ISUPPER\n,Py_ISXDIGIT\n,Py_TOLOWER\n, andPy_TOUPPER\n. All of these functions are analogous to the C standard macros for classifying characters, but ignore the current locale setting, because in several places Python needs to analyze characters in a locale-independent way. (Added by Eric Smith; bpo-5793.)Removed function:\nPyEval_CallObject()\nis now only available as a macro. A function version was being kept around to preserve ABI linking compatibility, but that was in 1997; it can certainly be deleted by now. (Removed by Antoine Pitrou; bpo-8276.)New format codes: the\nPyString_FromFormat()\n,PyString_FromFormatV()\n, andPyErr_Format()\nfunctions now accept%lld\nand%llu\nformat codes for displaying C\u2019s long long types. (Contributed by Mark Dickinson; bpo-7228.)The complicated interaction between threads and process forking has been changed. Previously, the child process created by\nos.fork()\nmight fail because the child is created with only a single thread running, the thread performing theos.fork()\n. If other threads were holding a lock, such as Python\u2019s import lock, when the fork was performed, the lock would still be marked as \u201cheld\u201d in the new process. 
But in the child process nothing would ever release the lock, since the other threads weren\u2019t replicated, and the child process would no longer be able to perform imports.Python 2.7 acquires the import lock before performing an\nos.fork()\n, and will also clean up any locks created using thethreading\nmodule. C extension modules that have internal locks, or that callfork()\nthemselves, will not benefit from this clean-up.(Fixed by Thomas Wouters; bpo-1590864.)\nThe\nPy_Finalize()\nfunction now calls the internalthreading._shutdown()\nfunction; this prevents some exceptions from being raised when an interpreter shuts down. (Patch by Adam Olsen; bpo-1722344.)When using the\nPyMemberDef\nstructure to define attributes of a type, Python will no longer let you try to delete or set aT_STRING_INPLACE\nattribute.Global symbols defined by the\nctypes\nmodule are now prefixed withPy\n, or with_ctypes\n. (Implemented by Thomas Heller; bpo-3102.)New configure option: the\n--with-system-expat\nswitch allows building thepyexpat\nmodule to use the system Expat library. (Contributed by Arfrever Frehtes Taifersar Arahesis; bpo-7609.)New configure option: the\n--with-valgrind\noption will now disable the pymalloc allocator, which is difficult for the Valgrind memory-error detector to analyze correctly. Valgrind will therefore be better at detecting memory leaks and overruns. (Contributed by James Henstridge; bpo-2422.)New configure option: you can now supply an empty string to\n--with-dbmliborder=\nin order to disable all of the various DBM modules. (Added by Arfrever Frehtes Taifersar Arahesis; bpo-6491.)The configure script now checks for floating-point rounding bugs on certain 32-bit Intel chips and defines a\nX87_DOUBLE_ROUNDING\npreprocessor definition. No code currently uses this definition, but it\u2019s available if anyone wishes to use it. (Added by Mark Dickinson; bpo-2937.)configure also now sets a\nLDCXXSHARED\nMakefile variable for supporting C++ linking. 
(Contributed by Arfrever Frehtes Taifersar Arahesis; bpo-1222585.) The build process now creates the necessary files for pkg-config support. (Contributed by Clinton Roy; bpo-3585.)\nThe build process now supports Subversion 1.7. (Contributed by Arfrever Frehtes Taifersar Arahesis; bpo-6094.)\nCapsules\u00b6\nPython 3.1 adds a new C datatype, PyCapsule\n, for providing a\nC API to an extension module. A capsule is essentially the holder of\na C void *\npointer, and is made available as a module attribute; for\nexample, the socket\nmodule\u2019s API is exposed as socket.CAPI\n,\nand unicodedata\nexposes ucnhash_CAPI\n. Other extensions\ncan import the module, access its dictionary to get the capsule\nobject, and then get the void *\npointer, which will usually point\nto an array of pointers to the module\u2019s various API functions.\nThere is an existing data type already used for this,\nPyCObject\n, but it doesn\u2019t provide type safety. Evil code\nwritten in pure Python could cause a segmentation fault by taking a\nPyCObject\nfrom module A and somehow substituting it for the\nPyCObject\nin module B. Capsules know their own name,\nand getting the pointer requires providing the name:\nvoid *vtable;\nif (!PyCapsule_IsValid(capsule, \"mymodule.CAPI\")) {\n    PyErr_SetString(PyExc_ValueError, \"argument type invalid\");\n    return NULL;\n}\nvtable = PyCapsule_GetPointer(capsule, \"mymodule.CAPI\");\nYou are assured that vtable\npoints to whatever you\u2019re expecting.\nIf a different capsule was passed in, PyCapsule_IsValid()\nwould\ndetect the mismatched name and return false. Refer to\nProviding a C API for an Extension Module for more information on using these objects.\nPython 2.7 now uses capsules internally to provide various\nextension-module APIs, but the PyCObject_AsVoidPtr()\nwas\nmodified to handle capsules, preserving compile-time compatibility\nwith the PyCObject\ninterface. 
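The name check described above can also be observed from Python via ctypes; this sketch uses the datetime module's capsule attribute as a convenient real-world example (any module that publishes a capsule would do):

```python
import ctypes
import datetime

# datetime publishes its C API as a capsule-valued module attribute.
capsule = datetime.datetime_CAPI

# Extracting the pointer requires the capsule's exact name, just as in C.
get_pointer = ctypes.pythonapi.PyCapsule_GetPointer
get_pointer.restype = ctypes.c_void_p
get_pointer.argtypes = [ctypes.py_object, ctypes.c_char_p]

vtable = get_pointer(capsule, b"datetime.datetime_CAPI")
assert vtable  # a non-NULL pointer to the module's API table
```

Passing any other name would leave the pointer unset and raise an error, which is exactly the type-safety guarantee capsules add over PyCObject.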
Use of\nPyCObject_AsVoidPtr()\nwill signal a\nPendingDeprecationWarning\n, which is silent by default.\nImplemented in Python 3.1 and backported to 2.7 by Larry Hastings; discussed in bpo-5630.\nPort-Specific Changes: Windows\u00b6\nThe\nmsvcrt\nmodule now contains some constants from thecrtassem.h\nheader file:CRT_ASSEMBLY_VERSION\n,VC_ASSEMBLY_PUBLICKEYTOKEN\n, andLIBRARIES_ASSEMBLY_NAME_PREFIX\n. (Contributed by David Cournapeau; bpo-4365.)The\n_winreg\nmodule for accessing the registry now implements theCreateKeyEx()\nandDeleteKeyEx()\nfunctions, extended versions of previously supported functions that take several extra arguments. TheDisableReflectionKey()\n,EnableReflectionKey()\n, andQueryReflectionKey()\nwere also tested and documented. (Implemented by Brian Curtin: bpo-7347.)The new\n_beginthreadex()\nAPI is used to start threads, and the native thread-local storage functions are now used. (Contributed by Kristj\u00e1n Valur J\u00f3nsson; bpo-3582.)The\nos.kill()\nfunction now works on Windows. The signal value can be the constantsCTRL_C_EVENT\n,CTRL_BREAK_EVENT\n, or any integer. The first two constants will send Control-C and Control-Break keystroke events to subprocesses; any other value will use theTerminateProcess()\nAPI. (Contributed by Miki Tebeka; bpo-1220212.)The\nos.listdir()\nfunction now correctly fails for an empty path. (Fixed by Hirokazu Yamamoto; bpo-5913.)The\nmimetypes\nmodule will now read the MIME database from the Windows registry when initializing. (Patch by Gabriel Genellina; bpo-4969.)\nPort-Specific Changes: Mac OS X\u00b6\nThe path\n/Library/Python/2.7/site-packages\nis now appended tosys.path\n, in order to share added packages between the system installation and a user-installed copy of the same version. 
(Changed by Ronald Oussoren; bpo-4865.)Changed in version 2.7.13: As of 2.7.13, this change was removed.\n/Library/Python/2.7/site-packages\n, the site-packages directory used by the Apple-supplied system Python 2.7 is no longer appended tosys.path\nfor user-installed Pythons such as from the python.org installers. As of macOS 10.12, Apple changed how the system site-packages directory is configured, which could cause installation of pip components, like setuptools, to fail. Packages installed for the system Python will no longer be shared with user-installed Pythons. (bpo-28440)\nPort-Specific Changes: FreeBSD\u00b6\nFreeBSD 7.1\u2019s\nSO_SETFIB\nconstant, used with thesocket()\nmethodsgetsockopt()\n/setsockopt()\nto select an alternate routing table, is now available in thesocket\nmodule. (Added by Kyle VanderBeek; bpo-8235.)\nOther Changes and Fixes\u00b6\nTwo benchmark scripts,\niobench\nandccbench\n, were added to theTools\ndirectory.iobench\nmeasures the speed of the built-in file I/O objects returned byopen()\nwhile performing various operations, andccbench\nis a concurrency benchmark that tries to measure computing throughput, thread switching latency, and IO processing bandwidth when performing several tasks using a varying number of threads.The\nTools/i18n/msgfmt.py\nscript now understands plural forms in.po\nfiles. (Fixed by Martin von L\u00f6wis; bpo-5464.)When importing a module from a\n.pyc\nor.pyo\nfile with an existing.py\ncounterpart, theco_filename\nattributes of the resulting code objects are overwritten when the original filename is obsolete. This can happen if the file has been renamed, moved, or is accessed through different paths. (Patch by Ziga Seilnacht and Jean-Paul Calderone; bpo-1180193.)The\nregrtest.py\nscript now takes a--randseed=\nswitch that takes an integer that will be used as the random seed for the-r\noption that executes tests in random order. 
The -r\noption also reports the seed that was used. (Added by Collin Winter.) Another\nregrtest.py\nswitch is -j\n, which takes an integer specifying how many tests run in parallel. This allows reducing the total runtime on multi-core machines. This option is compatible with several other options, including the -R\nswitch which is known to produce long runtimes. (Added by Antoine Pitrou, bpo-6152.) This can also be used with a new -F\nswitch that runs selected tests in a loop until they fail. (Added by Antoine Pitrou; bpo-7312.) When executed as a script, the\npy_compile.py\nmodule now accepts '-'\nas an argument, which will read standard input for the list of filenames to be compiled. (Contributed by Piotr O\u017carowski; bpo-8233.)\nPorting to Python 2.7\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code:\nThe\nrange()\nfunction processes its arguments more consistently; it will now call __int__()\non non-float, non-integer arguments that are supplied to it. (Fixed by Alexander Belopolsky; bpo-1533.) The string\nformat()\nmethod changed the default precision used for floating-point and complex numbers from 6 decimal places to 12, which matches the precision used by str()\n. (Changed by Eric Smith; bpo-5920.) Because of an optimization for the\nwith\nstatement, the special methods __enter__()\nand __exit__()\nmust belong to the object\u2019s type, and cannot be directly attached to the object\u2019s instance. This affects new-style classes (derived from object\n) and C extension types. (bpo-6101.) Due to a bug in Python 2.6, the exc_value parameter to\n__exit__()\nmethods was often the string representation of the exception, not an instance. This was fixed in 2.7, so exc_value will be an instance as expected. (Fixed by Florent Xicluna; bpo-7853.) When a restricted set of attributes were set using\n__slots__\n, deleting an unset attribute would not raise AttributeError\nas you would expect. 
(Fixed by Benjamin Peterson; bpo-7604.)\nIn the standard library:\nOperations with\ndatetime\ninstances that resulted in a year falling outside the supported range didn\u2019t always raise OverflowError\n. Such errors are now checked more carefully and will now raise the exception. (Reported by Mark Leander, patch by Anand B. Pillai and Alexander Belopolsky; bpo-7150.) When using\nDecimal\ninstances with a string\u2019s format()\nmethod, the default alignment was previously left-alignment. This has been changed to right-alignment, which might change the output of your programs. (Changed by Mark Dickinson; bpo-6857.) Comparisons involving a signaling NaN value (or\nsNAN\n) now signal InvalidOperation\ninstead of silently returning a true or false value depending on the comparison operator. Quiet NaN values (or NaN\n) are now hashable. (Fixed by Mark Dickinson; bpo-7279.) The\nxml.etree.ElementTree\nlibrary no longer escapes ampersands and angle brackets when outputting an XML processing instruction (which looks like\n) or comment (which looks like\n). (Patch by Neil Muller; bpo-2746.) The\nreadline()\nmethod of StringIO\nobjects now does nothing when a negative length is requested, as other file-like objects do. (bpo-7348.) The\nsyslog\nmodule will now use the value of sys.argv[0]\nas the identifier instead of the previous default value of 'python'\n. (Changed by Sean Reifschneider; bpo-8451.) The\ntarfile\nmodule\u2019s default error handling has changed, to no longer suppress fatal errors. The default error level was previously 0, which meant that errors would only result in a message being written to the debug log, but because the debug log is not activated by default, these errors go unnoticed. The default error level is now 1, which raises an exception if there\u2019s an error. 
(Changed by Lars Gust\u00e4bel; bpo-7357.) The\nurlparse\nmodule\u2019s urlsplit()\nnow handles unknown URL schemes in a fashion compliant with RFC 3986: if the URL is of the form\n\"://...\"\n, the text before the ://\nis treated as the scheme, even if it\u2019s a made-up scheme that the module doesn\u2019t know about. This change may break code that worked around the old behaviour. For example, Python 2.6.4 or 2.5 will return the following:\n>>> import urlparse >>> urlparse.urlsplit('invented://host/filename?query') ('invented', '', '//host/filename?query', '', '')\nPython 2.7 (and Python 2.6.5) will return:\n>>> import urlparse >>> urlparse.urlsplit('invented://host/filename?query') ('invented', 'host', '/filename?query', '', '')\n(Python 2.7 actually produces slightly different output, since it returns a named tuple instead of a standard tuple.)\nFor C extensions:\nC extensions that use integer format codes with the\nPyArg_Parse*\nfamily of functions will now raise a TypeError\nexception instead of triggering a DeprecationWarning\n(bpo-5080). Use the new\nPyOS_string_to_double()\nfunction instead of the old PyOS_ascii_strtod()\nand PyOS_ascii_atof()\nfunctions, which are now deprecated.\nFor applications that embed Python:\nThe\nPySys_SetArgvEx()\nfunction was added, letting applications close a security hole when the existing PySys_SetArgv()\nfunction was used. Check whether you\u2019re calling PySys_SetArgv()\nand carefully consider whether the application should be using PySys_SetArgvEx()\nwith updatepath set to false.\nNew Features Added to Python 2.7 Maintenance Releases\u00b6\nNew features may be added to Python 2.7 maintenance releases when the situation genuinely calls for it. 
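Returning to the urlsplit() change described above: in Python 3 the same RFC 3986 rules live in urllib.parse, and the result is the named tuple the 2.7 notes mention:

```python
from urllib.parse import urlsplit  # urlparse.urlsplit in Python 2

parts = urlsplit("invented://host/filename?query")

# The made-up scheme is split according to RFC 3986:
assert parts.scheme == "invented"
assert parts.netloc == "host"
assert parts.path == "/filename"
assert parts.query == "query"
```

Code that special-cased unknown schemes to work around the old behaviour can simply read the named fields instead.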
Any such additions must go through the Python Enhancement Proposal process, and make a compelling case for why they can\u2019t be adequately addressed by either adding the new feature solely to Python 3, or else by publishing it on the Python Package Index.\nIn addition to the specific proposals listed below, there is a general\nexemption allowing new -3\nwarnings to be added in any Python 2.7\nmaintenance release.\nTwo new environment variables for debug mode\u00b6\nIn debug mode, the [xxx refs]\nstatistic is not written by default, the\nPYTHONSHOWREFCOUNT\nenvironment variable now must also be set.\n(Contributed by Victor Stinner; bpo-31733.)\nWhen Python is compiled with COUNT_ALLOC\ndefined, allocation counts are no\nlonger dumped by default anymore: the PYTHONSHOWALLOCCOUNT\nenvironment\nvariable must now also be set. Moreover, allocation counts are now dumped into\nstderr, rather than stdout. (Contributed by Victor Stinner; bpo-31692.)\nAdded in version 2.7.15.\nPEP 434: IDLE Enhancement Exception for All Branches\u00b6\nPEP 434 describes a general exemption for changes made to the IDLE development environment shipped along with Python. This exemption makes it possible for the IDLE developers to provide a more consistent user experience across all supported versions of Python 2 and 3.\nFor details of any IDLE changes, refer to the NEWS file for the specific release.\nPEP 466: Network Security Enhancements for Python 2.7\u00b6\nPEP 466 describes a number of network security enhancement proposals that have been approved for inclusion in Python 2.7 maintenance releases, with the first of those changes appearing in the Python 2.7.7 release.\nPEP 466 related features added in Python 2.7.7:\nhmac.compare_digest()\nwas backported from Python 3 to make a timing attack resistant comparison operation available to Python 2 applications. (Contributed by Alex Gaynor; bpo-21306.)OpenSSL 1.0.1g was upgraded in the official Windows installers published on python.org. 
(Contributed by Zachary Ware; bpo-21462.)\nPEP 466 related features added in Python 2.7.8:\nhashlib.pbkdf2_hmac()\nwas backported from Python 3 to make a hashing algorithm suitable for secure password storage broadly available to Python 2 applications. (Contributed by Alex Gaynor; bpo-21304.)OpenSSL 1.0.1h was upgraded for the official Windows installers published on python.org. (Contributed by Zachary Ware in bpo-21671 for CVE 2014-0224.)\nPEP 466 related features added in Python 2.7.9:\nMost of Python 3.4\u2019s\nssl\nmodule was backported. This meansssl\nnow supports Server Name Indication, TLS1.x settings, access to the platform certificate store, theSSLContext\nclass, and other features. (Contributed by Alex Gaynor and David Reid; bpo-21308.)Refer to the \u201cVersion added: 2.7.9\u201d notes in the module documentation for specific details.\nos.urandom()\nwas changed to cache a file descriptor to/dev/urandom\ninstead of reopening/dev/urandom\non every call. (Contributed by Alex Gaynor; bpo-21305.)hashlib.algorithms_guaranteed\nandhashlib.algorithms_available\nwere backported from Python 3 to make it easier for Python 2 applications to select the strongest available hash algorithm. (Contributed by Alex Gaynor in bpo-21307)\nPEP 477: Backport ensurepip (PEP 453) to Python 2.7\u00b6\nPEP 477 approves the inclusion of the PEP 453 ensurepip module and the improved documentation that was enabled by it in the Python 2.7 maintenance releases, appearing first in the Python 2.7.9 release.\nBootstrapping pip By Default\u00b6\nThe new ensurepip\nmodule (defined in PEP 453) provides a standard\ncross-platform mechanism to bootstrap the pip installer into Python\ninstallations. 
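The hashlib.pbkdf2_hmac() backport described above is the intended tool for password storage; a minimal sketch (the password literal is illustrative):

```python
import hashlib
import os

# Derive a key from a password; store the random salt alongside the hash.
salt = os.urandom(16)
dk = hashlib.pbkdf2_hmac("sha256", b"correct horse battery", salt, 100000)

assert len(dk) == hashlib.sha256().digest_size  # 32 bytes for SHA-256
```

Verification later repeats the derivation with the stored salt and compares the results (ideally with hmac.compare_digest()).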
The version of pip\nincluded with Python 2.7.9 is pip\n1.5.6, and future 2.7.x maintenance releases will update the bundled version to\nthe latest version of pip\nthat is available at the time of creating the\nrelease candidate.\nBy default, the commands pip\n, pipX\n, and pipX.Y\nwill be installed on\nall platforms (where X.Y stands for the version of the Python installation),\nalong with the pip\nPython package and its dependencies.\nFor CPython source builds on POSIX systems,\nthe make install\nand make altinstall\ncommands do not bootstrap pip\nby default. This behaviour can be controlled through configure options, and\noverridden through Makefile options.\nOn Windows and Mac OS X, the CPython installers now default to installing\npip\nalong with CPython itself (users may opt out of installing it\nduring the installation process). Windows users will need to opt in to the\nautomatic PATH\nmodifications to have pip\navailable from the command\nline by default; otherwise it can still be accessed through the Python\nlauncher for Windows as py -m pip\n.\nAs discussed in the PEP, platform packagers may choose not to install these commands by default, as long as, when invoked, they provide clear and simple directions on how to install them on that platform (usually using the system package manager).\nDocumentation Changes\u00b6\nAs part of this change, the Installing Python Modules and Distributing Python Modules sections of the documentation have been completely redesigned as short getting started and FAQ documents. 
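The bundled-pip arrangement described above can be inspected programmatically; on Python 3, where ensurepip is always in the standard library, this reports the pip version the interpreter would bootstrap:

```python
import ensurepip

# The pip version bundled with this interpreter's ensurepip;
# `python -m ensurepip` performs the actual installation.
print(ensurepip.version())
```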
Most packaging documentation has now been moved out to the Python Packaging Authority-maintained Python Packaging User Guide and the documentation of the individual projects.\nHowever, as this migration is currently still incomplete, the legacy versions of those guides remain available as Installing Python Modules (Legacy version) and Distributing Python Modules (Legacy version).\nSee also\n- PEP 453 \u2013 Explicit bootstrapping of pip in Python installations\nPEP written by Donald Stufft and Nick Coghlan, implemented by Donald Stufft, Nick Coghlan, Martin von L\u00f6wis and Ned Deily.\nPEP 476: Enabling certificate verification by default for stdlib http clients\u00b6\nPEP 476 updated httplib\nand modules which use it, such as\nurllib2\nand xmlrpclib\n, to now\nverify that the server\npresents a certificate which is signed by a Certificate Authority in the\nplatform trust store and whose hostname matches the hostname being requested\nby default, significantly improving security for many applications. This\nchange was made in the Python 2.7.9 release.\nApplications which require the previous behavior can pass an alternate context:\nimport urllib2\nimport ssl\n# This disables all verification\ncontext = ssl._create_unverified_context()\n# This allows using a specific certificate for the host, which doesn't need\n# to be in the trust store\ncontext = ssl.create_default_context(cafile=\"/path/to/file.crt\")\nurllib2.urlopen(\"https://invalid-cert\", context=context)\nPEP 493: HTTPS verification migration tools for Python 2.7\u00b6\nPEP 493 provides additional migration tools to support a more incremental infrastructure upgrade process for environments containing applications and services relying on the historically permissive processing of server certificates when establishing client HTTPS connections. 
These additions were made in the Python 2.7.12 release.\nThese tools are intended for use in cases where affected applications and services can\u2019t be modified to explicitly pass a more permissive SSL context when establishing the connection.\nFor applications and services which can\u2019t be modified at all, the new\nPYTHONHTTPSVERIFY\nenvironment variable may be set to 0\nto revert an\nentire Python process back to the default permissive behaviour of Python 2.7.8\nand earlier.\nFor cases where the connection establishment code can\u2019t be modified, but the\noverall application can be, the new ssl._https_verify_certificates()\nfunction can be used to adjust the default behaviour at runtime.\nNew make regen-all\nbuild target\u00b6\nTo simplify cross-compilation, and to ensure that CPython can reliably be compiled without requiring an existing version of Python to already be available, the autotools-based build system no longer attempts to implicitly recompile generated files based on file modification times.\nInstead, a new make regen-all\ncommand has been added to force regeneration\nof these files when desired (e.g. after an initial version of Python has\nalready been built based on the pregenerated versions).\nMore selective regeneration targets are also defined - see Makefile.pre.in for details.\n(Contributed by Victor Stinner in bpo-23404.)\nAdded in version 2.7.14.\nRemoval of make touch\nbuild target\u00b6\nThe make touch\nbuild target previously used to request implicit regeneration\nof generated files by updating their modification times has been removed.\nIt has been replaced by the new make regen-all\ntarget.\n(Contributed by Victor Stinner in bpo-23404.)\nChanged in version 2.7.14.\nAcknowledgements\u00b6\nThe author would like to thank the following people for offering suggestions, corrections and assistance with various drafts of this article: Nick Coghlan, Philip Jenvey, Ryan Lovett, R. 
David Murray, Hugh Secker-Walker.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 25383}
{"url": "https://docs.python.org/3/library/termios.html", "title": "termios \u2014 POSIX style tty control", "content": "termios\n\u2014 POSIX style tty control\u00b6\nThis module provides an interface to the POSIX calls for tty I/O control. For a complete description of these calls, see termios(3) Unix manual page. It is only available for those Unix versions that support POSIX termios style tty I/O control configured during installation.\nAvailability: Unix.\nAll functions in this module take a file descriptor fd as their first\nargument. 
This can be an integer file descriptor, such as returned by\nsys.stdin.fileno()\n, or a file object, such as sys.stdin\nitself.\nThis module also defines all the constants needed to work with the functions provided here; these have the same name as their counterparts in C. Please refer to your system documentation for more information on using these terminal control interfaces.\nThe module defines the following functions:\n- termios.tcgetattr(fd)\u00b6\nReturn a list containing the tty attributes for file descriptor fd, as follows:\n[iflag, oflag, cflag, lflag, ispeed, ospeed, cc]\nwhere cc is a list of the tty special characters (each a string of length 1, except the items with indices VMIN\nand VTIME\n, which are integers when these fields are defined). The interpretation of the flags and the speeds as well as the indexing in the cc array must be done using the symbolic constants defined in the termios\nmodule.\n- termios.tcsetattr(fd, when, attributes)\u00b6\nSet the tty attributes for file descriptor fd from the attributes, which is a list like the one returned by\ntcgetattr()\n. The when argument determines when the attributes are changed:\n- termios.TCSANOW\u00b6\nChange attributes immediately.\n- termios.TCSADRAIN\u00b6\nChange attributes after transmitting all queued output.\n- termios.TCSAFLUSH\u00b6\nChange attributes after transmitting all queued output and discarding all queued input.\n- termios.tcsendbreak(fd, duration)\u00b6\nSend a break on file descriptor fd. A zero duration sends a break for 0.25\u20130.5 seconds; a nonzero duration has a system dependent meaning.\n- termios.tcdrain(fd)\u00b6\nWait until all output written to file descriptor fd has been transmitted.\n- termios.tcflush(fd, queue)\u00b6\nDiscard queued data on file descriptor fd. 
The queue selector specifies which queue:\nTCIFLUSH\nfor the input queue, TCOFLUSH\nfor the output queue, or TCIOFLUSH\nfor both queues.\n- termios.tcflow(fd, action)\u00b6\nSuspend or resume input or output on file descriptor fd. The action argument can be\nTCOOFF\nto suspend output, TCOON\nto restart output, TCIOFF\nto suspend input, or TCION\nto restart input.\n- termios.tcgetwinsize(fd)\u00b6\nReturn a tuple\n(ws_row, ws_col)\ncontaining the tty window size for file descriptor fd. Requires termios.TIOCGWINSZ\nor termios.TIOCGSIZE\n. Added in version 3.11.\n- termios.tcsetwinsize(fd, winsize)\u00b6\nSet the tty window size for file descriptor fd from winsize, which is a two-item tuple\n(ws_row, ws_col)\nlike the one returned by tcgetwinsize()\n. Requires at least one of the pairs (termios.TIOCGWINSZ\n,termios.TIOCSWINSZ\n); (termios.TIOCGSIZE\n,termios.TIOCSSIZE\n) to be defined. Added in version 3.11.\nSee also\n- Module\ntty\nConvenience functions for common terminal control operations.\nExample\u00b6\nHere\u2019s a function that prompts for a password with echoing turned off. Note the\ntechnique using a separate tcgetattr()\ncall and a try\n\u2026\nfinally\nstatement to ensure that the old tty attributes are restored\nexactly no matter what happens:\ndef getpass(prompt=\"Password: \"):\n    import termios, sys\n    fd = sys.stdin.fileno()\n    old = termios.tcgetattr(fd)\n    new = termios.tcgetattr(fd)\n    new[3] = new[3] & ~termios.ECHO  # lflags\n    try:\n        termios.tcsetattr(fd, termios.TCSADRAIN, new)\n        passwd = input(prompt)\n    finally:\n        termios.tcsetattr(fd, termios.TCSADRAIN, old)\n    return passwd", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 910} +{"url": "https://docs.python.org/3/extending/building.html", "title": "Building C and C++ Extensions", "content": "4.
Building C and C++ Extensions\u00b6\nA C extension for CPython is a shared library (for example, a .so\nfile on\nLinux, .pyd\non Windows), which exports an initialization function.\nSee Defining extension modules for details.\n4.1. Building C and C++ Extensions with setuptools\u00b6\nBuilding, packaging and distributing extension modules is best done with third-party tools, and is out of scope of this document. One suitable tool is Setuptools, whose documentation can be found at https://setuptools.pypa.io/en/latest/setuptools.html.\nThe distutils\nmodule, which was included in the standard library\nuntil Python 3.12, is now maintained as part of Setuptools.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 162} +{"url": "https://docs.python.org/3/library/email.examples.html", "title": "email: Examples", "content": "email\n: Examples\u00b6\nHere are a few examples of how to use the email\npackage to read, write,\nand send simple email messages, as well as more complex MIME messages.\nFirst, let\u2019s see how to create and send a simple text message (both the text content and the addresses may contain unicode characters):\n# Import smtplib for the actual sending function\nimport smtplib\n# Import the email modules we'll need\nfrom email.message import EmailMessage\n# Open the plain text file whose name is in textfile for reading.\nwith open(textfile) as fp:\n    # Create a text/plain message\n    msg = EmailMessage()\n    msg.set_content(fp.read())\n# me == the sender's email address\n# you == the recipient's email address\nmsg['Subject'] = f'The contents of {textfile}'\nmsg['From'] = me\nmsg['To'] = you\n# Send the message via our own SMTP server.\ns = smtplib.SMTP('localhost')\ns.send_message(msg)\ns.quit()\nParsing RFC 822 headers can easily be done by using the classes\nfrom the parser\nmodule:\n# Import the email modules we'll need\n#from email.parser import BytesParser\nfrom email.parser import Parser\nfrom email.policy import default\n# 
If the e-mail headers are in a file, uncomment these two lines:\n# with open(messagefile, 'rb') as fp:\n#     headers = BytesParser(policy=default).parse(fp)\n# Or for parsing headers in a string (this is an uncommon operation), use:\nheaders = Parser(policy=default).parsestr(\n'From: Foo Bar <user@example.com>\\n'\n'To: <someone_else@example.com>\\n'\n'Subject: Test message\\n'\n'\\n'\n'Body would go here\\n')\n# Now the header items can be accessed as a dictionary:\nprint('To: {}'.format(headers['to']))\nprint('From: {}'.format(headers['from']))\nprint('Subject: {}'.format(headers['subject']))\n# You can also access the parts of the addresses:\nprint('Recipient username: {}'.format(headers['to'].addresses[0].username))\nprint('Sender name: {}'.format(headers['from'].addresses[0].display_name))\nHere\u2019s an example of how to send a MIME message containing a bunch of family pictures that may be residing in a directory:\n# Import smtplib for the actual sending function.\nimport smtplib\n# Here are the email package modules we'll need.\nfrom email.message import EmailMessage\n# Create the container email message.\nmsg = EmailMessage()\nmsg['Subject'] = 'Our family reunion'\n# me == the sender's email address\n# family = the list of all recipients' email addresses\nmsg['From'] = me\nmsg['To'] = ', '.join(family)\nmsg.preamble = 'You will not see this in a MIME-aware mail reader.\\n'\n# Open the files in binary mode. 
You can also omit the subtype\n# if you want MIMEImage to guess it.\nfor file in pngfiles:\nwith open(file, 'rb') as fp:\nimg_data = fp.read()\nmsg.add_attachment(img_data, maintype='image',\nsubtype='png')\n# Send the email via our own SMTP server.\nwith smtplib.SMTP('localhost') as s:\ns.send_message(msg)\nHere\u2019s an example of how to send the entire contents of a directory as an email message: [1]\n#!/usr/bin/env python3\n\"\"\"Send the contents of a directory as a MIME message.\"\"\"\nimport os\nimport smtplib\n# For guessing MIME type based on file name extension\nimport mimetypes\nfrom argparse import ArgumentParser\nfrom email.message import EmailMessage\nfrom email.policy import SMTP\ndef main():\nparser = ArgumentParser(description=\"\"\"\\\nSend the contents of a directory as a MIME message.\nUnless the -o option is given, the email is sent by forwarding to your local\nSMTP server, which then does the normal delivery process. Your local machine\nmust be running an SMTP server.\n\"\"\")\nparser.add_argument('-d', '--directory',\nhelp=\"\"\"Mail the contents of the specified directory,\notherwise use the current directory. 
Only the regular\nfiles in the directory are sent, and we don't recurse to\nsubdirectories.\"\"\")\nparser.add_argument('-o', '--output',\nmetavar='FILE',\nhelp=\"\"\"Print the composed message to FILE instead of\nsending the message to the SMTP server.\"\"\")\nparser.add_argument('-s', '--sender', required=True,\nhelp='The value of the From: header (required)')\nparser.add_argument('-r', '--recipient', required=True,\naction='append', metavar='RECIPIENT',\ndefault=[], dest='recipients',\nhelp='A To: header value (at least one required)')\nargs = parser.parse_args()\ndirectory = args.directory\nif not directory:\ndirectory = '.'\n# Create the message\nmsg = EmailMessage()\nmsg['Subject'] = f'Contents of directory {os.path.abspath(directory)}'\nmsg['To'] = ', '.join(args.recipients)\nmsg['From'] = args.sender\nmsg.preamble = 'You will not see this in a MIME-aware mail reader.\\n'\nfor filename in os.listdir(directory):\npath = os.path.join(directory, filename)\nif not os.path.isfile(path):\ncontinue\n# Guess the content type based on the file's extension. 
Encoding\n# will be ignored, although we should check for simple things like\n# gzip'd or compressed files.\nctype, encoding = mimetypes.guess_file_type(path)\nif ctype is None or encoding is not None:\n# No guess could be made, or the file is encoded (compressed), so\n# use a generic bag-of-bits type.\nctype = 'application/octet-stream'\nmaintype, subtype = ctype.split('/', 1)\nwith open(path, 'rb') as fp:\nmsg.add_attachment(fp.read(),\nmaintype=maintype,\nsubtype=subtype,\nfilename=filename)\n# Now send or store the message\nif args.output:\nwith open(args.output, 'wb') as fp:\nfp.write(msg.as_bytes(policy=SMTP))\nelse:\nwith smtplib.SMTP('localhost') as s:\ns.send_message(msg)\nif __name__ == '__main__':\nmain()\nHere\u2019s an example of how to unpack a MIME message like the one above, into a directory of files:\n#!/usr/bin/env python3\n\"\"\"Unpack a MIME message into a directory of files.\"\"\"\nimport os\nimport email\nimport mimetypes\nfrom email.policy import default\nfrom argparse import ArgumentParser\ndef main():\nparser = ArgumentParser(description=\"\"\"\\\nUnpack a MIME message into a directory of files.\n\"\"\")\nparser.add_argument('-d', '--directory', required=True,\nhelp=\"\"\"Unpack the MIME message into the named\ndirectory, which will be created if it doesn't already\nexist.\"\"\")\nparser.add_argument('msgfile')\nargs = parser.parse_args()\nwith open(args.msgfile, 'rb') as fp:\nmsg = email.message_from_binary_file(fp, policy=default)\ntry:\nos.mkdir(args.directory)\nexcept FileExistsError:\npass\ncounter = 1\nfor part in msg.walk():\n# multipart/* are just containers\nif part.get_content_maintype() == 'multipart':\ncontinue\n# Applications should really sanitize the given filename so that an\n# email message can't be used to overwrite important files\nfilename = part.get_filename()\nif not filename:\next = mimetypes.guess_extension(part.get_content_type())\nif not ext:\n# Use a generic bag-of-bits extension\next = '.bin'\nfilename = 
f'part-{counter:03d}{ext}'\ncounter += 1\nwith open(os.path.join(args.directory, filename), 'wb') as fp:\nfp.write(part.get_payload(decode=True))\nif __name__ == '__main__':\nmain()\nHere\u2019s an example of how to create an HTML message with an alternative plain text version. To make things a bit more interesting, we include a related image in the html part, and we save a copy of what we are going to send to disk, as well as sending it.\n#!/usr/bin/env python3\nimport smtplib\nfrom email.message import EmailMessage\nfrom email.headerregistry import Address\nfrom email.utils import make_msgid\n# Create the base text message.\nmsg = EmailMessage()\nmsg['Subject'] = \"Pourquoi pas des asperges pour ce midi ?\"\nmsg['From'] = Address(\"Pep\u00e9 Le Pew\", \"pepe\", \"example.com\")\nmsg['To'] = (Address(\"Penelope Pussycat\", \"penelope\", \"example.com\"),\nAddress(\"Fabrette Pussycat\", \"fabrette\", \"example.com\"))\nmsg.set_content(\"\"\"\\\nSalut!\nCette recette [1] sera s\u00fbrement un tr\u00e8s bon repas.\n[1] http://www.yummly.com/recipe/Roasted-Asparagus-Epicurious-203718\n--Pep\u00e9\n\"\"\")\n# Add the html version. This converts the message into a multipart/alternative\n# container, with the original text message as the first part and the new html\n# message as the second part.\nasparagus_cid = make_msgid()\nmsg.add_alternative(\"\"\"\\\n\n\n\n

<html>
  <head></head>
  <body>
    <p>Salut!</p>
    <p>Cette
        <a href=\"http://www.yummly.com/recipe/Roasted-Asparagus-Epicurious-203718\">
            recette
        </a> sera s\u00fbrement un tr\u00e8s bon repas.
    </p>
    <img src=\"cid:{asparagus_cid}\" />
  </body>
</html>
\n\n\n\n\"\"\".format(asparagus_cid=asparagus_cid[1:-1]), subtype='html')\n# note that we needed to peel the <> off the msgid for use in the html.\n# Now add the related image to the html part.\nwith open(\"roasted-asparagus.jpg\", 'rb') as img:\nmsg.get_payload()[1].add_related(img.read(), 'image', 'jpeg',\ncid=asparagus_cid)\n# Make a local copy of what we are going to send.\nwith open('outgoing.msg', 'wb') as f:\nf.write(bytes(msg))\n# Send the message via local SMTP server.\nwith smtplib.SMTP('localhost') as s:\ns.send_message(msg)\nIf we were sent the message from the last example, here is one way we could process it:\nimport os\nimport sys\nimport tempfile\nimport mimetypes\nimport webbrowser\n# Import the email modules we'll need\nfrom email import policy\nfrom email.parser import BytesParser\ndef magic_html_parser(html_text, partfiles):\n\"\"\"Return safety-sanitized html linked to partfiles.\nRewrite the href=\"cid:....\" attributes to point to the filenames in partfiles.\nThough not trivial, this should be possible using html.parser.\n\"\"\"\nraise NotImplementedError(\"Add the magic needed\")\n# In a real program you'd get the filename from the arguments.\nwith open('outgoing.msg', 'rb') as fp:\nmsg = BytesParser(policy=policy.default).parse(fp)\n# Now the header items can be accessed as a dictionary, and any non-ASCII will\n# be converted to unicode:\nprint('To:', msg['to'])\nprint('From:', msg['from'])\nprint('Subject:', msg['subject'])\n# If we want to print a preview of the message content, we can extract whatever\n# the least formatted payload is and print the first three lines. 
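The body-part selection that the surrounding example relies on (get_body() with a preferencelist) can be tried on a small standalone two-part message; a self-contained sketch, separate from the original examples:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg.set_content("plain text body")                        # text/plain part
msg.add_alternative("<p>html body</p>", subtype="html")   # now multipart/alternative

# The default preference list returns the "richest" part: html here.
assert msg.get_body().get_content_type() == "text/html"

# Asking for plain first returns the text/plain alternative instead.
simplest = msg.get_body(preferencelist=("plain", "html"))
assert simplest.get_content_type() == "text/plain"
assert simplest.get_content() == "plain text body\n"      # set_content() appends a newline
```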
Of course,\n# if the message has no plain text part printing the first three lines of html\n# is probably useless, but this is just a conceptual example.\nsimplest = msg.get_body(preferencelist=('plain', 'html'))\nprint()\nprint(''.join(simplest.get_content().splitlines(keepends=True)[:3]))\nans = input(\"View full message?\")\nif ans.lower()[0] == 'n':\nsys.exit()\n# We can extract the richest alternative in order to display it:\nrichest = msg.get_body()\npartfiles = {}\nif richest['content-type'].maintype == 'text':\nif richest['content-type'].subtype == 'plain':\nfor line in richest.get_content().splitlines():\nprint(line)\nsys.exit()\nelif richest['content-type'].subtype == 'html':\nbody = richest\nelse:\nprint(\"Don't know how to display {}\".format(richest.get_content_type()))\nsys.exit()\nelif richest['content-type'].content_type == 'multipart/related':\nbody = richest.get_body(preferencelist=('html'))\nfor part in richest.iter_attachments():\nfn = part.get_filename()\nif fn:\nextension = os.path.splitext(part.get_filename())[1]\nelse:\nextension = mimetypes.guess_extension(part.get_content_type())\nwith tempfile.NamedTemporaryFile(suffix=extension, delete=False) as f:\nf.write(part.get_content())\n# again strip the <> to go from email form of cid to html form.\npartfiles[part['content-id'][1:-1]] = f.name\nelse:\nprint(\"Don't know how to display {}\".format(richest.get_content_type()))\nsys.exit()\nwith tempfile.NamedTemporaryFile(mode='w', delete=False) as f:\nf.write(magic_html_parser(body.get_content(), partfiles))\nwebbrowser.open(f.name)\nos.remove(f.name)\nfor fn in partfiles.values():\nos.remove(fn)\n# Of course, there are lots of email messages that could break this simple\n# minded program, but it will handle the most common ones.\nUp to the prompt, the output from the above is:\nTo: Penelope Pussycat , Fabrette Pussycat \nFrom: Pep\u00e9 Le Pew \nSubject: Pourquoi pas des asperges pour ce midi ?\nSalut!\nCette recette [1] sera s\u00fbrement un 
tr\u00e8s bon repas.\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2908} +{"url": "https://docs.python.org/3/c-api/call.html", "title": "Call Protocol", "content": "Call Protocol\u00b6\nCPython supports two different calling protocols: tp_call and vectorcall.\nThe tp_call Protocol\u00b6\nInstances of classes that set tp_call\nare callable.\nThe signature of the slot is:\nPyObject *tp_call(PyObject *callable, PyObject *args, PyObject *kwargs);\nA call is made using a tuple for the positional arguments\nand a dict for the keyword arguments, similarly to\ncallable(*args, **kwargs)\nin Python code.\nargs must be non-NULL (use an empty tuple if there are no arguments)\nbut kwargs may be NULL if there are no
keyword arguments.\nThis convention is not only used by tp_call:\ntp_new\nand tp_init\nalso pass arguments this way.\nTo call an object, use PyObject_Call()\nor another\ncall API.\nThe Vectorcall Protocol\u00b6\nAdded in version 3.9.\nThe vectorcall protocol was introduced in PEP 590 as an additional protocol for making calls more efficient.\nAs a rule of thumb, CPython will prefer vectorcall for internal calls\nif the callable supports it. However, this is not a hard rule.\nAdditionally, some third-party extensions use tp_call directly\n(rather than using PyObject_Call()\n).\nTherefore, a class supporting vectorcall must also implement\ntp_call\n.\nMoreover, the callable must behave the same\nregardless of which protocol is used.\nThe recommended way to achieve this is by setting\ntp_call\nto PyVectorcall_Call()\n.\nThis bears repeating:\nWarning\nA class supporting vectorcall must also implement\ntp_call\nwith the same semantics.\nChanged in version 3.12: The Py_TPFLAGS_HAVE_VECTORCALL\nflag is now removed from a class\nwhen the class\u2019s __call__()\nmethod is reassigned.\n(This internally sets tp_call\nonly, and thus\nmay make it behave differently than the vectorcall function.)\nIn earlier Python versions, vectorcall should only be used with\nimmutable\nor static types.\nA class should not implement vectorcall if that would be slower than tp_call.
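The tp_call convention described above (positional arguments in a tuple, keyword arguments in a dict) can be poked at from within Python through ctypes.pythonapi. This is an illustrative sketch only, not how extensions should call the API; note that a py_object return value from ctypes leaks one reference, which is harmless in a throwaway demo:

```python
import ctypes

api = ctypes.pythonapi
api.PyObject_Call.restype = ctypes.py_object
api.PyObject_Call.argtypes = [ctypes.py_object, ctypes.py_object, ctypes.py_object]

# Exactly like divmod(*args, **kwargs) at the Python level.
assert api.PyObject_Call(divmod, (7, 3), {}) == (2, 1)

# Keyword arguments travel in the dict, as in callable(*args, **kwargs).
assert api.PyObject_Call(int, ("ff",), {"base": 16}) == 255
```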
For example, if the callee needs to convert the arguments to an args tuple and kwargs dict anyway, then there is no point in implementing vectorcall.\nClasses can implement the vectorcall protocol by enabling the\nPy_TPFLAGS_HAVE_VECTORCALL\nflag and setting\ntp_vectorcall_offset\nto the offset inside the\nobject structure where a vectorcallfunc appears.\nThis is a pointer to a function with the following signature:\n-\ntypedef PyObject *(*vectorcallfunc)(PyObject *callable, PyObject *const *args, size_t nargsf, PyObject *kwnames)\u00b6\n- Part of the Stable ABI since version 3.12.\ncallable is the object being called.\n- args is a C array consisting of the positional arguments followed by the\nvalues of the keyword arguments. This can be NULL if there are no arguments.\n- nargsf is the number of positional arguments plus possibly the\nPY_VECTORCALL_ARGUMENTS_OFFSET\nflag. To get the actual number of positional arguments from nargsf, usePyVectorcall_NARGS()\n.\n- kwnames is a tuple containing the names of the keyword arguments;\nin other words, the keys of the kwargs dict. These names must be strings (instances of\nstr\nor a subclass) and they must be unique. If there are no keyword arguments, then kwnames can instead be NULL.\n-\nPY_VECTORCALL_ARGUMENTS_OFFSET\u00b6\n- Part of the Stable ABI since version 3.12.\nIf this flag is set in a vectorcall nargsf argument, the callee is allowed to temporarily change\nargs[-1]\n. In other words, args points to argument 1 (not 0) in the allocated vector. The callee must restore the value ofargs[-1]\nbefore returning.For\nPyObject_VectorcallMethod()\n, this flag means instead thatargs[0]\nmay be changed.Whenever they can do so cheaply (without additional allocation), callers are encouraged to use\nPY_VECTORCALL_ARGUMENTS_OFFSET\n. 
Doing so will allow callables such as bound methods to make their onward calls (which include a prepended self argument) very efficiently.Added in version 3.8.\nTo call an object that implements vectorcall, use a call API\nfunction as with any other callable.\nPyObject_Vectorcall()\nwill usually be most efficient.\nRecursion Control\u00b6\nWhen using tp_call, callees do not need to worry about\nrecursion: CPython uses\nPy_EnterRecursiveCall()\nand Py_LeaveRecursiveCall()\nfor calls made using tp_call.\nFor efficiency, this is not the case for calls done using vectorcall: the callee should use Py_EnterRecursiveCall and Py_LeaveRecursiveCall if needed.\nVectorcall Support API\u00b6\n-\nPy_ssize_t PyVectorcall_NARGS(size_t nargsf)\u00b6\n- Part of the Stable ABI since version 3.12.\nGiven a vectorcall nargsf argument, return the actual number of arguments. Currently equivalent to:\n(Py_ssize_t)(nargsf & ~PY_VECTORCALL_ARGUMENTS_OFFSET)\nHowever, the function\nPyVectorcall_NARGS\nshould be used to allow for future extensions.Added in version 3.8.\n-\nvectorcallfunc PyVectorcall_Function(PyObject *op)\u00b6\nIf op does not support the vectorcall protocol (either because the type does not or because the specific instance does not), return NULL. Otherwise, return the vectorcall function pointer stored in op. This function never raises an exception.\nThis is mostly useful to check whether or not op supports vectorcall, which can be done by checking\nPyVectorcall_Function(op) != NULL\n.Added in version 3.9.\n-\nPyObject *PyVectorcall_Call(PyObject *callable, PyObject *tuple, PyObject *dict)\u00b6\n- Part of the Stable ABI since version 3.12.\nCall callable\u2019s\nvectorcallfunc\nwith positional and keyword arguments given in a tuple and dict, respectively.This is a specialized function, intended to be put in the\ntp_call\nslot or be used in an implementation oftp_call\n. 
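The PyVectorcall_NARGS() computation described above is just a bit mask; a pure-Python model of it, assuming a 64-bit size_t (so PY_VECTORCALL_ARGUMENTS_OFFSET is the most significant bit):

```python
# Most significant bit of a 64-bit size_t (a platform assumption in this sketch).
PY_VECTORCALL_ARGUMENTS_OFFSET = 1 << 63

def py_vectorcall_nargs(nargsf):
    """Model of PyVectorcall_NARGS(): strip the offset flag from nargsf."""
    return nargsf & ~PY_VECTORCALL_ARGUMENTS_OFFSET

assert py_vectorcall_nargs(3) == 3
assert py_vectorcall_nargs(3 | PY_VECTORCALL_ARGUMENTS_OFFSET) == 3
```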
It does not check the Py_TPFLAGS_HAVE_VECTORCALL\nflag and it does not fall back to tp_call\n. Added in version 3.8.\nObject Calling API\u00b6\nVarious functions are available for calling a Python object. Each converts its arguments to a convention supported by the called object \u2013 either tp_call or vectorcall. In order to do as little conversion as possible, pick one that best fits the format of data you have available.\nThe following table summarizes the available functions; please see individual documentation for details.\n| Function | callable | args | kwargs |\n|---|---|---|---|\n| PyObject_Call() | PyObject * | tuple | dict/NULL |\n| PyObject_CallNoArgs() | PyObject * | \u2014 | \u2014 |\n| PyObject_CallOneArg() | PyObject * | 1 object | \u2014 |\n| PyObject_CallObject() | PyObject * | tuple/NULL | \u2014 |\n| PyObject_CallFunction() | PyObject * | format | \u2014 |\n| PyObject_CallMethod() | obj + char* name | format | \u2014 |\n| PyObject_CallFunctionObjArgs() | PyObject * | variadic | \u2014 |\n| PyObject_CallMethodObjArgs() | obj + name | variadic | \u2014 |\n| PyObject_CallMethodNoArgs() | obj + name | \u2014 | \u2014 |\n| PyObject_CallMethodOneArg() | obj + name | 1 object | \u2014 |\n| PyObject_Vectorcall() | PyObject * | vectorcall | vectorcall |\n| PyObject_VectorcallDict() | PyObject * | vectorcall | dict/NULL |\n| PyObject_VectorcallMethod() | arg + name | vectorcall | vectorcall |\n-\nPyObject *PyObject_Call(PyObject *callable, PyObject *args, PyObject *kwargs)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCall a callable Python object callable, with arguments given by the tuple args, and named arguments given by the dictionary kwargs.\nargs must not be NULL; use an empty tuple if no arguments are needed. If no named arguments are needed, kwargs can be NULL.\nReturn the result of the call on success, or raise an exception and return NULL on failure.\nThis is the equivalent of the Python expression:\ncallable(*args, **kwargs)\n.\n-\nPyObject *PyObject_CallNoArgs(PyObject *callable)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.10.\nCall a callable Python object callable without any arguments.
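PyObject_CallNoArgs(), together with PyCallable_Check() from the Call Support API covered later in this record, is exported by the interpreter and can be exercised through ctypes; again purely a demo sketch, with the usual one-reference leak of ctypes py_object return values:

```python
import ctypes

api = ctypes.pythonapi
api.PyObject_CallNoArgs.restype = ctypes.py_object
api.PyObject_CallNoArgs.argtypes = [ctypes.py_object]
api.PyCallable_Check.restype = ctypes.c_int
api.PyCallable_Check.argtypes = [ctypes.py_object]

# A zero-argument call: dict() -> {}
assert api.PyObject_CallNoArgs(dict) == {}

# PyCallable_Check returns 1 for callables and 0 otherwise; it always succeeds.
assert api.PyCallable_Check(len) == 1
assert api.PyCallable_Check(42) == 0
```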
It is the most efficient way to call a callable Python object without any argument.\nReturn the result of the call on success, or raise an exception and return NULL on failure.\nAdded in version 3.9.\n-\nPyObject *PyObject_CallOneArg(PyObject *callable, PyObject *arg)\u00b6\n- Return value: New reference.\nCall a callable Python object callable with exactly 1 positional argument arg and no keyword arguments.\nReturn the result of the call on success, or raise an exception and return NULL on failure.\nAdded in version 3.9.\n-\nPyObject *PyObject_CallObject(PyObject *callable, PyObject *args)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCall a callable Python object callable, with arguments given by the tuple args. If no arguments are needed, then args can be NULL.\nReturn the result of the call on success, or raise an exception and return NULL on failure.\nThis is the equivalent of the Python expression:\ncallable(*args)\n.\n-\nPyObject *PyObject_CallFunction(PyObject *callable, const char *format, ...)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCall a callable Python object callable, with a variable number of C arguments. The C arguments are described using a\nPy_BuildValue()\nstyle format string. The format can be NULL, indicating that no arguments are provided.Return the result of the call on success, or raise an exception and return NULL on failure.\nThis is the equivalent of the Python expression:\ncallable(*args)\n.Note that if you only pass PyObject* args,\nPyObject_CallFunctionObjArgs()\nis a faster alternative.Changed in version 3.4: The type of format was changed from\nchar *\n.\n-\nPyObject *PyObject_CallMethod(PyObject *obj, const char *name, const char *format, ...)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCall the method named name of object obj with a variable number of C arguments. 
The C arguments are described by a\nPy_BuildValue()\nformat string that should produce a tuple.The format can be NULL, indicating that no arguments are provided.\nReturn the result of the call on success, or raise an exception and return NULL on failure.\nThis is the equivalent of the Python expression:\nobj.name(arg1, arg2, ...)\n.Note that if you only pass PyObject* args,\nPyObject_CallMethodObjArgs()\nis a faster alternative.Changed in version 3.4: The types of name and format were changed from\nchar *\n.\n-\nPyObject *PyObject_CallFunctionObjArgs(PyObject *callable, ...)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCall a callable Python object callable, with a variable number of PyObject* arguments. The arguments are provided as a variable number of parameters followed by NULL.\nReturn the result of the call on success, or raise an exception and return NULL on failure.\nThis is the equivalent of the Python expression:\ncallable(arg1, arg2, ...)\n.\n-\nPyObject *PyObject_CallMethodObjArgs(PyObject *obj, PyObject *name, ...)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCall a method of the Python object obj, where the name of the method is given as a Python string object in name. It is called with a variable number of PyObject* arguments. 
The arguments are provided as a variable number of parameters followed by NULL.\nReturn the result of the call on success, or raise an exception and return NULL on failure.\n-\nPyObject *PyObject_CallMethodNoArgs(PyObject *obj, PyObject *name)\u00b6\nCall a method of the Python object obj without arguments, where the name of the method is given as a Python string object in name.\nReturn the result of the call on success, or raise an exception and return NULL on failure.\nAdded in version 3.9.\n-\nPyObject *PyObject_CallMethodOneArg(PyObject *obj, PyObject *name, PyObject *arg)\u00b6\nCall a method of the Python object obj with a single positional argument arg, where the name of the method is given as a Python string object in name.\nReturn the result of the call on success, or raise an exception and return NULL on failure.\nAdded in version 3.9.\n-\nPyObject *PyObject_Vectorcall(PyObject *callable, PyObject *const *args, size_t nargsf, PyObject *kwnames)\u00b6\n- Part of the Stable ABI since version 3.12.\nCall a callable Python object callable. The arguments are the same as for\nvectorcallfunc\n. If callable supports vectorcall, this directly calls the vectorcall function stored in callable.Return the result of the call on success, or raise an exception and return NULL on failure.\nAdded in version 3.8: as\n_PyObject_Vectorcall\nChanged in version 3.9: Renamed to the current name, without the leading underscore. The old provisional name is soft deprecated.\n-\nPyObject *PyObject_VectorcallDict(PyObject *callable, PyObject *const *args, size_t nargsf, PyObject *kwdict)\u00b6\nCall callable with positional arguments passed exactly as in the vectorcall protocol, but with keyword arguments passed as a dictionary kwdict. The args array contains only the positional arguments.\nRegardless of which protocol is used internally, a conversion of arguments needs to be done. 
Therefore, this function should only be used if the caller already has a dictionary ready to use for the keyword arguments, but not a tuple for the positional arguments.\nAdded in version 3.9.\n-\nPyObject *PyObject_VectorcallMethod(PyObject *name, PyObject *const *args, size_t nargsf, PyObject *kwnames)\u00b6\n- Part of the Stable ABI since version 3.12.\nCall a method using the vectorcall calling convention. The name of the method is given as a Python string name. The object whose method is called is args[0], and the args array starting at args[1] represents the arguments of the call. There must be at least one positional argument. nargsf is the number of positional arguments including args[0], plus\nPY_VECTORCALL_ARGUMENTS_OFFSET\nif the value ofargs[0]\nmay temporarily be changed. Keyword arguments can be passed just like inPyObject_Vectorcall()\n.If the object has the\nPy_TPFLAGS_METHOD_DESCRIPTOR\nfeature, this will call the unbound method object with the full args vector as arguments.Return the result of the call on success, or raise an exception and return NULL on failure.\nAdded in version 3.9.\nCall Support API\u00b6\n-\nint PyCallable_Check(PyObject *o)\u00b6\n- Part of the Stable ABI.\nDetermine if the object o is callable. Return\n1\nif the object is callable and0\notherwise. This function always succeeds.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3332} +{"url": "https://docs.python.org/3/c-api/time.html", "title": "PyTime C API", "content": "PyTime C API\u00b6\nAdded in version 3.13.\nThe clock C API provides access to system clocks.\nIt is similar to the Python time\nmodule.\nFor C API related to the datetime\nmodule, see DateTime Objects.\nTypes\u00b6\n-\ntype PyTime_t\u00b6\nA timestamp or duration in nanoseconds, represented as a signed 64-bit integer.\nThe reference point for timestamps depends on the clock used. 
For example,\nPyTime_Time()\nreturns timestamps relative to the UNIX epoch. The supported range is around [-292.3 years; +292.3 years]. Using the Unix epoch (January 1st, 1970) as reference, the supported date range is around [1677-09-21; 2262-04-11]. The exact limits are exposed as constants:\nClock Functions\u00b6\nThe following functions take a pointer to a PyTime_t that they set to the value of a particular clock. Details of each clock are given in the documentation of the corresponding Python function.\nThe functions return 0\non success, or -1\n(with an exception set)\non failure.\nOn integer overflow, they set the PyExc_OverflowError\nexception and\nset *result\nto the value clamped to the [PyTime_MIN; PyTime_MAX]\nrange.\n(On current systems, integer overflows are likely caused by misconfigured\nsystem time.)\nAs with any other C API (unless otherwise specified), the functions must be called with an attached thread state.\n-\nint PyTime_Monotonic(PyTime_t *result)\u00b6\nRead the monotonic clock. See\ntime.monotonic()\nfor important details on this clock.\n-\nint PyTime_PerfCounter(PyTime_t *result)\u00b6\nRead the performance counter. See\ntime.perf_counter()\nfor important details on this clock.\n-\nint PyTime_Time(PyTime_t *result)\u00b6\nRead the \u201cwall clock\u201d time. See\ntime.time()\nfor important details on this clock.\nRaw Clock Functions\u00b6\nSimilar to the clock functions, but don\u2019t set an exception on error and don\u2019t require the caller to have an attached thread state.\nOn success, the functions return 0\n.\nOn failure, they set *result\nto 0\nand return -1\n, without setting\nan exception. To get the cause of the error, attach a thread state,\nand call the regular (non-Raw\n) function. 
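The clock functions above have close Python-level counterparts in the time module's *_ns() functions, which likewise report nanosecond integers. A minimal sketch of that correspondence (a Python-side analogy only, not the C API itself):

```python
import time

# Nanosecond-resolution counterparts of the PyTime clock functions:
#   time.monotonic_ns()     ~ PyTime_Monotonic
#   time.perf_counter_ns()  ~ PyTime_PerfCounter
#   time.time_ns()          ~ PyTime_Time
mono = time.monotonic_ns()
perf = time.perf_counter_ns()
wall = time.time_ns()

# All three are plain ints, comparable to PyTime_t values.
assert all(isinstance(t, int) for t in (mono, perf, wall))

# The monotonic clock never goes backwards.
assert time.monotonic_ns() >= mono
```

Unlike the C functions, the Python-level calls raise on failure rather than returning -1.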
Note that the regular function may succeed after\nthe Raw\none failed.\n-\nint PyTime_MonotonicRaw(PyTime_t *result)\u00b6\nSimilar to\nPyTime_Monotonic()\n, but don\u2019t set an exception on error and don\u2019t require an attached thread state.\n-\nint PyTime_PerfCounterRaw(PyTime_t *result)\u00b6\nSimilar to\nPyTime_PerfCounter()\n, but don\u2019t set an exception on error and don\u2019t require an attached thread state.\n-\nint PyTime_TimeRaw(PyTime_t *result)\u00b6\nSimilar to\nPyTime_Time()\n, but don\u2019t set an exception on error and don\u2019t require an attached thread state.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 637} +{"url": "https://docs.python.org/3/library/winsound.html", "title": " \u2014 Sound-playing interface for Windows", "content": "winsound\n\u2014 Sound-playing interface for Windows\u00b6\nThe winsound\nmodule provides access to the basic sound-playing machinery\nprovided by Windows platforms. It includes functions and several constants.\nAvailability: Windows.\n- winsound.Beep(frequency, duration)\u00b6\nBeep the PC\u2019s speaker. The frequency parameter specifies frequency, in hertz, of the sound, and must be in the range 37 through 32,767. The duration parameter specifies the number of milliseconds the sound should last. If the system is not able to beep the speaker,\nRuntimeError\nis raised.\n- winsound.PlaySound(sound, flags)\u00b6\nCall the underlying\nPlaySound()\nfunction from the Platform API. The sound parameter may be a filename, a system sound alias, audio data as a bytes-like object, orNone\n. Its interpretation depends on the value of flags, which can be a bitwise ORed combination of the constants described below. If the sound parameter isNone\n, any currently playing waveform sound is stopped. If the system indicates an error,RuntimeError\nis raised.\n- winsound.MessageBeep(type=MB_OK)\u00b6\nCall the underlying\nMessageBeep()\nfunction from the Platform API. 
This plays a sound as specified in the registry. The type argument specifies which sound to play; possible values are\n-1\n,\nMB_ICONASTERISK\n,\nMB_ICONEXCLAMATION\n,\nMB_ICONHAND\n,\nMB_ICONQUESTION\n, and\nMB_OK\n, all described below. The value\n-1\nproduces a \u201csimple beep\u201d; this is the final fallback if a sound cannot be played otherwise. If the system indicates an error,\nRuntimeError\nis raised.\n- winsound.SND_ALIAS\u00b6\nThe sound parameter is a sound association name from the registry. If the registry contains no such name, play the system default sound unless\nSND_NODEFAULT\nis also specified. If no default sound is registered, raise\nRuntimeError\n. Do not use with\nSND_FILENAME\n.\nAll Win32 systems support at least the following; most systems support many more:\nPlaySound()\nname\nCorresponding Control Panel Sound name\n'SystemAsterisk'\nAsterisk\n'SystemExclamation'\nExclamation\n'SystemExit'\nExit Windows\n'SystemHand'\nCritical Stop\n'SystemQuestion'\nQuestion\nFor example:\nimport winsound # Play Windows exit sound. winsound.PlaySound(\"SystemExit\", winsound.SND_ALIAS) # Probably play Windows default sound, if any is registered (because # \"*\" probably isn't the registered name of any sound). winsound.PlaySound(\"*\", winsound.SND_ALIAS)\n- winsound.SND_LOOP\u00b6\nPlay the sound repeatedly. The\nSND_ASYNC\nflag must also be used to avoid blocking. 
Cannot be used withSND_MEMORY\n.\n- winsound.SND_MEMORY\u00b6\nThe sound parameter to\nPlaySound()\nis a memory image of a WAV file, as a bytes-like object.Note\nThis module does not support playing from a memory image asynchronously, so a combination of this flag and\nSND_ASYNC\nwill raiseRuntimeError\n.\n- winsound.SND_PURGE\u00b6\nStop playing all instances of the specified sound.\nNote\nThis flag is not supported on modern Windows platforms.\n- winsound.SND_ASYNC\u00b6\nReturn immediately, allowing sounds to play asynchronously.\n- winsound.SND_NODEFAULT\u00b6\nIf the specified sound cannot be found, do not play the system default sound.\n- winsound.SND_NOSTOP\u00b6\nDo not interrupt sounds currently playing.\n- winsound.SND_NOWAIT\u00b6\nReturn immediately if the sound driver is busy.\nNote\nThis flag is not supported on modern Windows platforms.\n- winsound.SND_APPLICATION\u00b6\nThe sound parameter is an application-specific alias in the registry. This flag can be combined with the\nSND_ALIAS\nflag to specify an application-defined sound alias.\n- winsound.SND_SENTRY\u00b6\nTriggers a SoundSentry event when the sound is played.\nAdded in version 3.14.\n- winsound.SND_SYNC\u00b6\nThe sound is played synchronously. 
This is the default behavior.\nAdded in version 3.14.\n- winsound.SND_SYSTEM\u00b6\nAssign the sound to the audio session for system notification sounds.\nAdded in version 3.14.\n- winsound.MB_ICONASTERISK\u00b6\nPlay the\nSystemDefault\nsound.\n- winsound.MB_ICONEXCLAMATION\u00b6\nPlay the\nSystemExclamation\nsound.\n- winsound.MB_ICONHAND\u00b6\nPlay the\nSystemHand\nsound.\n- winsound.MB_ICONQUESTION\u00b6\nPlay the\nSystemQuestion\nsound.\n- winsound.MB_OK\u00b6\nPlay the\nSystemDefault\nsound.\n- winsound.MB_ICONERROR\u00b6\nPlay the\nSystemHand\nsound.Added in version 3.14.\n- winsound.MB_ICONINFORMATION\u00b6\nPlay the\nSystemDefault\nsound.Added in version 3.14.\n- winsound.MB_ICONSTOP\u00b6\nPlay the\nSystemHand\nsound.Added in version 3.14.\n- winsound.MB_ICONWARNING\u00b6\nPlay the\nSystemExclamation\nsound.Added in version 3.14.", "code_snippets": ["\n", "\n", " ", "\n\n", "\n", "\n", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 1090} +{"url": "https://docs.python.org/3/faq/installed.html", "title": "\u201cWhy is Python Installed on my Computer?\u201d FAQ", "content": "\u201cWhy is Python Installed on my Computer?\u201d FAQ\u00b6\nWhat is Python?\u00b6\nPython is a programming language. It\u2019s used for many different applications. 
It\u2019s used in some high schools and colleges as an introductory programming language because Python is easy to learn, but it\u2019s also used by professional software developers at places such as Google, NASA, and Lucasfilm Ltd.\nIf you wish to learn more about Python, start with the Beginner\u2019s Guide to Python.\nWhy is Python installed on my machine?\u00b6\nIf you find Python installed on your system but don\u2019t remember installing it, there are several possible ways it could have gotten there.\nPerhaps another user on the computer wanted to learn programming and installed it; you\u2019ll have to figure out who\u2019s been using the machine and might have installed it.\nA third-party application installed on the machine might have been written in Python and included a Python installation. There are many such applications, from GUI programs to network servers and administrative scripts.\nSome Windows machines also have Python installed. At this writing we\u2019re aware of computers from Hewlett-Packard and Compaq that include Python. Apparently some of HP/Compaq\u2019s administrative tools are written in Python.\nMany Unix-compatible operating systems, such as macOS and some Linux distributions, have Python installed by default; it\u2019s included in the base installation.\nCan I delete Python?\u00b6\nThat depends on where Python came from.\nIf someone installed it deliberately, you can remove it without hurting anything. On Windows, use the Add/Remove Programs icon in the Control Panel.\nIf Python was installed by a third-party application, you can also remove it, but that application will no longer work. You should use that application\u2019s uninstaller rather than removing Python directly.\nIf Python came with your operating system, removing it is not recommended. If you remove it, whatever tools were written in Python will no longer run, and some of them might be important to you. 
Reinstalling the whole system would then be required to fix things again.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 518} +{"url": "https://docs.python.org/3/c-api/allocation.html", "title": "Allocating Objects on the Heap", "content": "Allocating Objects on the Heap\u00b6\n-\nPyObject *_PyObject_New(PyTypeObject *type)\u00b6\n- Return value: New reference.\n-\nPyVarObject *_PyObject_NewVar(PyTypeObject *type, Py_ssize_t size)\u00b6\n- Return value: New reference.\n-\nPyObject *PyObject_Init(PyObject *op, PyTypeObject *type)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nInitialize a newly allocated object op with its type and initial reference. Returns the initialized object. Other fields of the object are not initialized. Despite its name, this function is unrelated to the object\u2019s\n__init__()\nmethod (tp_init\nslot). Specifically, this function does not call the object\u2019s__init__()\nmethod.In general, consider this function to be a low-level routine. Use\ntp_alloc\nwhere possible. For implementingtp_alloc\nfor your type, preferPyType_GenericAlloc()\norPyObject_New()\n.Note\nThis function only initializes the object\u2019s memory corresponding to the initial\nPyObject\nstructure. It does not zero the rest.\n-\nPyVarObject *PyObject_InitVar(PyVarObject *op, PyTypeObject *type, Py_ssize_t size)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nThis does everything\nPyObject_Init()\ndoes, and also initializes the length information for a variable-size object.Note\nThis function only initializes some of the object\u2019s memory. It does not zero the rest.\n-\nPyObject_New(TYPE, typeobj)\u00b6\nAllocates a new Python object using the C structure type TYPE and the Python type object typeobj (\nPyTypeObject*\n) by callingPyObject_Malloc()\nto allocate memory and initializing it likePyObject_Init()\n. The caller will own the only reference to the object (i.e. 
its reference count will be one).Avoid calling this directly to allocate memory for an object; call the type\u2019s\ntp_alloc\nslot instead.When populating a type\u2019s\ntp_alloc\nslot,PyType_GenericAlloc()\nis preferred over a custom function that simply calls this macro.This macro does not call\ntp_alloc\n,tp_new\n(__new__()\n), ortp_init\n(__init__()\n).This cannot be used for objects with\nPy_TPFLAGS_HAVE_GC\nset intp_flags\n; usePyObject_GC_New\ninstead.Memory allocated by this macro must be freed with\nPyObject_Free()\n(usually called via the object\u2019stp_free\nslot).Note\nThe returned memory is not guaranteed to have been completely zeroed before it was initialized.\nNote\nThis macro does not construct a fully initialized object of the given type; it merely allocates memory and prepares it for further initialization by\ntp_init\n. To construct a fully initialized object, call typeobj instead. For example:PyObject *foo = PyObject_CallNoArgs((PyObject *)&PyFoo_Type);\n-\nPyObject_NewVar(TYPE, typeobj, size)\u00b6\nLike\nPyObject_New\nexcept:It allocates enough memory for the TYPE structure plus size (\nPy_ssize_t\n) fields of the size given by thetp_itemsize\nfield of typeobj.The memory is initialized like\nPyObject_InitVar()\n.\nThis is useful for implementing objects like tuples, which are able to determine their size at construction time. 
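The distinction drawn above, between merely allocating an object and constructing a fully initialized one by calling the type object, has a direct Python-level analogue: __new__ alone only allocates, while calling the type also runs __init__. A small sketch (the Point class is a made-up example, not from the documentation):

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

# Calling the type object is the analogue of PyObject_CallNoArgs(typeobj):
# allocation (tp_alloc / __new__) followed by initialization (tp_init / __init__).
p = Point(1, 2)
assert (p.x, p.y) == (1, 2)

# __new__ alone is the analogue of PyObject_New: memory for the object
# exists, but __init__ has not run, so no instance attributes are set yet.
raw = Point.__new__(Point)
assert not hasattr(raw, "x")
```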
Embedding the array of fields into the same allocation decreases the number of allocations, improving the memory management efficiency.\nAvoid calling this directly to allocate memory for an object; call the type\u2019s\ntp_alloc\nslot instead.When populating a type\u2019s\ntp_alloc\nslot,PyType_GenericAlloc()\nis preferred over a custom function that simply calls this macro.This cannot be used for objects with\nPy_TPFLAGS_HAVE_GC\nset intp_flags\n; usePyObject_GC_NewVar\ninstead.Memory allocated by this function must be freed with\nPyObject_Free()\n(usually called via the object\u2019stp_free\nslot).Note\nThe returned memory is not guaranteed to have been completely zeroed before it was initialized.\nNote\nThis macro does not construct a fully initialized object of the given type; it merely allocates memory and prepares it for further initialization by\ntp_init\n. To construct a fully initialized object, call typeobj instead. For example:PyObject *list_instance = PyObject_CallNoArgs((PyObject *)&PyList_Type);\n-\nPyObject _Py_NoneStruct\u00b6\nObject which is visible in Python as\nNone\n. This should only be accessed using thePy_None\nmacro, which evaluates to a pointer to this object.\nSee also\n- Module Objects\nTo allocate and create extension modules.\nDeprecated aliases\u00b6\nThese are soft deprecated aliases to existing functions and macros. They exist solely for backwards compatibility.\nDeprecated alias |\nFunction |\n|---|---|\n|\n|\n|\n|\n|\n|\n|\n|\n|\n|\n|\n|\n|\n|\n|\n|\n|", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1092} +{"url": "https://docs.python.org/3/whatsnew/3.13.html", "title": "What\u2019s New In Python 3.13", "content": "What\u2019s New In Python 3.13\u00b6\n- Editors:\nAdam Turner and Thomas Wouters\nThis article explains the new features in Python 3.13, compared to 3.12. Python 3.13 was released on October 7, 2024. 
For full details, see the changelog.\nSee also\nPEP 719 \u2013 Python 3.13 Release Schedule\nSummary \u2013 Release Highlights\u00b6\nPython 3.13 is a stable release of the Python programming language, with a mix of changes to the language, the implementation and the standard library. The biggest changes include a new interactive interpreter, experimental support for running in a free-threaded mode (PEP 703), and a Just-In-Time compiler (PEP 744).\nError messages continue to improve, with tracebacks now highlighted in color\nby default. The locals()\nbuiltin now has defined semantics for changing the returned mapping,\nand type parameters now support default values.\nThe library changes contain removal of deprecated APIs and modules, as well as the usual improvements in user-friendliness and correctness. Several legacy standard library modules have now been removed following their deprecation in Python 3.11 (PEP 594).\nThis article doesn\u2019t attempt to provide a complete specification of all new features, but instead gives a convenient overview. For full details refer to the documentation, such as the Library Reference and Language Reference. To understand the complete implementation and design rationale for a change, refer to the PEP for a particular new feature; but note that PEPs usually are not kept up-to-date once a feature has been fully implemented. See Porting to Python 3.13 for guidance on upgrading from earlier versions of Python.\nInterpreter improvements:\nA greatly improved interactive interpreter and improved error messages.\nPEP 667: The\nlocals()\nbuiltin now has defined semantics when mutating the returned mapping. Python debuggers and similar tools may now more reliably update local variables in optimized scopes even during concurrent code execution.PEP 703: CPython 3.13 has experimental support for running with the global interpreter lock disabled. See Free-threaded CPython for more details.\nPEP 744: A basic JIT compiler was added. 
It is currently disabled by default (though we may turn it on later). Performance improvements are modest \u2013 we expect to improve this over the next few releases.\nColor support in the new interactive interpreter, as well as in tracebacks and doctest output. This can be disabled through the\nPYTHON_COLORS\nandNO_COLOR\nenvironment variables.\nPython data model improvements:\n__static_attributes__\nstores the names of attributes accessed throughself.X\nin any function in a class body.__firstlineno__\nrecords the first line number of a class definition.\nSignificant improvements in the standard library:\nAdd a new\nPythonFinalizationError\nexception, raised when an operation is blocked during finalization.The\nargparse\nmodule now supports deprecating command-line options, positional arguments, and subcommands.The new functions\nbase64.z85encode()\nandbase64.z85decode()\nsupport encoding and decoding Z85 data.The\ncopy\nmodule now has acopy.replace()\nfunction, with support for many builtin types and any class defining the__replace__()\nmethod.The new\ndbm.sqlite3\nmodule is now the defaultdbm\nbackend.The\nos\nmodule has a suite of new functions for working with Linux\u2019s timer notification file descriptors.The\nrandom\nmodule now has a command-line interface.\nSecurity improvements:\nssl.create_default_context()\nsetsssl.VERIFY_X509_PARTIAL_CHAIN\nandssl.VERIFY_X509_STRICT\nas default flags.\nC API improvements:\nThe\nPy_mod_gil\nslot is now used to indicate that an extension module supports running with the GIL disabled.The PyTime C API has been added, providing access to system clocks.\nPyMutex\nis a new lightweight mutex that occupies a single byte.There is a new suite of functions for generating PEP 669 monitoring events in the C API.\nNew typing features:\nPEP 696: Type parameters (\ntyping.TypeVar\n,typing.ParamSpec\n, andtyping.TypeVarTuple\n) now support defaults.PEP 702: The new\nwarnings.deprecated()\ndecorator adds support for marking deprecations 
in the type system and at runtime.PEP 705:\ntyping.ReadOnly\ncan be used to mark an item of atyping.TypedDict\nas read-only for type checkers.PEP 742:\ntyping.TypeIs\nprovides more intuitive type narrowing behavior, as an alternative totyping.TypeGuard\n.\nPlatform support:\nPEP 730: Apple\u2019s iOS is now an officially supported platform, at tier 3.\nPEP 738: Android is now an officially supported platform, at tier 3.\nwasm32-wasi\nis now supported as a tier 2 platform.wasm32-emscripten\nis no longer an officially supported platform.\nImportant removals:\nPEP 594: The remaining 19 \u201cdead batteries\u201d (legacy stdlib modules) have been removed from the standard library:\naifc\n,audioop\n,cgi\n,cgitb\n,chunk\n,crypt\n,imghdr\n,mailcap\n,msilib\n,nis\n,nntplib\n,ossaudiodev\n,pipes\n,sndhdr\n,spwd\n,sunau\n,telnetlib\n,uu\nandxdrlib\n.Remove the 2to3 tool and\nlib2to3\nmodule (deprecated in Python 3.11).Remove the\ntkinter.tix\nmodule (deprecated in Python 3.6).Remove the\nlocale.resetlocale()\nfunction.Remove the\ntyping.io\nandtyping.re\nnamespaces.Remove chained\nclassmethod\ndescriptors.\nRelease schedule changes:\nPEP 602 (\u201cAnnual Release Cycle for Python\u201d) has been updated to extend the full support (\u2018bugfix\u2019) period for new releases to two years. This updated policy means that:\nPython 3.9\u20133.12 have one and a half years of full support, followed by three and a half years of security fixes.\nPython 3.13 and later have two years of full support, followed by three years of security fixes.\nNew Features\u00b6\nA better interactive interpreter\u00b6\nPython now uses a new interactive shell by default, based on code from the PyPy project. 
When the user starts the REPL from an interactive terminal, the following new features are now supported:\nMultiline editing with history preservation.\nDirect support for REPL-specific commands like help, exit, and quit, without the need to call them as functions.\nPrompts and tracebacks with color enabled by default.\nInteractive help browsing using F1 with a separate command history.\nHistory browsing using F2 that skips output as well as the >>> and \u2026 prompts.\n\u201cPaste mode\u201d with F3 that makes pasting larger blocks of code easier (press F3 again to return to the regular prompt).\nTo disable the new interactive shell,\nset the PYTHON_BASIC_REPL\nenvironment variable.\nFor more on interactive mode, see Interactive Mode.\n(Contributed by Pablo Galindo Salgado, \u0141ukasz Langa, and Lysandros Nikolaou in gh-111201 based on code from the PyPy project. Windows support contributed by Dino Viehland and Anthony Shaw.)\nImproved error messages\u00b6\nThe interpreter now uses color by default when displaying tracebacks in the terminal. This feature can be controlled via the new\nPYTHON_COLORS\nenvironment variable as well as the canonicalNO_COLOR\nandFORCE_COLOR\nenvironment variables. (Contributed by Pablo Galindo Salgado in gh-112730.)A common mistake is to write a script with the same name as a standard library module. 
When this results in errors, we now display a more helpful error message:\n$ python random.py Traceback (most recent call last): File \"/home/me/random.py\", line 1, in import random File \"/home/me/random.py\", line 3, in print(random.randint(5)) ^^^^^^^^^^^^^^ AttributeError: module 'random' has no attribute 'randint' (consider renaming '/home/me/random.py' since it has the same name as the standard library module named 'random' and prevents importing that standard library module)\nSimilarly, if a script has the same name as a third-party module that it attempts to import and this results in errors, we also display a more helpful error message:\n$ python numpy.py Traceback (most recent call last): File \"/home/me/numpy.py\", line 1, in import numpy as np File \"/home/me/numpy.py\", line 3, in np.array([1, 2, 3]) ^^^^^^^^ AttributeError: module 'numpy' has no attribute 'array' (consider renaming '/home/me/numpy.py' if it has the same name as a library you intended to import)\n(Contributed by Shantanu Jain in gh-95754.)\nThe error message now tries to suggest the correct keyword argument when an incorrect keyword argument is passed to a function.\n>>> \"Better error messages!\".split(max_split=1) Traceback (most recent call last): File \"\", line 1, in \"Better error messages!\".split(max_split=1) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^ TypeError: split() got an unexpected keyword argument 'max_split'. 
Did you mean 'maxsplit'?\n(Contributed by Pablo Galindo Salgado and Shantanu Jain in gh-107944.)\nFree-threaded CPython\u00b6\nCPython now has experimental support for running in a free-threaded mode,\nwith the global interpreter lock (GIL) disabled.\nThis is an experimental feature and therefore is not enabled by default.\nThe free-threaded mode requires a different executable,\nusually called python3.13t\nor python3.13t.exe\n.\nPre-built binaries marked as free-threaded can be installed as part of\nthe official Windows\nand macOS installers,\nor CPython can be built from source with the --disable-gil\noption.\nFree-threaded execution allows for full utilization of the available\nprocessing power by running threads in parallel on available CPU cores.\nWhile not all software will benefit from this automatically, programs\ndesigned with threading in mind will run faster on multi-core hardware.\nThe free-threaded mode is experimental and work is ongoing to improve it:\nexpect some bugs and a substantial single-threaded performance hit.\nFree-threaded builds of CPython support optionally running with the GIL\nenabled at runtime using the environment variable PYTHON_GIL\nor\nthe command-line option -X gil=1\n.\nTo check if the current interpreter supports free-threading, python -VV\nand sys.version\ncontain \u201cexperimental free-threading build\u201d.\nThe new sys._is_gil_enabled()\nfunction can be used to check whether\nthe GIL is actually disabled in the running process.\nC-API extension modules need to be built specifically for the free-threaded\nbuild. Extensions that support running with the GIL disabled should\nuse the Py_mod_gil\nslot. Extensions using single-phase init should\nuse PyUnstable_Module_SetGIL()\nto indicate whether they support\nrunning with the GIL disabled. 
Importing C extensions that don\u2019t use these\nmechanisms will cause the GIL to be enabled, unless the GIL was explicitly\ndisabled with the PYTHON_GIL\nenvironment variable or the\n-X gil=0\noption.\npip 24.1 or newer is required to install packages with C extensions in the\nfree-threaded build.\nThis work was made possible thanks to many individuals and organizations, including the large community of contributors to Python and third-party projects to test and enable free-threading support. Notable contributors include: Sam Gross, Ken Jin, Donghee Na, Itamar Oren, Matt Page, Brett Simmers, Dino Viehland, Carl Meyer, Nathan Goldbaum, Ralf Gommers, Lysandros Nikolaou, and many others. Many of these contributors are employed by Meta, which has provided significant engineering resources to support this project.\nSee also\nPEP 703 \u201cMaking the Global Interpreter Lock Optional in CPython\u201d contains rationale and information surrounding this work.\nPorting Extension Modules to Support Free-Threading: A community-maintained porting guide for extension authors.\nAn experimental just-in-time (JIT) compiler\u00b6\nWhen CPython is configured and built using\nthe --enable-experimental-jit\noption,\na just-in-time (JIT) compiler is added which may speed up some Python programs.\nOn Windows, use PCbuild/build.bat --experimental-jit\nto enable the JIT\nor --experimental-jit-interpreter\nto enable the Tier 2 interpreter.\nBuild requirements and further supporting information are contained at\nTools/jit/README.md\n.\nThe --enable-experimental-jit\noption takes these (optional) values,\ndefaulting to yes\nif --enable-experimental-jit\nis present\nwithout the optional value.\nno\n: Disable the entire Tier 2 and JIT pipeline.yes\n: Enable the JIT. To disable the JIT at runtime, pass the environment variablePYTHON_JIT=0\n.yes-off\n: Build the JIT but disable it by default. 
To enable the JIT at runtime, pass the environment variablePYTHON_JIT=1\n.interpreter\n: Enable the Tier 2 interpreter but disable the JIT. The interpreter can be disabled by running withPYTHON_JIT=0\n.\nThe internal architecture is roughly as follows:\nWe start with specialized Tier 1 bytecode. See What\u2019s new in 3.11 for details.\nWhen the Tier 1 bytecode gets hot enough, it gets translated to a new purely internal intermediate representation (IR), called the Tier 2 IR, and sometimes referred to as micro-ops (\u201cuops\u201d).\nThe Tier 2 IR uses the same stack-based virtual machine as Tier 1, but the instruction format is better suited to translation to machine code.\nWe have several optimization passes for Tier 2 IR, which are applied before it is interpreted or translated to machine code.\nThere is a Tier 2 interpreter, but it is mostly intended for debugging the earlier stages of the optimization pipeline. The Tier 2 interpreter can be enabled by configuring Python with\n--enable-experimental-jit=interpreter\n.When the JIT is enabled, the optimized Tier 2 IR is translated to machine code, which is then executed.\nThe machine code translation process uses a technique called copy-and-patch. It has no runtime dependencies, but there is a new build-time dependency on LLVM.\nSee also\n(JIT by Brandt Bucher, inspired by a paper by Haoran Xu and Fredrik Kjolstad. Tier 2 IR by Mark Shannon and Guido van Rossum. 
Tier 2 optimizer by Ken Jin.)\nDefined mutation semantics for locals()\n\u00b6\nHistorically, the expected result of mutating the return value of\nlocals()\nhas been left to individual Python implementations to define.\nStarting from Python 3.13, PEP 667 standardises\nthe historical behavior of CPython for most code execution scopes,\nbut changes optimized scopes\n(functions, generators, coroutines, comprehensions, and generator expressions)\nto explicitly return independent snapshots of the currently assigned local\nvariables, including locally referenced nonlocal variables captured in closures.\nThis change to the semantics of locals()\nin optimized scopes also\naffects the default behavior of code execution functions that implicitly\ntarget locals()\nif no explicit namespace is provided\n(such as exec()\nand eval()\n).\nIn previous versions, whether or not changes could be accessed by calling\nlocals()\nafter calling the code execution function was\nimplementation-dependent. In CPython specifically, such code would typically\nappear to work as desired, but could sometimes fail in optimized scopes based\non other code (including debuggers and code execution tracing tools)\npotentially resetting the shared snapshot in that scope.\nNow, the code will always run against an independent snapshot of\nthe local variables in optimized scopes, and hence the changes will never\nbe visible in subsequent calls to locals()\n.\nTo access the changes made in these cases, an explicit namespace reference\nmust now be passed to the relevant function.\nAlternatively, it may make sense to update affected code to use a higher level\ncode execution API that returns the resulting code execution namespace\n(e.g. 
runpy.run_path() when executing Python files from disk).
To ensure debuggers and similar tools can reliably update local variables in scopes affected by this change, FrameType.f_locals now returns a write-through proxy to the frame's local and locally referenced nonlocal variables in these scopes, rather than returning an inconsistently updated shared dict instance with undefined runtime semantics.
See PEP 667 for more details, including related C API changes and deprecations. Porting notes are also provided below for the affected Python APIs and C APIs.
(PEP and implementation contributed by Mark Shannon and Tian Gao in gh-74929. Documentation updates provided by Guido van Rossum and Alyssa Coghlan.)
Support for mobile platforms¶
PEP 730: iOS is now a PEP 11 supported platform, with the arm64-apple-ios and arm64-apple-ios-simulator targets at tier 3 (iPhone and iPad devices released after 2013 and the Xcode iOS simulator running on Apple silicon hardware, respectively). x86_64-apple-ios-simulator (the Xcode iOS simulator running on older x86_64 hardware) is not a tier 3 supported platform, but will have best-effort support. (PEP written and implementation contributed by Russell Keith-Magee in gh-114099.)
PEP 738: Android is now a PEP 11 supported platform, with the aarch64-linux-android and x86_64-linux-android targets at tier 3. The 32-bit targets arm-linux-androideabi and i686-linux-android are not tier 3 supported platforms, but will have best-effort support. (PEP written and implementation contributed by Malcolm Smith in gh-116622.)
Other Language Changes¶
- The compiler now strips common leading whitespace from every line in a docstring. This reduces the size of the bytecode cache (such as .pyc files), with reductions in file size of around 5%, for example in sqlalchemy.orm.session from SQLAlchemy 2.0. This change affects tools that use docstrings, such as doctest.
>>> def spam():
...     """
...     This is a docstring with
...      leading whitespace.
...
...     It even has multiple paragraphs!
...     """
...
>>> spam.__doc__
'\nThis is a docstring with\n leading whitespace.\n\nIt even has multiple paragraphs!\n'
(Contributed by Inada Naoki in gh-81283.)
- Annotation scopes within class scopes can now contain lambdas and comprehensions. Comprehensions that are located within class scopes are not inlined into their parent scope.
class C[T]:
    type Alias = lambda: T
- Future statements are no longer triggered by relative imports of the __future__ module, meaning that statements of the form from .__future__ import ... are now simply standard relative imports, with no special features activated. (Contributed by Jeremiah Gabriel Pascual in gh-118216.)
- global declarations are now permitted in except blocks when that global is used in the else block. Previously this raised an erroneous SyntaxError. (Contributed by Irit Katriel in gh-111123.)
- Add PYTHON_FROZEN_MODULES, a new environment variable that determines whether frozen modules are ignored by the import machinery, equivalent to the -X frozen_modules command-line option. (Contributed by Yilei Yang in gh-111374.)
- Add support for the perf profiler working without frame pointers through the new environment variable PYTHON_PERF_JIT_SUPPORT and command-line option -X perf_jit. (Contributed by Pablo Galindo in gh-118518.)
- The location of a .python_history file can be changed via the new PYTHON_HISTORY environment variable. (Contributed by Levi Sabah, Zackery Spytz and Hugo van Kemenade in gh-73965.)
- Classes have a new __static_attributes__ attribute. This is populated by the compiler with a tuple of the class's attribute names which are assigned through self.X from any function in its body. (Contributed by Irit Katriel in gh-115775.)
- The compiler now creates a __firstlineno__ attribute on classes with the line number of the first line of the class definition.
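A minimal sketch of the two compiler-populated class attributes described above (the class and attribute names are illustrative; the check is version-guarded because both attributes are new in 3.13):

```python
import sys

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def move(self, dx):
        self.x += dx  # also assigned through self, so it is recorded

if sys.version_info >= (3, 13):
    # Names assigned through self.<name> anywhere in the class body:
    print(sorted(Point.__static_attributes__))  # ['x', 'y']
    # Line number of the first line of the class statement:
    print(Point.__firstlineno__)
```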
(Contributed by Serhiy Storchaka in gh-118465.)
- The exec() and eval() builtins now accept the globals and locals arguments as keywords. (Contributed by Raphael Gaschignard in gh-105879.)
- The compile() builtin now accepts a new flag, ast.PyCF_OPTIMIZED_AST, which is similar to ast.PyCF_ONLY_AST except that the returned AST is optimized according to the value of the optimize argument. (Contributed by Irit Katriel in gh-108113.)
- Add a __name__ attribute on property objects. (Contributed by Eugene Toder in gh-101860.)
- Add PythonFinalizationError, a new exception derived from RuntimeError and used to signal when operations are blocked during finalization. The following callables now raise PythonFinalizationError, instead of RuntimeError: (Contributed by Victor Stinner in gh-114570.)
- Allow the count argument of str.replace() to be a keyword. (Contributed by Hugo van Kemenade in gh-106487.)
- Many functions now emit a warning if a boolean value is passed as a file descriptor argument. This can help catch some errors earlier. (Contributed by Serhiy Storchaka in gh-82626.)
- Added name and mode attributes for compressed and archived file-like objects in the bz2, lzma, tarfile, and zipfile modules. (Contributed by Serhiy Storchaka in gh-115961.)
New Modules¶
dbm.sqlite3: An SQLite backend for dbm. (Contributed by Raymond Hettinger and Erlend E. Aasland in gh-100414.)
Improved Modules¶
argparse¶
- Add the deprecated parameter to the add_argument() and add_parser() methods, to enable deprecating command-line options, positional arguments, and subcommands. (Contributed by Serhiy Storchaka in gh-83648.)
array¶
- Add the 'w' type code (Py_UCS4) for Unicode characters. It should be used instead of the deprecated 'u' type code. (Contributed by Inada Naoki in gh-80480.)
- Register array.array as a MutableSequence by implementing the clear() method.
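The array additions above can be sketched as follows (version-guarded, since both the 'w' type code and clear() are new in 3.13):

```python
import sys
from array import array
from collections.abc import MutableSequence

if sys.version_info >= (3, 13):
    a = array('w', 'héllo')   # 'w' stores Py_UCS4 (one code point per item)
    a.clear()                 # new method; empties the array in place
    print(len(a))             # 0
    # Implementing clear() lets array register as a MutableSequence:
    print(issubclass(array, MutableSequence))  # True
```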
(Contributed by Mike Zimin in gh-114894.)
ast¶
- The constructors of node types in the ast module are now stricter in the arguments they accept, with more intuitive behavior when arguments are omitted. If an optional field on an AST node is not included as an argument when constructing an instance, the field will now be set to None. Similarly, if a list field is omitted, that field will now be set to an empty list, and if an expr_context field is omitted, it defaults to Load(). (Previously, in all cases, the attribute would be missing on the newly constructed AST node instance.) In all other cases, where a required argument is omitted, the node constructor will emit a DeprecationWarning. This will raise an exception in Python 3.15. Similarly, passing a keyword argument to the constructor that does not map to a field on the AST node is now deprecated, and will raise an exception in Python 3.15. These changes do not apply to user-defined subclasses of ast.AST unless the class opts in to the new behavior by defining the AST._field_types mapping. (Contributed by Jelle Zijlstra in gh-105858, gh-117486, and gh-118851.)
- ast.parse() now accepts an optional argument optimize which is passed on to compile(). This makes it possible to obtain an optimized AST. (Contributed by Irit Katriel in gh-108113.)
asyncio¶
- asyncio.as_completed() now returns an object that is both an asynchronous iterator and a plain iterator of awaitables. The awaitables yielded by asynchronous iteration include original task or future objects that were passed in, making it easier to associate results with the tasks being completed. (Contributed by Justin Arthur in gh-77714.)
- asyncio.loop.create_unix_server() will now automatically remove the Unix socket when the server is closed. (Contributed by Pierre Ossman in gh-111246.)
- DatagramTransport.sendto() will now send zero-length datagrams if called with an empty bytes object.
The transport flow control also now accounts for the datagram header when calculating the buffer size. (Contributed by Jamie Phan in gh-115199.)
- Add Queue.shutdown and QueueShutDown to manage queue termination. (Contributed by Laurie Opperman and Yves Duprat in gh-104228.)
- Add the Server.close_clients() and Server.abort_clients() methods, which more forcefully close an asyncio server. (Contributed by Pierre Ossman in gh-113538.)
- Accept a tuple of separators in StreamReader.readuntil(), stopping when any one of them is encountered. (Contributed by Bruce Merry in gh-81322.)
- Improve the behavior of TaskGroup when an external cancellation collides with an internal cancellation. For example, when two task groups are nested and both experience an exception in a child task simultaneously, it was possible that the outer task group would hang, because its internal cancellation was swallowed by the inner task group. In the case where a task group is cancelled externally and also must raise an ExceptionGroup, it will now call the parent task's cancel() method. This ensures that a CancelledError will be raised at the next await, so the cancellation is not lost. An added benefit of these changes is that task groups now preserve the cancellation count (cancelling()). In order to handle some corner cases, uncancel() may now reset the undocumented _must_cancel flag when the cancellation count reaches zero. (Inspired by an issue reported by Arthur Tacca in gh-116720.)
- When TaskGroup.create_task() is called on an inactive TaskGroup, the given coroutine will be closed (which prevents a RuntimeWarning about the given coroutine being never awaited). (Contributed by Arthur Tacca and Jason Zhang in gh-115957.)
- The function and methods named create_task have received a new **kwargs argument that is passed through to the task constructor. This change was accidentally added in 3.13.3, and broke the API contract for custom task factories.
Several third-party task factories implemented workarounds for this. In 3.13.4 and later releases the old factory contract is honored once again (until 3.14). To keep the workarounds working, the extra **kwargs argument still allows passing additional keyword arguments to Task and to custom task factories. This affects the following function and methods: asyncio.create_task(), asyncio.loop.create_task(), asyncio.TaskGroup.create_task(). (Contributed by Thomas Grainger in gh-128307.)
base64¶
- Add z85encode() and z85decode() functions for encoding bytes as Z85 data and decoding Z85-encoded data to bytes. (Contributed by Matan Perelman in gh-75299.)
compileall¶
- The default number of worker threads and processes is now selected using os.process_cpu_count() instead of os.cpu_count(). (Contributed by Victor Stinner in gh-109649.)
concurrent.futures¶
- The default number of worker threads and processes is now selected using os.process_cpu_count() instead of os.cpu_count(). (Contributed by Victor Stinner in gh-109649.)
configparser¶
- ConfigParser now has support for unnamed sections, which allows for top-level key-value pairs. This can be enabled with the new allow_unnamed_section parameter. (Contributed by Pedro Sousa Lacerda in gh-66449.)
copy¶
- The new replace() function and the replace protocol make creating modified copies of objects much simpler. This is especially useful when working with immutable objects. The following types support the replace() function and implement the replace protocol: Any user-defined class can also support copy.replace() by defining the __replace__() method. (Contributed by Serhiy Storchaka in gh-108751.)
ctypes¶
- As a consequence of necessary internal refactoring, initialization of internal metaclasses now happens in __init__ rather than in __new__. This affects projects that subclass these internal metaclasses to provide custom initialization.
Generally:
- Custom logic that was done in __new__ after calling super().__new__ should be moved to __init__.
- To create a class, call the metaclass, not only the metaclass's __new__ method.
See gh-124520 for discussion and links to changes in some affected projects.
- ctypes.Structure objects have a new _align_ attribute which allows the alignment of the structure being packed to/from memory to be specified explicitly. (Contributed by Matt Sanderson in gh-112433)
dbm¶
- Add dbm.sqlite3, a new module which implements an SQLite backend, and make it the default dbm backend. (Contributed by Raymond Hettinger and Erlend E. Aasland in gh-100414.)
- Allow removing all items from the database through the new clear() methods of the GDBM and NDBM database objects. (Contributed by Donghee Na in gh-107122.)
dis¶
- Change the output of dis module functions to show logical labels for jump targets and exception handlers, rather than offsets. The offsets can be added with the new -O command-line option or the show_offsets argument. (Contributed by Irit Katriel in gh-112137.)
- get_instructions() no longer represents cache entries as separate instructions. Instead, it returns them as part of the Instruction, in the new cache_info field. The show_caches argument to get_instructions() is deprecated and no longer has any effect. (Contributed by Irit Katriel in gh-112962.)
doctest¶
- doctest output is now colored by default. This can be controlled via the new PYTHON_COLORS environment variable as well as the canonical NO_COLOR and FORCE_COLOR environment variables. See also Controlling color. (Contributed by Hugo van Kemenade in gh-117225.)
- The DocTestRunner.run() method now counts the number of skipped tests. Add the DocTestRunner.skips and TestResults.skipped attributes. (Contributed by Victor Stinner in gh-108794.)
email¶
- Headers with embedded newlines are now quoted on output.
The generator will now refuse to serialize (write) headers that are improperly folded or delimited, such that they would be parsed as multiple headers or joined with adjacent data. If you need to turn this safety feature off, set verify_generated_headers. (Contributed by Bas Bloemsaat and Petr Viktorin in gh-121650.)
- getaddresses() and parseaddr() now return ('', '') pairs in more situations where invalid email addresses are encountered instead of potentially inaccurate values. The two functions have a new optional strict parameter (default True). To get the old behavior (accepting malformed input), use strict=False. getattr(email.utils, 'supports_strict_parsing', False) can be used to check if the strict parameter is available. (Contributed by Thomas Dwyer and Victor Stinner for gh-102988 to improve the CVE 2023-27043 fix.)
enum¶
fractions¶
- Fraction objects now support the standard format specification mini-language rules for fill, alignment, sign handling, minimum width, and grouping. (Contributed by Mark Dickinson in gh-111320.)
glob¶
- Add translate(), a function to convert a path specification with shell-style wildcards to a regular expression. (Contributed by Barney Gale in gh-72904.)
importlib¶
- The following functions in importlib.resources now allow accessing a directory (or tree) of resources, using multiple positional arguments (the encoding and errors arguments in the text-reading functions are now keyword-only): These functions are no longer deprecated and are not scheduled for removal. (Contributed by Petr Viktorin in gh-116608.)
- contents() remains deprecated in favor of the fully-featured Traversable API. However, there is now no plan to remove it. (Contributed by Petr Viktorin in gh-116608.)
io¶
- The IOBase finalizer now logs any errors raised by the close() method with sys.unraisablehook.
Previously, errors were ignored silently by default, and only logged in Python Development Mode or when using a Python debug build. (Contributed by Victor Stinner in gh-62948.)
ipaddress¶
- Add the IPv4Address.ipv6_mapped property, which returns the IPv4-mapped IPv6 address. (Contributed by Charles Machalow in gh-109466.)
- Fix is_global and is_private behavior in IPv4Address, IPv6Address, IPv4Network, and IPv6Network. (Contributed by Jakub Stasiak in gh-113171.)
itertools¶
- batched() has a new strict parameter, which raises a ValueError if the final batch is shorter than the specified batch size. (Contributed by Raymond Hettinger in gh-113202.)
marshal¶
- Add the allow_code parameter in module functions. Passing allow_code=False prevents serialization and de-serialization of code objects which are incompatible between Python versions. (Contributed by Serhiy Storchaka in gh-113626.)
math¶
- The new function fma() performs fused multiply-add operations. This computes x * y + z with only a single round, and so avoids any intermediate loss of precision. It wraps the fma() function provided by C99, and follows the specification of the IEEE 754 "fusedMultiplyAdd" operation for special cases. (Contributed by Mark Dickinson and Victor Stinner in gh-73468.)
mimetypes¶
- Add the guess_file_type() function to guess a MIME type from a filesystem path. Using paths with guess_type() is now soft deprecated. (Contributed by Serhiy Storchaka in gh-66543.)
mmap¶
- mmap is now protected from crashing on Windows when the mapped memory is inaccessible due to file system errors or access violations. (Contributed by Jannis Weigend in gh-118209.)
- mmap has a new seekable() method that can be used when a seekable file-like object is required. The seek() method now returns the new absolute position.
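A small illustration of the mmap changes above, using an anonymous mapping; the 3.13-specific behavior (seekable() and the seek() return value) is version-guarded:

```python
import sys
import mmap

buf = mmap.mmap(-1, 1024)       # anonymous, memory-backed mapping
buf.write(b"hello")

if sys.version_info >= (3, 13):
    print(buf.seekable())       # True; mmap now advertises seekability
    print(buf.seek(0))          # 0 — seek() now returns the new position
else:
    buf.seek(0)                 # older versions: seek() returns None
print(buf.read(5))              # b'hello'
buf.close()
```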
(Contributed by Donghee Na and Sylvie Liberman in gh-111835.)
- The new UNIX-only trackfd parameter for mmap controls file descriptor duplication; if false, the file descriptor specified by fileno will not be duplicated. (Contributed by Zackery Spytz and Petr Viktorin in gh-78502.)
multiprocessing¶
- The default number of worker threads and processes is now selected using os.process_cpu_count() instead of os.cpu_count(). (Contributed by Victor Stinner in gh-109649.)
os¶
- Add the process_cpu_count() function to get the number of logical CPU cores usable by the calling thread of the current process. (Contributed by Victor Stinner in gh-109649.)
- cpu_count() and process_cpu_count() can be overridden through the new environment variable PYTHON_CPU_COUNT or the new command-line option -X cpu_count. This option is useful for users who need to limit CPU resources of a container system without having to modify application code or the container itself. (Contributed by Donghee Na in gh-109595.)
- Add a low-level interface to Linux's timer file descriptors via timerfd_create(), timerfd_settime(), timerfd_settime_ns(), timerfd_gettime(), timerfd_gettime_ns(), TFD_NONBLOCK, TFD_CLOEXEC, TFD_TIMER_ABSTIME, and TFD_TIMER_CANCEL_ON_SET. (Contributed by Masaru Tsuchiyama in gh-108277.)
- lchmod() and the follow_symlinks argument of chmod() are both now available on Windows. Note that the default value of follow_symlinks in lchmod() is False on Windows. (Contributed by Serhiy Storchaka in gh-59616.)
- fchmod() and support for file descriptors in chmod() are both now available on Windows. (Contributed by Serhiy Storchaka in gh-113191.)
- On Windows, mkdir() and makedirs() now support passing a mode value of 0o700 to apply access control to the new directory. This implicitly affects tempfile.mkdtemp() and is a mitigation for CVE 2024-4030. Other values for mode continue to be ignored.
(Contributed by Steve Dower in gh-118486.)
- posix_spawn() now accepts None for the env argument, which makes the newly spawned process use the current process environment. (Contributed by Jakub Kulik in gh-113119.)
- posix_spawn() can now use the POSIX_SPAWN_CLOSEFROM attribute in the file_actions parameter on platforms that support posix_spawn_file_actions_addclosefrom_np(). (Contributed by Jakub Kulik in gh-113117.)
os.path¶
- Add isreserved() to check if a path is reserved on the current system. This function is only available on Windows. (Contributed by Barney Gale in gh-88569.)
- On Windows, isabs() no longer considers paths starting with exactly one slash (\ or /) to be absolute. (Contributed by Barney Gale and Jon Foster in gh-44626.)
- realpath() now resolves MS-DOS style file names even if the file is not accessible. (Contributed by Moonsik Park in gh-82367.)
pathlib¶
- Add UnsupportedOperation, which is raised instead of NotImplementedError when a path operation isn't supported. (Contributed by Barney Gale in gh-89812.)
- Add a new constructor for creating Path objects from 'file' URIs (file:///), Path.from_uri(). (Contributed by Barney Gale in gh-107465.)
- Add PurePath.full_match() for matching paths with shell-style wildcards, including the recursive wildcard "**". (Contributed by Barney Gale in gh-73435.)
- Add the PurePath.parser class attribute to store the implementation of os.path used for low-level path parsing and joining. This will be either posixpath or ntpath.
- Add the recurse_symlinks keyword-only argument to Path.glob() and rglob(). (Contributed by Barney Gale in gh-77609.)
- Path.glob() and rglob() now return files and directories when given a pattern that ends with "**". Previously, only directories were returned. (Contributed by Barney Gale in gh-70303.)
- Add the follow_symlinks keyword-only argument to Path.is_file, Path.is_dir, Path.owner(), and Path.group().
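As an illustrative sketch of PurePath.full_match() (version-guarded, since the method is new in 3.13):

```python
import sys
from pathlib import PurePosixPath

if sys.version_info >= (3, 13):
    p = PurePosixPath("src/pkg/mod.py")
    # "**" matches any number of path segments, including none:
    print(p.full_match("src/**/*.py"))   # True
    # Unlike match(), the entire path must match the pattern:
    print(p.full_match("*.py"))          # False
```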
(Contributed by Barney Gale in gh-105793 and Kamil Turek in gh-107962.)
pdb¶
- breakpoint() and set_trace() now enter the debugger immediately rather than on the next line of code to be executed. This change prevents the debugger from breaking outside of the context when breakpoint() is positioned at the end of the context. (Contributed by Tian Gao in gh-118579.)
- sys.path[0] is no longer replaced by the directory of the script being debugged when sys.flags.safe_path is set. (Contributed by Tian Gao and Christian Walther in gh-111762.)
- zipapp is now supported as a debugging target. (Contributed by Tian Gao in gh-118501.)
- Add ability to move between chained exceptions during post-mortem debugging in pm() using the new exceptions [exc_number] command for Pdb. (Contributed by Matthias Bussonnier in gh-106676.)
- Expressions and statements whose prefix is a pdb command are now correctly identified and executed. (Contributed by Tian Gao in gh-108464.)
queue¶
- Add Queue.shutdown and ShutDown to manage queue termination. (Contributed by Laurie Opperman and Yves Duprat in gh-104750.)
random¶
- Add a command-line interface. (Contributed by Hugo van Kemenade in gh-118131.)
re¶
- Rename re.error to PatternError for improved clarity. re.error is kept for backward compatibility.
shutil¶
site¶
- .pth files are now decoded using UTF-8 first, and then with the locale encoding if UTF-8 decoding fails. (Contributed by Inada Naoki in gh-117802.)
sqlite3¶
- A ResourceWarning is now emitted if a Connection object is not closed explicitly. (Contributed by Erlend E. Aasland in gh-105539.)
- Add the filter keyword-only parameter to Connection.iterdump() for filtering database objects to dump.
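A hedged sketch of the iterdump() filter above, assuming (per the sqlite3 documentation) that filter takes a LIKE pattern matching object names; guarded because the parameter is new in 3.13:

```python
import sys
import sqlite3

cx = sqlite3.connect(":memory:")
cx.execute("CREATE TABLE users (name TEXT)")
cx.execute("CREATE TABLE logs (line TEXT)")

if sys.version_info >= (3, 13):
    # Dump only objects whose name matches the LIKE pattern "users":
    dump = "\n".join(cx.iterdump(filter="users"))
    print("users" in dump, "logs" in dump)   # True False
cx.close()
```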
(Contributed by Mariusz Felisiak in gh-91602.)
ssl¶
- The create_default_context() API now includes VERIFY_X509_PARTIAL_CHAIN and VERIFY_X509_STRICT in its default flags.
Note: VERIFY_X509_STRICT may reject pre-RFC 5280 or malformed certificates that the underlying OpenSSL implementation might otherwise accept. Whilst disabling this is not recommended, you can do so using:
import ssl

ctx = ssl.create_default_context()
ctx.verify_flags &= ~ssl.VERIFY_X509_STRICT
(Contributed by William Woodruff in gh-112389.)
statistics¶
- Add kde() for kernel density estimation. This makes it possible to estimate a continuous probability density function from a fixed number of discrete samples. (Contributed by Raymond Hettinger in gh-115863.)
- Add kde_random() for sampling from an estimated probability density function created by kde(). (Contributed by Raymond Hettinger in gh-115863.)
subprocess¶
- The subprocess module now uses the posix_spawn() function in more situations. Notably, when close_fds is True (the default), posix_spawn() will be used when the C library provides posix_spawn_file_actions_addclosefrom_np(), which includes recent versions of Linux, FreeBSD, and Solaris. On Linux, this should perform similarly to the existing Linux vfork() based code. A private control knob subprocess._USE_POSIX_SPAWN can be set to False if you need to force subprocess to never use posix_spawn(). Please report your reason and platform details in the issue tracker if you set this so that we can improve our API selection logic for everyone. (Contributed by Jakub Kulik in gh-113117.)
sys¶
- Add the _is_interned() function to test if a string was interned. This function is not guaranteed to exist in all implementations of Python. (Contributed by Serhiy Storchaka in gh-78573.)
tempfile¶
- On Windows, the default mode 0o700 used by tempfile.mkdtemp() now limits access to the new directory due to changes to os.mkdir().
This is a mitigation for CVE 2024-4030. (Contributed by Steve Dower in gh-118486.)
time¶
- On Windows, monotonic() now uses the QueryPerformanceCounter() clock for a resolution of 1 microsecond, instead of the GetTickCount64() clock which has a resolution of 15.6 milliseconds. (Contributed by Victor Stinner in gh-88494.)
- On Windows, time() now uses the GetSystemTimePreciseAsFileTime() clock for a resolution of 1 microsecond, instead of the GetSystemTimeAsFileTime() clock which has a resolution of 15.6 milliseconds. (Contributed by Victor Stinner in gh-63207.)
tkinter¶
- Add tkinter widget methods: tk_busy_hold(), tk_busy_configure(), tk_busy_cget(), tk_busy_forget(), tk_busy_current(), and tk_busy_status(). (Contributed by Miguel, klappnase and Serhiy Storchaka in gh-72684.)
- The tkinter widget method wm_attributes() now accepts the attribute name without the minus prefix to get window attributes, for example w.wm_attributes('alpha'), and allows specifying attributes and values to set as keyword arguments, for example w.wm_attributes(alpha=0.5). (Contributed by Serhiy Storchaka in gh-43457.)
- wm_attributes() can now return attributes as a dict, by using the new optional keyword-only parameter return_python_dict. (Contributed by Serhiy Storchaka in gh-43457.)
- Text.count() can now return a simple int when the new optional keyword-only parameter return_ints is used. Otherwise, the single count is returned as a 1-tuple or None. (Contributed by Serhiy Storchaka in gh-97928.)
- Support the "vsapi" element type in the element_create() method of tkinter.ttk.Style. (Contributed by Serhiy Storchaka in gh-68166.)
- Add the after_info() method for Tkinter widgets. (Contributed by Cheryl Sabella in gh-77020.)
- Add a new copy_replace() method to PhotoImage to copy a region from one image to another, possibly with pixel zooming, subsampling, or both.
(Contributed by Serhiy Storchaka in gh-118225.)
- Add from_coords parameter to the PhotoImage methods copy(), zoom() and subsample(). Add zoom and subsample parameters to the PhotoImage method copy(). (Contributed by Serhiy Storchaka in gh-118225.)
- Add the PhotoImage methods read() to read an image from a file and data() to get the image data. Add background and grayscale parameters to the write() method. (Contributed by Serhiy Storchaka in gh-118271.)
traceback¶
- Add the exc_type_str attribute to TracebackException, which holds a string display of the exc_type. Deprecate the exc_type attribute, which holds the type object itself. Add parameter save_exc_type (default True) to indicate whether exc_type should be saved. (Contributed by Irit Katriel in gh-112332.)
- Add a new show_group keyword-only parameter to TracebackException.format_exception_only() to (recursively) format the nested exceptions of a BaseExceptionGroup instance. (Contributed by Irit Katriel in gh-105292.)
types¶
- SimpleNamespace can now take a single positional argument to initialise the namespace's arguments. This argument must either be a mapping or an iterable of key-value pairs. (Contributed by Serhiy Storchaka in gh-108191.)
typing¶
- PEP 705: Add ReadOnly, a special typing construct to mark a TypedDict item as read-only for type checkers.
- PEP 742: Add TypeIs, a typing construct that can be used to instruct a type checker how to narrow a type.
- Add NoDefault, a sentinel object used to represent the defaults of some parameters in the typing module. (Contributed by Jelle Zijlstra in gh-116126.)
- Add get_protocol_members() to return the set of members defining a typing.Protocol. (Contributed by Jelle Zijlstra in gh-104873.)
- Add is_protocol() to check whether a class is a Protocol. (Contributed by Jelle Zijlstra in gh-104873.)
- ClassVar can now be nested in Final, and vice versa.
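A short sketch of the new TypeIs construct described above (the import is version-guarded, since TypeIs was added in 3.13; the function name is illustrative):

```python
import sys

if sys.version_info >= (3, 13):
    from typing import TypeIs

    def is_str_list(val: list[object]) -> TypeIs[list[str]]:
        # A type checker narrows val to list[str] when this returns True,
        # and to its complement when it returns False.
        return all(isinstance(x, str) for x in val)

    print(is_str_list(["a", "b"]))   # True
    print(is_str_list(["a", 1]))     # False
```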
(Contributed by Mehdi Drissi in gh-89547.)
unicodedata¶
- Update the Unicode database to version 15.1.0. (Contributed by James Gerity in gh-109559.)
venv¶
- Add support for creating source control management (SCM) ignore files in a virtual environment's directory. By default, Git is supported. This is implemented as opt-in via the API, which can be extended to support other SCMs (EnvBuilder and create()), and opt-out via the CLI, using --without-scm-ignore-files. (Contributed by Brett Cannon in gh-108125.)
warnings¶
- PEP 702: The new warnings.deprecated() decorator provides a way to communicate deprecations to a static type checker and to warn on usage of deprecated classes and functions. A DeprecationWarning may also be emitted when a decorated function or class is used at runtime. (Contributed by Jelle Zijlstra in gh-104003.)
xml¶
- Allow controlling Expat >=2.6.0 reparse deferral (CVE 2023-52425) by adding five new methods, including xml.sax.expatreader.ExpatParser.flush(). (Contributed by Sebastian Pipping in gh-115623.)
- Add the close() method for the iterator returned by iterparse() for explicit cleanup. (Contributed by Serhiy Storchaka in gh-69893.)
zipimport¶
Optimizations¶
- Several standard library modules have had their import times significantly improved. For example, the import time of the typing module has been reduced by around a third by removing dependencies on re and contextlib. Other modules to enjoy import-time speedups include email.utils, enum, functools, importlib.metadata, and threading. (Contributed by Alex Waygood, Shantanu Jain, Adam Turner, Daniel Hollas, and others in gh-109653.)
- textwrap.indent() is now around 30% faster than before for large input. (Contributed by Inada Naoki in gh-107369.)
- The subprocess module now uses the posix_spawn() function in more situations, including when close_fds is True (the default) on many modern platforms.
This should provide a notable performance increase when launching processes on FreeBSD and Solaris. See the subprocess section above for details. (Contributed by Jakub Kulik in gh-113117.)
Removed Modules And APIs¶
PEP 594: Remove "dead batteries" from the standard library¶
PEP 594 proposed removing 19 modules from the standard library, colloquially referred to as 'dead batteries' due to their historic, obsolete, or insecure status. All of the following modules were deprecated in Python 3.11, and are now removed:
- aifc
standard-aifc: Use the redistribution of aifc library from PyPI.
- audioop
audioop-lts: Use audioop-lts library from PyPI.
- chunk
standard-chunk: Use the redistribution of chunk library from PyPI.
- cgi and cgitb
cgi.FieldStorage can typically be replaced with urllib.parse.parse_qsl() for GET and HEAD requests, and the email.message module or the multipart library for POST and PUT requests.
cgi.parse() can be replaced by calling urllib.parse.parse_qs() directly on the desired query string, unless the input is multipart/form-data, which should be replaced as described below for cgi.parse_multipart().
cgi.parse_header() can be replaced with the functionality in the email package, which implements the same MIME RFCs. For example, with email.message.EmailMessage:
from email.message import EmailMessage

msg = EmailMessage()
msg['content-type'] = 'application/json; charset="utf8"'
main, params = msg.get_content_type(), msg['content-type'].params
cgi.parse_multipart() can be replaced with the functionality in the email package, which implements the same MIME RFCs, or with the multipart library. For example, the email.message.EmailMessage and email.message.Message classes.
standard-cgi and standard-cgitb: Use the redistribution of cgi and cgitb library from PyPI.
- crypt and the private _crypt extension. The hashlib module may be an appropriate replacement when simply hashing a value is required.
Otherwise, various third-party libraries on PyPI are available:
- bcrypt: Modern password hashing for your software and your servers.
- argon2-cffi: The secure Argon2 password hashing algorithm.
- legacycrypt: ctypes wrapper to the POSIX crypt library call and associated functionality.
- crypt_r: Fork of the crypt module, wrapper to the crypt_r(3) library call and associated functionality.
- standard-crypt and deprecated-crypt-alternative: Use the redistribution of crypt and reimplementation of _crypt libraries from PyPI.
- imghdr: The filetype, puremagic, or python-magic libraries should be used as replacements. For example, the puremagic.what() function can be used to replace the imghdr.what() function for all file formats that were supported by imghdr. standard-imghdr: Use the redistribution of imghdr library from PyPI.
- mailcap: Use the mimetypes module instead. standard-mailcap: Use the redistribution of mailcap library from PyPI.
- msilib
- nis
- nntplib: Use the pynntp library from PyPI instead. standard-nntplib: Use the redistribution of nntplib library from PyPI.
- ossaudiodev: For audio playback, use the pygame library from PyPI instead.
- pipes: Use the subprocess module instead.
Useshlex.quote()\nto replace the undocumentedpipes.quote\nfunction.standard-pipes: Use the redistribution of\npipes\nlibrary from PyPI.\nsndhdr\n: The filetype, puremagic, or python-magic libraries should be used as replacements.standard-sndhdr: Use the redistribution of\nsndhdr\nlibrary from PyPI.\nspwd\n: Use the python-pam library from PyPI instead.sunau\nstandard-sunau: Use the redistribution of\nsunau\nlibrary from PyPI.\ntelnetlib\n, Use the telnetlib3 or Exscript libraries from PyPI instead.standard-telnetlib: Use the redistribution of\ntelnetlib\nlibrary from PyPI.\nuu\n: Use thebase64\nmodule instead, as a modern alternative.standard-uu: Use the redistribution of\nuu\nlibrary from PyPI.\nxdrlib\nstandard-xdrlib: Use the redistribution of\nxdrlib\nlibrary from PyPI.\n(Contributed by Victor Stinner and Zachary Ware in gh-104773 and gh-104780.)\n2to3\u00b6\nRemove the 2to3 program and the\nlib2to3\nmodule, previously deprecated in Python 3.11. (Contributed by Victor Stinner in gh-104780.)\nbuiltins\u00b6\nRemove support for chained\nclassmethod\ndescriptors (introduced in gh-63272). These can no longer be used to wrap other descriptors, such asproperty\n. The core design of this feature was flawed and led to several problems. To \u201cpass-through\u201d aclassmethod\n, consider using the__wrapped__\nattribute that was added in Python 3.10. (Contributed by Raymond Hettinger in gh-89519.)Raise a\nRuntimeError\nwhen callingframe.clear()\non a suspended frame (as has always been the case for an executing frame). (Contributed by Irit Katriel in gh-79932.)\nconfigparser\u00b6\nRemove the undocumented\nLegacyInterpolation\nclass, deprecated in the docstring since Python 3.2, and at runtime since Python 3.11. (Contributed by Hugo van Kemenade in gh-104886.)\nimportlib.metadata\u00b6\nRemove deprecated subscript (\n__getitem__()\n) access for EntryPoint objects. (Contributed by Jason R. 
Coombs in gh-113175.)
locale¶
Remove the locale.resetlocale() function, deprecated in Python 3.11. Use locale.setlocale(locale.LC_ALL, "") instead. (Contributed by Victor Stinner in gh-104783.)
opcode¶
Move opcode.ENABLE_SPECIALIZATION to _opcode.ENABLE_SPECIALIZATION. This field was added in 3.12, was never documented, and is not intended for external use. (Contributed by Irit Katriel in gh-105481.)
Remove opcode.is_pseudo(), opcode.MIN_PSEUDO_OPCODE, and opcode.MAX_PSEUDO_OPCODE, which were added in Python 3.12 but were neither documented nor exposed through dis, and were not intended to be used externally. (Contributed by Irit Katriel in gh-105481.)
optparse¶
This module is no longer considered soft deprecated. While argparse remains preferred for new projects that aren't using a third-party command-line argument processing library, some aspects of how argparse works mean that the lower-level optparse module may provide a better foundation for writing argument processing libraries, and for implementing command-line applications that adhere more strictly than argparse does to various Unix command-line processing conventions originating in the behaviour of the C getopt() function. (Contributed by Alyssa Coghlan and Serhiy Storchaka in gh-126180.)
pathlib¶
re¶
Remove the undocumented, deprecated, and broken re.template() function and the re.TEMPLATE / re.T flag. (Contributed by Serhiy Storchaka and Nikita Sobolev in gh-105687.)
tkinter.tix¶
Remove the tkinter.tix module, deprecated in Python 3.6. The third-party Tix library which the module wrapped is unmaintained. (Contributed by Zachary Ware in gh-75552.)
turtle¶
Remove the RawTurtle.settiltangle() method, deprecated in the documentation since Python 3.1 and at runtime since Python 3.11. (Contributed by Hugo van Kemenade in gh-104876.)
typing¶
Remove the typing.io and typing.re namespaces, deprecated since Python 3.8.
The items in those namespaces can be imported directly from the typing module. (Contributed by Sebastian Rittau in gh-92871.)
Remove the keyword-argument method of creating TypedDict types, deprecated in Python 3.11. (Contributed by Tomas Roun in gh-104786.)
unittest¶
Remove the following unittest functions, deprecated in Python 3.11:
unittest.findTestCases()
unittest.makeSuite()
unittest.getTestCaseNames()
Use TestLoader methods instead:
unittest.TestLoader.loadTestsFromModule()
unittest.TestLoader.loadTestsFromTestCase()
unittest.TestLoader.getTestCaseNames()
(Contributed by Hugo van Kemenade in gh-104835.)
Remove the untested and undocumented TestProgram.usageExit() method, deprecated in Python 3.11. (Contributed by Hugo van Kemenade in gh-104992.)
urllib¶
Remove the cafile, capath, and cadefault parameters of the urllib.request.urlopen() function, deprecated in Python 3.6. Use the context parameter instead with an SSLContext instance. The ssl.SSLContext.load_cert_chain() function can be used to load specific certificates, or let ssl.create_default_context() select the operating system's trusted certificate authority (CA) certificates. (Contributed by Victor Stinner in gh-105382.)
webbrowser¶
Remove the untested and undocumented MacOSX class, deprecated in Python 3.11. Use the MacOSXOSAScript class (introduced in Python 3.2) instead. (Contributed by Hugo van Kemenade in gh-104804.)
Remove the deprecated MacOSXOSAScript._name attribute. Use the MacOSXOSAScript.name attribute instead. (Contributed by Nikita Sobolev in gh-105546.)
New Deprecations¶
ctypes:
Deprecate the undocumented SetPointerType() function, to be removed in Python 3.15. (Contributed by Victor Stinner in gh-105733.)
Soft-deprecate the ARRAY() function in favour of type * length multiplication. (Contributed by Victor Stinner in gh-105733.)
dis:
gettext:
Deprecate non-integer numbers as arguments to functions and methods that consider plural forms in the gettext module, even if no translation was found.
(Contributed by Serhiy Storchaka in gh-88434.)
glob:
Deprecate the undocumented glob0() and glob1() functions. Use glob() and pass a path-like object specifying the root directory to the root_dir parameter instead. (Contributed by Barney Gale in gh-117337.)
http.server:
Deprecate CGIHTTPRequestHandler, to be removed in Python 3.15. Process-based CGI HTTP servers have been out of favor for a very long time. This code was outdated, unmaintained, and rarely used. It has a high potential for both security and functionality bugs. (Contributed by Gregory P. Smith in gh-109096.)
Deprecate the --cgi flag to the python -m http.server command-line interface, to be removed in Python 3.15. (Contributed by Gregory P. Smith in gh-109096.)
mimetypes:
Soft-deprecate file path arguments to guess_type(); use guess_file_type() instead. (Contributed by Serhiy Storchaka in gh-66543.)
re:
Deprecate passing the optional maxsplit, count, or flags arguments as positional arguments to the module-level split(), sub(), and subn() functions. These parameters will become keyword-only in a future version of Python. (Contributed by Serhiy Storchaka in gh-56166.)
pathlib:
Deprecate PurePath.is_reserved(), to be removed in Python 3.15. Use os.path.isreserved() to detect reserved paths on Windows. (Contributed by Barney Gale in gh-88569.)
platform:
Deprecate java_ver(), to be removed in Python 3.15. This function is only useful for Jython support, has a confusing API, and is largely untested. (Contributed by Nikita Sobolev in gh-116349.)
pydoc:
Deprecate the undocumented ispackage() function. (Contributed by Zackery Spytz in gh-64020.)
sqlite3:
Deprecate passing more than one positional argument to the connect() function and the Connection constructor. The remaining parameters will become keyword-only in Python 3.15. (Contributed by Erlend E.
Aasland in gh-107948.)
Deprecate passing the name, the number of arguments, and the callable as keyword arguments to Connection.create_function() and Connection.create_aggregate(). These parameters will become positional-only in Python 3.15. (Contributed by Erlend E. Aasland in gh-108278.)
Deprecate passing the callback callable by keyword to the set_authorizer(), set_progress_handler(), and set_trace_callback() Connection methods. The callback callables will become positional-only in Python 3.15. (Contributed by Erlend E. Aasland in gh-108278.)
sys:
Deprecate the _enablelegacywindowsfsencoding() function, to be removed in Python 3.16. Use the PYTHONLEGACYWINDOWSFSENCODING environment variable instead. (Contributed by Inada Naoki in gh-73427.)
tarfile:
Deprecate the undocumented and unused TarFile.tarfile attribute, to be removed in Python 3.16. (Contributed in gh-115256.)
traceback:
Deprecate the TracebackException.exc_type attribute. Use TracebackException.exc_type_str instead. (Contributed by Irit Katriel in gh-112332.)
typing:
Deprecate the undocumented keyword argument syntax for creating NamedTuple classes (e.g. Point = NamedTuple("Point", x=int, y=int)), to be removed in Python 3.15. Use the class-based syntax or the functional syntax instead. (Contributed by Alex Waygood in gh-105566.)
Deprecate omitting the fields parameter when creating a NamedTuple or typing.TypedDict class, and deprecate passing None to the fields parameter of both types. Python 3.15 will require a valid sequence for the fields parameter. To create a NamedTuple class with zero fields, use class NT(NamedTuple): pass or NT = NamedTuple("NT", ()). To create a TypedDict class with zero fields, use class TD(TypedDict): pass or TD = TypedDict("TD", {}). (Contributed by Alex Waygood in gh-105566 and gh-105570.)
Deprecate the typing.no_type_check_decorator() decorator function, to be removed in Python 3.15.
After eight years in the typing module, it has yet to be supported by any major type checker. (Contributed by Alex Waygood in gh-106309.)
Deprecate typing.AnyStr. In Python 3.16, it will be removed from typing.__all__, and a DeprecationWarning will be emitted at runtime when it is imported or accessed. It will be removed entirely in Python 3.18. Use the new type parameter syntax instead. (Contributed by Michael The in gh-107116.)
wave:
Deprecate the getmark(), setmark(), and getmarkers() methods of the Wave_read and Wave_write classes, to be removed in Python 3.15. (Contributed by Victor Stinner in gh-105096.)
Pending removal in Python 3.14¶
argparse: The type, choices, and metavar parameters of argparse.BooleanOptionalAction are deprecated and will be removed in 3.14. (Contributed by Nikita Sobolev in gh-92248.)
ast: The following features have been deprecated in documentation since Python 3.8, now cause a DeprecationWarning to be emitted at runtime when they are accessed or used, and will be removed in Python 3.14:
ast.Num
ast.Str
ast.Bytes
ast.NameConstant
ast.Ellipsis
Use ast.Constant instead. (Contributed by Serhiy Storchaka in gh-90953.)
asyncio:
The child watcher classes asyncio.MultiLoopChildWatcher, asyncio.FastChildWatcher, asyncio.AbstractChildWatcher, and asyncio.SafeChildWatcher are deprecated and will be removed in Python 3.14. (Contributed by Kumar Aditya in gh-94597.)
asyncio.set_child_watcher(), asyncio.get_child_watcher(), asyncio.AbstractEventLoopPolicy.set_child_watcher(), and asyncio.AbstractEventLoopPolicy.get_child_watcher() are deprecated and will be removed in Python 3.14. (Contributed by Kumar Aditya in gh-94597.)
The get_event_loop() method of the default event loop policy now emits a DeprecationWarning if there is no current event loop set and it decides to create one. (Contributed by Serhiy Storchaka and Guido van Rossum in gh-100160.)
email: Deprecated the isdst parameter in email.utils.localtime().
(Contributed by Alan Williams in gh-72346.)
importlib.abc deprecated classes:
importlib.abc.ResourceReader
importlib.abc.Traversable
importlib.abc.TraversableResources
Use importlib.resources.abc classes instead:
importlib.resources.abc.Traversable
importlib.resources.abc.TraversableResources
(Contributed by Jason R. Coombs and Hugo van Kemenade in gh-93963.)
itertools had undocumented, inefficient, historically buggy, and inconsistent support for copy, deepcopy, and pickle operations. This will be removed in 3.14 for a significant reduction in code volume and maintenance burden. (Contributed by Raymond Hettinger in gh-101588.)
multiprocessing: The default start method will change to a safer one on Linux, BSDs, and other non-macOS POSIX platforms where 'fork' is currently the default (gh-84559). Adding a runtime warning about this was deemed too disruptive, as the majority of code is not expected to care. Use the get_context() or set_start_method() APIs to explicitly specify when your code requires 'fork'. See Contexts and start methods.
pathlib: is_relative_to() and relative_to(): passing additional arguments is deprecated.
pkgutil: pkgutil.find_loader() and pkgutil.get_loader() now raise DeprecationWarning; use importlib.util.find_spec() instead. (Contributed by Nikita Sobolev in gh-97850.)
pty:
master_open(): use pty.openpty().
slave_open(): use pty.openpty().
sqlite3:
version and version_info.
execute() and executemany() if named placeholders are used and parameters is a sequence instead of a dict.
urllib: urllib.parse.Quoter is deprecated: it was not intended to be a public API. (Contributed by Gregory P. Smith in gh-88168.)
Pending removal in Python 3.15¶
The import system:
Setting __cached__ on a module while failing to set __spec__.cached is deprecated. In Python 3.15, __cached__ will cease to be set or taken into consideration by the import system or standard library. (gh-97879)
Setting __package__ on a module while failing to set __spec__.parent is deprecated.
In Python 3.15, __package__ will cease to be set or taken into consideration by the import system or standard library. (gh-97879)
ctypes:
The undocumented ctypes.SetPointerType() function has been deprecated since Python 3.13.
http.server:
The obsolete and rarely used CGIHTTPRequestHandler has been deprecated since Python 3.13. No direct replacement exists. Anything is better than CGI to interface a web server with a request handler.
The --cgi flag to the python -m http.server command-line interface has been deprecated since Python 3.13.
importlib:
load_module() method: use exec_module() instead.
locale:
The getdefaultlocale() function has been deprecated since Python 3.11. Its removal was originally planned for Python 3.13 (gh-90817), but has been postponed to Python 3.15. Use getlocale(), setlocale(), and getencoding() instead. (Contributed by Hugo van Kemenade in gh-111187.)
pathlib:
PurePath.is_reserved() has been deprecated since Python 3.13. Use os.path.isreserved() to detect reserved paths on Windows.
platform:
java_ver() has been deprecated since Python 3.13. This function is only useful for Jython support, has a confusing API, and is largely untested.
sysconfig:
The check_home argument of sysconfig.is_python_build() has been deprecated since Python 3.12.
threading:
RLock() will take no arguments in Python 3.15. Passing any arguments has been deprecated since Python 3.14, as the Python version does not permit any arguments, but the C version allows any number of positional or keyword arguments, ignoring every argument.
types.CodeType: Accessing co_lnotab was deprecated in PEP 626 since 3.10 and was planned to be removed in 3.12, but it only got a proper DeprecationWarning in 3.12. May be removed in 3.15. (Contributed by Nikita Sobolev in gh-101866.)
typing:
The undocumented keyword argument syntax for creating NamedTuple classes (for example, Point = NamedTuple("Point", x=int, y=int)) has been deprecated since Python 3.13.
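The two supported spellings can be sketched as follows (a minimal illustration, not taken from the changelog):

```python
from typing import NamedTuple

# Class-based syntax: the usual replacement for the deprecated keyword form.
class Point(NamedTuple):
    x: int
    y: int

# Functional syntax: pass an explicit sequence of (name, type) pairs.
Point2 = NamedTuple("Point2", [("x", int), ("y", int)])

# Both produce plain tuples with named fields.
p = Point(1, 2)
assert p == Point2(1, 2) == (1, 2)
assert p.x == 1 and p.y == 2
```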
Use the class-based syntax or the functional syntax instead.
When using the functional syntax of TypedDicts, failing to pass a value to the fields parameter (TD = TypedDict("TD")) or passing None (TD = TypedDict("TD", None)) has been deprecated since Python 3.13. Use class TD(TypedDict): pass or TD = TypedDict("TD", {}) to create a TypedDict with zero fields.
The typing.no_type_check_decorator() decorator function has been deprecated since Python 3.13. After eight years in the typing module, it has yet to be supported by any major type checker.
wave:
The getmark(), setmark(), and getmarkers() methods of the Wave_read and Wave_write classes have been deprecated since Python 3.13.
zipimport:
load_module() has been deprecated since Python 3.10. Use exec_module() instead. (Contributed by Jiahao Li in gh-125746.)
Pending removal in Python 3.16¶
The import system:
Setting __loader__ on a module while failing to set __spec__.loader is deprecated. In Python 3.16, __loader__ will cease to be set or taken into consideration by the import system or the standard library.
array:
The 'u' format code (wchar_t) has been deprecated in documentation since Python 3.3 and at runtime since Python 3.13. Use the 'w' format code (Py_UCS4) for Unicode characters instead.
asyncio:
asyncio.iscoroutinefunction() is deprecated and will be removed in Python 3.16; use inspect.iscoroutinefunction() instead. (Contributed by Jiahao Li and Kumar Aditya in gh-122875.)
The asyncio policy system is deprecated and will be removed in Python 3.16. In particular, the following classes and functions are deprecated:
Users should use asyncio.run() or asyncio.Runner with loop_factory to use the desired event loop implementation.
For example, to use asyncio.SelectorEventLoop on Windows:
import asyncio

async def main():
    ...
asyncio.run(main(), loop_factory=asyncio.SelectorEventLoop)
(Contributed by Kumar Aditya in gh-127949.)
builtins:
Bitwise inversion on boolean types, ~True or ~False, has been deprecated since Python 3.12, as it produces surprising and unintuitive results (-2 and -1). Use not x instead for the logical negation of a Boolean. In the rare case that you need the bitwise inversion of the underlying integer, convert to int explicitly (~int(x)).
functools:
Calling the Python implementation of functools.reduce() with function or sequence as keyword arguments has been deprecated since Python 3.14.
logging:
Support for custom logging handlers with the strm argument is deprecated and scheduled for removal in Python 3.16. Define handlers with the stream argument instead. (Contributed by Mariusz Felisiak in gh-115032.)
mimetypes:
Valid extensions start with a '.' or are empty for mimetypes.MimeTypes.add_type(). Undotted extensions are deprecated and will raise a ValueError in Python 3.16. (Contributed by Hugo van Kemenade in gh-75223.)
shutil:
The ExecError exception has been deprecated since Python 3.14. It has not been used by any function in shutil since Python 3.4, and is now an alias of RuntimeError.
symtable:
The Class.get_methods method has been deprecated since Python 3.14.
sys:
The _enablelegacywindowsfsencoding() function has been deprecated since Python 3.13. Use the PYTHONLEGACYWINDOWSFSENCODING environment variable instead.
sysconfig:
The sysconfig.expand_makefile_vars() function has been deprecated since Python 3.14. Use the vars argument of sysconfig.get_paths() instead.
tarfile:
The undocumented and unused TarFile.tarfile attribute has been deprecated since Python 3.13.
Pending removal in Python 3.17¶
collections.abc:
collections.abc.ByteString is scheduled for removal in Python 3.17.
Use isinstance(obj, collections.abc.Buffer) to test if obj implements the buffer protocol at runtime.
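A minimal sketch of that runtime check; the fallback tuple is an assumption added here so the snippet also runs on interpreters older than 3.12, where the Buffer ABC does not exist:

```python
try:
    from collections.abc import Buffer  # added in Python 3.12 (PEP 688)
except ImportError:
    # Older interpreters: fall back to an explicit tuple of common
    # buffer-protocol types (isinstance accepts a tuple too).
    Buffer = (bytes, bytearray, memoryview)

# All of these implement the buffer protocol.
for obj in (b"abc", bytearray(b"abc"), memoryview(b"abc")):
    assert isinstance(obj, Buffer)

# str does not implement the buffer protocol.
assert not isinstance("abc", Buffer)
```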
For use in type annotations, either use Buffer or a union that explicitly specifies the types your code supports (e.g., bytes | bytearray | memoryview). ByteString was originally intended to be an abstract class that would serve as a supertype of both bytes and bytearray. However, since the ABC never had any methods, knowing that an object was an instance of ByteString never actually told you anything useful about the object. Other common buffer types such as memoryview were also never understood as subtypes of ByteString (either at runtime or by static type checkers). See PEP 688 for more details. (Contributed by Shantanu Jain in gh-91896.)
typing:
Before Python 3.14, old-style unions were implemented using the private class typing._UnionGenericAlias. This class is no longer needed for the implementation, but it has been retained for backward compatibility, with removal scheduled for Python 3.17. Users should use documented introspection helpers like typing.get_origin() and typing.get_args() instead of relying on private implementation details.
typing.ByteString, deprecated since Python 3.9, is scheduled for removal in Python 3.17.
Use isinstance(obj, collections.abc.Buffer) to test if obj implements the buffer protocol at runtime. For use in type annotations, either use Buffer or a union that explicitly specifies the types your code supports (e.g., bytes | bytearray | memoryview). ByteString was originally intended to be an abstract class that would serve as a supertype of both bytes and bytearray. However, since the ABC never had any methods, knowing that an object was an instance of ByteString never actually told you anything useful about the object. Other common buffer types such as memoryview were also never understood as subtypes of ByteString (either at runtime or by static type checkers). See PEP 688 for more details.
(Contributed by Shantanu Jain in gh-91896.)
Pending removal in Python 3.18¶
Pending removal in Python 3.19¶
Pending removal in future versions¶
The following APIs will be removed in the future, although there is currently no date scheduled for their removal.
argparse:
Nesting argument groups and nesting mutually exclusive groups are deprecated.
Passing the undocumented keyword argument prefix_chars to add_argument_group() is now deprecated.
The argparse.FileType type converter is deprecated.
builtins:
Generators: the throw(type, exc, tb) and athrow(type, exc, tb) signature is deprecated: use the single-argument signature throw(exc) and athrow(exc) instead.
Currently Python accepts numeric literals immediately followed by keywords, for example 0in x, 1or x, 0if 1else 2. It allows confusing and ambiguous expressions like [0x1for x in y] (which can be interpreted as [0x1 for x in y] or [0x1f or x in y]). A syntax warning is raised if the numeric literal is immediately followed by one of the keywords and, else, for, if, in, is, and or. In a future release it will be changed to a syntax error. (gh-87999)
Support for __index__() and __int__() methods returning a non-int type: these methods will be required to return an instance of a strict subclass of int.
Support for __float__() methods returning a strict subclass of float: these methods will be required to return an instance of float.
Support for __complex__() methods returning a strict subclass of complex: these methods will be required to return an instance of complex.
Passing a complex number as the real or imag argument in the complex() constructor is now deprecated; it should only be passed as a single positional argument. (Contributed by Serhiy Storchaka in gh-109218.)
calendar: The calendar.January and calendar.February constants are deprecated and replaced by calendar.JANUARY and calendar.FEBRUARY. (Contributed by Prince Roshan in gh-103636.)
codecs: use open() instead of codecs.open().
(gh-133038)
codeobject.co_lnotab: use the codeobject.co_lines() method instead.
datetime:
utcnow(): use datetime.datetime.now(tz=datetime.UTC).
utcfromtimestamp(): use datetime.datetime.fromtimestamp(timestamp, tz=datetime.UTC).
gettext: Plural value must be an integer.
importlib:
The cache_from_source() debug_override parameter is deprecated: use the optimization parameter instead.
importlib.metadata:
EntryPoints tuple interface.
Implicit None on return values.
logging: the warn() method has been deprecated since Python 3.3; use warning() instead.
mailbox: Use of StringIO input and text mode is deprecated; use BytesIO and binary mode instead.
os: Calling os.register_at_fork() in a multi-threaded process.
pydoc.ErrorDuringImport: A tuple value for the exc_info parameter is deprecated; use an exception instance.
re: More strict rules are now applied for numerical group references and group names in regular expressions. Only a sequence of ASCII digits is now accepted as a numerical reference. The group name in bytes patterns and replacement strings can now only contain ASCII letters, digits, and underscores.
(Contributed by Serhiy Storchaka in gh-91760.)
The sre_compile, sre_constants, and sre_parse modules.
shutil: rmtree()'s onerror parameter is deprecated in Python 3.12; use the onexc parameter instead.
ssl options and protocols:
ssl.SSLContext without a protocol argument is deprecated.
ssl.SSLContext: set_npn_protocols() and selected_npn_protocol() are deprecated: use ALPN instead.
ssl.OP_NO_SSL* options
ssl.OP_NO_TLS* options
ssl.PROTOCOL_SSLv3
ssl.PROTOCOL_TLS
ssl.PROTOCOL_TLSv1
ssl.PROTOCOL_TLSv1_1
ssl.PROTOCOL_TLSv1_2
ssl.TLSVersion.SSLv3
ssl.TLSVersion.TLSv1
ssl.TLSVersion.TLSv1_1
threading methods:
threading.Condition.notifyAll(): use notify_all().
threading.Event.isSet(): use is_set().
threading.Thread.isDaemon(), threading.Thread.setDaemon(): use the threading.Thread.daemon attribute.
threading.Thread.getName(), threading.Thread.setName(): use the threading.Thread.name attribute.
threading.currentThread(): use threading.current_thread().
threading.activeCount(): use threading.active_count().
typing:
The internal class typing._UnionGenericAlias is no longer used to implement typing.Union. To preserve compatibility with users of this private class, a compatibility shim will be provided until at least Python 3.17. (Contributed by Jelle Zijlstra in gh-105499.)
unittest.IsolatedAsyncioTestCase: it is deprecated to return a value that is not None from a test case.
urllib.parse deprecated functions: use urlparse() instead of:
splitattr()
splithost()
splitnport()
splitpasswd()
splitport()
splitquery()
splittag()
splittype()
splituser()
splitvalue()
to_bytes()
wsgiref: SimpleHandler.stdout.write() should not do partial writes.
xml.etree.ElementTree: Testing the truth value of an Element is deprecated. In a future release it will always return True.
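For example (an illustration added here, not from the changelog), under the current behaviour a found-but-childless element is falsy, which is the ambiguity being deprecated; explicit tests avoid it:

```python
import xml.etree.ElementTree as ET

root = ET.fromstring("<root><empty/></root>")
elem = root.find("empty")

# The element WAS found, it merely has no child elements, so relying on
# its truth value would wrongly suggest it is missing.
assert elem is not None   # explicit presence test
assert len(elem) == 0     # explicit child-count test
```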
Prefer explicit len(elem) or elem is not None tests instead.
sys._clear_type_cache() is deprecated: use sys._clear_internal_caches() instead.
CPython Bytecode Changes¶
The oparg of YIELD_VALUE is now 1 if the yield is part of a yield-from or await, and 0 otherwise. The oparg of RESUME was changed to add a bit indicating if the except-depth is 1, which is needed to optimize closing of generators. (Contributed by Irit Katriel in gh-111354.)
C API Changes¶
New Features¶
Add the PyMonitoring C API for generating PEP 669 monitoring events:
PyMonitoring_FireBranchEvent
(Contributed by Irit Katriel in gh-111997.)
Add PyMutex, a lightweight mutex that occupies a single byte, and the new PyMutex_Lock() and PyMutex_Unlock() functions. PyMutex_Lock() will release the GIL (if currently held) if the operation needs to block. (Contributed by Sam Gross in gh-108724.)
Add the PyTime C API to provide access to system clocks: PyTime_MIN and PyTime_MAX. (Contributed by Victor Stinner and Petr Viktorin in gh-110850.)
Add the PyDict_ContainsString() function with the same behavior as PyDict_Contains(), but the key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*. (Contributed by Victor Stinner in gh-108314.)
Add the PyDict_GetItemRef() and PyDict_GetItemStringRef() functions, which behave similarly to PyDict_GetItemWithError(), but return a strong reference instead of a borrowed reference. Moreover, these functions return -1 on error, removing the need to check PyErr_Occurred(). (Contributed by Victor Stinner in gh-106004.)
Add the PyDict_SetDefaultRef() function, which behaves similarly to PyDict_SetDefault(), but returns a strong reference instead of a borrowed reference. This function returns -1 on error, 0 on insertion, and 1 if the key was already present in the dictionary.
(Contributed by Sam Gross in gh-112066.)
Add the PyDict_Pop() and PyDict_PopString() functions to remove a key from a dictionary and optionally return the removed value. This is similar to dict.pop(), though there is no default value, and KeyError is not raised for missing keys. (Contributed by Stefan Behnel and Victor Stinner in gh-111262.)
Add the PyMapping_GetOptionalItem() and PyMapping_GetOptionalItemString() functions as alternatives to PyObject_GetItem() and PyMapping_GetItemString() respectively. The new functions do not raise KeyError if the requested key is missing from the mapping. These variants are more convenient and faster if a missing key should not be treated as a failure. (Contributed by Serhiy Storchaka in gh-106307.)
Add the PyObject_GetOptionalAttr() and PyObject_GetOptionalAttrString() functions as alternatives to PyObject_GetAttr() and PyObject_GetAttrString() respectively. The new functions do not raise AttributeError if the requested attribute is not found on the object. These variants are more convenient and faster if the missing attribute should not be treated as a failure. (Contributed by Serhiy Storchaka in gh-106521.)
Add the PyErr_FormatUnraisable() function as an extension to PyErr_WriteUnraisable() that allows customizing the warning message. (Contributed by Serhiy Storchaka in gh-108082.)
Add new functions that return a strong reference instead of a borrowed reference for frame locals, globals, and builtins, as part of PEP 667:
PyEval_GetFrameBuiltins()
PyEval_GetFrameGlobals()
PyEval_GetFrameLocals()
(Contributed by Mark Shannon and Tian Gao in gh-74929.)
Add the Py_GetConstant() and Py_GetConstantBorrowed() functions to get strong or borrowed references to constants. For example, Py_GetConstant(Py_CONSTANT_ZERO) returns a strong reference to the constant zero. (Contributed by Victor Stinner in gh-115754.)
Add the PyImport_AddModuleRef() function as a replacement for PyImport_AddModule() that returns a strong reference instead of a borrowed reference.
(Contributed by Victor Stinner in gh-105922.)
Add the Py_IsFinalizing() function to check whether the main Python interpreter is shutting down. (Contributed by Victor Stinner in gh-108014.)
Add the PyList_GetItemRef() function as a replacement for PyList_GetItem() that returns a strong reference instead of a borrowed reference. (Contributed by Sam Gross in gh-114329.)
Add the PyList_Extend() and PyList_Clear() functions, mirroring the Python list.extend() and list.clear() methods. (Contributed by Victor Stinner in gh-111138.)
Add the PyLong_AsInt() function. It behaves similarly to PyLong_AsLong(), but stores the result in a C int instead of a C long. (Contributed by Victor Stinner in gh-108014.)
Add the PyLong_AsNativeBytes(), PyLong_FromNativeBytes(), and PyLong_FromUnsignedNativeBytes() functions to simplify converting between native integer types and Python int objects. (Contributed by Steve Dower in gh-111140.)
Add the PyModule_Add() function, which is similar to PyModule_AddObjectRef() and PyModule_AddObject(), but always steals a reference to the value. (Contributed by Serhiy Storchaka in gh-86493.)
Add the PyObject_GenericHash() function that implements the default hashing function of a Python object. (Contributed by Serhiy Storchaka in gh-113024.)
Add the Py_HashPointer() function to hash a raw pointer. (Contributed by Victor Stinner in gh-111545.)
Add the PyObject_VisitManagedDict() and PyObject_ClearManagedDict() functions, which must be called by the traverse and clear functions of a type using the Py_TPFLAGS_MANAGED_DICT flag. The pythoncapi-compat project can be used to use these functions with Python 3.11 and 3.12. (Contributed by Victor Stinner in gh-107073.)
Add the PyRefTracer_SetTracer() and PyRefTracer_GetTracer() functions, which enable tracking object creation and destruction in the same way that the tracemalloc module does.
(Contributed by Pablo Galindo in gh-93502.)
Add the PySys_AuditTuple() function as an alternative to PySys_Audit() that takes event arguments as a Python tuple object. (Contributed by Victor Stinner in gh-85283.)
Add the PyThreadState_GetUnchecked() function as an alternative to PyThreadState_Get() that doesn't kill the process with a fatal error if it is NULL. The caller is responsible for checking if the result is NULL. (Contributed by Victor Stinner in gh-108867.)
Add the PyType_GetFullyQualifiedName() function to get the type's fully qualified name. The module name is prepended if type.__module__ is a string and is not equal to either 'builtins' or '__main__'. (Contributed by Victor Stinner in gh-111696.)
Add the PyType_GetModuleName() function to get the type's module name. This is equivalent to getting the type.__module__ attribute. (Contributed by Eric Snow and Victor Stinner in gh-111696.)
Add the PyUnicode_EqualToUTF8AndSize() and PyUnicode_EqualToUTF8() functions to compare a Unicode object with a const char* UTF-8 encoded string, returning 1 if they are equal or 0 otherwise. These functions do not raise exceptions. (Contributed by Serhiy Storchaka in gh-110289.)
Add the PyWeakref_GetRef() function as an alternative to PyWeakref_GetObject() that returns a strong reference, or NULL if the referent is no longer live. (Contributed by Victor Stinner in gh-105927.)
Add fixed variants of functions which silently ignore errors:
PyObject_HasAttrStringWithError() replaces PyObject_HasAttrString().
PyMapping_HasKeyStringWithError() replaces PyMapping_HasKeyString().
The new functions return -1 for errors and the standard 1 for true and 0 for false.
(Contributed by Serhiy Storchaka in gh-108511.)
Changed C APIs¶
The keywords parameter of PyArg_ParseTupleAndKeywords() and PyArg_VaParseTupleAndKeywords() now has type char *const* in C and const char *const* in C++, instead of char**.
In C++, this makes these functions compatible with arguments of type const char *const*, const char**, or char *const* without an explicit type cast. In C, the functions only support arguments of type char *const*. This can be overridden with the PY_CXX_CONST macro. (Contributed by Serhiy Storchaka in gh-65210.)
PyArg_ParseTupleAndKeywords() now supports non-ASCII keyword parameter names. (Contributed by Serhiy Storchaka in gh-110815.)
The PyCode_GetFirstFree() function is now an unstable API and is now named PyUnstable_Code_GetFirstFree(). (Contributed by Bogdan Romanyuk in gh-115781.)
The PyDict_GetItem(), PyDict_GetItemString(), PyMapping_HasKey(), PyMapping_HasKeyString(), PyObject_HasAttr(), PyObject_HasAttrString(), and PySys_GetObject() functions, each of which clears all errors which occurred when calling them, now report these errors using sys.unraisablehook(). You may replace them with other functions as recommended in the documentation. (Contributed by Serhiy Storchaka in gh-106672.)
Add support for the %T, %#T, %N, and %#N formats to PyUnicode_FromFormat():
%T: Get the fully qualified name of an object type.
%#T: As above, but use a colon as the separator.
%N: Get the fully qualified name of a type.
%#N: As above, but use a colon as the separator.
See PEP 737 for more information. (Contributed by Victor Stinner in gh-111696.)
You no longer have to define the PY_SSIZE_T_CLEAN macro before including Python.h when using # formats in format codes. APIs accepting the format codes always use Py_ssize_t for # formats. (Contributed by Inada Naoki in gh-104922.)
If Python is built in debug mode or with assertions, PyTuple_SET_ITEM() and PyList_SET_ITEM() now check the index argument with an assertion. (Contributed by Victor Stinner in gh-106168.)
Limited C API Changes¶
The following functions are now included in the Limited C API:
Python built with --with-trace-refs (tracing references) now supports the Limited API.
(Contributed by Victor Stinner in gh-108634.)\nRemoved C APIs\u00b6\nRemove several functions, macros, variables, etc with names prefixed by\n_Py\nor_PY\n(which are considered private). If your project is affected by one of these removals and you believe that the removed API should remain available, please open a new issue to request a public C API and addcc: @vstinner\nto the issue to notify Victor Stinner. (Contributed by Victor Stinner in gh-106320.)Remove old buffer protocols deprecated in Python 3.0. Use Buffer Protocol instead.\nPyObject_CheckReadBuffer()\n: UsePyObject_CheckBuffer()\nto test whether the object supports the buffer protocol. Note thatPyObject_CheckBuffer()\ndoesn\u2019t guarantee thatPyObject_GetBuffer()\nwill succeed. To test if the object is actually readable, see the next example ofPyObject_GetBuffer()\n.PyObject_AsCharBuffer()\n,PyObject_AsReadBuffer()\n: UsePyObject_GetBuffer()\nandPyBuffer_Release()\ninstead:Py_buffer view; if (PyObject_GetBuffer(obj, &view, PyBUF_SIMPLE) < 0) { return NULL; } // Use `view.buf` and `view.len` to read from the buffer. // You may need to cast buf as `(const char*)view.buf`. PyBuffer_Release(&view);\nPyObject_AsWriteBuffer()\n: UsePyObject_GetBuffer()\nandPyBuffer_Release()\ninstead:Py_buffer view; if (PyObject_GetBuffer(obj, &view, PyBUF_WRITABLE) < 0) { return NULL; } // Use `view.buf` and `view.len` to write to the buffer. PyBuffer_Release(&view);\n(Contributed by Inada Naoki in gh-85275.)\nRemove various functions deprecated in Python 3.9:\nPyEval_CallObject()\n,PyEval_CallObjectWithKeywords()\n: UsePyObject_CallNoArgs()\norPyObject_Call()\ninstead.Warning\nIn\nPyObject_Call()\n, positional arguments must be atuple\nand must not beNULL\n, and keyword arguments must be adict\norNULL\n, whereas the removed functions checked argument types and acceptedNULL\npositional and keyword arguments. 
To replace PyEval_CallObjectWithKeywords(func, NULL, kwargs) with PyObject_Call(), pass an empty tuple as positional arguments using PyTuple_New(0).\nPyEval_CallFunction(): Use PyObject_CallFunction() instead.\nPyEval_CallMethod(): Use PyObject_CallMethod() instead.\nPyCFunction_Call(): Use PyObject_Call() instead.\n(Contributed by Victor Stinner in gh-105107.)\nRemove the following old functions to configure the Python initialization, deprecated in Python 3.11:\nPySys_AddWarnOptionUnicode(): Use PyConfig.warnoptions instead.\nPySys_AddWarnOption(): Use PyConfig.warnoptions instead.\nPySys_AddXOption(): Use PyConfig.xoptions instead.\nPySys_HasWarnOptions(): Use PyConfig.warnoptions instead.\nPySys_SetPath(): Set PyConfig.module_search_paths instead.\nPy_SetPath(): Set PyConfig.module_search_paths instead.\nPy_SetStandardStreamEncoding(): Set PyConfig.stdio_encoding instead, and on Windows also set PyConfig.legacy_windows_stdio if needed.\n_Py_SetProgramFullPath(): Set PyConfig.executable instead.\nUse the new PyConfig API of the Python Initialization Configuration (PEP 587), added to Python 3.8, instead. (Contributed by Victor Stinner in gh-105145.)\nRemove the PyEval_AcquireLock() and PyEval_ReleaseLock() functions, deprecated in Python 3.2. They didn\u2019t update the current thread state. They can be replaced with the low-level PyEval_AcquireThread() and PyEval_RestoreThread() functions. (Contributed by Victor Stinner in gh-105182.)\nRemove the PyEval_ThreadsInitialized() function, deprecated in Python 3.9. Since Python 3.7, Py_Initialize() always creates the GIL: calling PyEval_InitThreads() does nothing and PyEval_ThreadsInitialized() always returns non-zero. (Contributed by Victor Stinner in gh-105182.)\nRemove the _PyInterpreterState_Get() alias to PyInterpreterState_Get() which was kept for backward compatibility with Python 3.8. The pythoncapi-compat project can be used to get PyInterpreterState_Get() on Python 3.8 and older.
(Contributed by Victor Stinner in gh-106320.)Remove the private\n_PyObject_FastCall()\nfunction: usePyObject_Vectorcall()\nwhich is available since Python 3.8 (PEP 590). (Contributed by Victor Stinner in gh-106023.)Remove the\ncpython/pytime.h\nheader file, which only contained private functions. (Contributed by Victor Stinner in gh-106316.)Remove the undocumented\nPY_TIMEOUT_MAX\nconstant from the limited C API. (Contributed by Victor Stinner in gh-110014.)Remove the old trashcan macros\nPy_TRASHCAN_SAFE_BEGIN\nandPy_TRASHCAN_SAFE_END\n. Replace both with the new macrosPy_TRASHCAN_BEGIN\nandPy_TRASHCAN_END\n. (Contributed by Irit Katriel in gh-105111.)\nDeprecated C APIs\u00b6\nDeprecate old Python initialization functions:\nPySys_ResetWarnOptions()\n: Clearsys.warnoptions\nandwarnings.filters\ninstead.Py_GetExecPrefix()\n: Getsys.exec_prefix\ninstead.Py_GetPath()\n: Getsys.path\ninstead.Py_GetPrefix()\n: Getsys.prefix\ninstead.Py_GetProgramFullPath()\n: Getsys.executable\ninstead.Py_GetProgramName()\n: Getsys.executable\ninstead.Py_GetPythonHome()\n: GetPyConfig.home\nor thePYTHONHOME\nenvironment variable instead.\n(Contributed by Victor Stinner in gh-105145.)\nSoft deprecate the\nPyEval_GetBuiltins()\n,PyEval_GetGlobals()\n, andPyEval_GetLocals()\nfunctions, which return a borrowed reference. (Soft deprecated as part of PEP 667.)Deprecate the\nPyImport_ImportModuleNoBlock()\nfunction, which is just an alias toPyImport_ImportModule()\nsince Python 3.3. (Contributed by Victor Stinner in gh-105396.)Soft deprecate the\nPyModule_AddObject()\nfunction. It should be replaced withPyModule_Add()\norPyModule_AddObjectRef()\n. (Contributed by Serhiy Storchaka in gh-86493.)Deprecate the old\nPy_UNICODE\nandPY_UNICODE_TYPE\ntypes and thePy_UNICODE_WIDE\ndefine. Use thewchar_t\ntype directly instead. Since Python 3.3,Py_UNICODE\nandPY_UNICODE_TYPE\nare just aliases towchar_t\n. 
(Contributed by Victor Stinner in gh-105156.)Deprecate the\nPyWeakref_GetObject()\nandPyWeakref_GET_OBJECT()\nfunctions, which return a borrowed reference. Replace them with the newPyWeakref_GetRef()\nfunction, which returns a strong reference. The pythoncapi-compat project can be used to getPyWeakref_GetRef()\non Python 3.12 and older. (Contributed by Victor Stinner in gh-105927.)\nPending removal in Python 3.14\u00b6\nThe\nma_version_tag\nfield inPyDictObject\nfor extension modules (PEP 699; gh-101193).Creating\nimmutable types\nwith mutable bases (gh-95388).\nPending removal in Python 3.15\u00b6\nThe\nPyImport_ImportModuleNoBlock()\n: UsePyImport_ImportModule()\ninstead.PyWeakref_GetObject()\nandPyWeakref_GET_OBJECT()\n: UsePyWeakref_GetRef()\ninstead. The pythoncapi-compat project can be used to getPyWeakref_GetRef()\non Python 3.12 and older.Py_UNICODE\ntype and thePy_UNICODE_WIDE\nmacro: Usewchar_t\ninstead.PyUnicode_AsDecodedObject()\n: UsePyCodec_Decode()\ninstead.PyUnicode_AsDecodedUnicode()\n: UsePyCodec_Decode()\ninstead; Note that some codecs (for example, \u201cbase64\u201d) may return a type other thanstr\n, such asbytes\n.PyUnicode_AsEncodedObject()\n: UsePyCodec_Encode()\ninstead.PyUnicode_AsEncodedUnicode()\n: UsePyCodec_Encode()\ninstead; Note that some codecs (for example, \u201cbase64\u201d) may return a type other thanbytes\n, such asstr\n.Python initialization functions, deprecated in Python 3.13:\nPy_GetPath()\n: UsePyConfig_Get(\"module_search_paths\")\n(sys.path\n) instead.Py_GetPrefix()\n: UsePyConfig_Get(\"base_prefix\")\n(sys.base_prefix\n) instead. UsePyConfig_Get(\"prefix\")\n(sys.prefix\n) if virtual environments need to be handled.Py_GetExecPrefix()\n: UsePyConfig_Get(\"base_exec_prefix\")\n(sys.base_exec_prefix\n) instead. 
UsePyConfig_Get(\"exec_prefix\")\n(sys.exec_prefix\n) if virtual environments need to be handled.Py_GetProgramFullPath()\n: UsePyConfig_Get(\"executable\")\n(sys.executable\n) instead.Py_GetProgramName()\n: UsePyConfig_Get(\"executable\")\n(sys.executable\n) instead.Py_GetPythonHome()\n: UsePyConfig_Get(\"home\")\nor thePYTHONHOME\nenvironment variable instead.\nThe pythoncapi-compat project can be used to get\nPyConfig_Get()\non Python 3.13 and older.Functions to configure Python\u2019s initialization, deprecated in Python 3.11:\nPySys_SetArgvEx()\n: SetPyConfig.argv\ninstead.PySys_SetArgv()\n: SetPyConfig.argv\ninstead.Py_SetProgramName()\n: SetPyConfig.program_name\ninstead.Py_SetPythonHome()\n: SetPyConfig.home\ninstead.PySys_ResetWarnOptions()\n: Clearsys.warnoptions\nandwarnings.filters\ninstead.\nThe\nPy_InitializeFromConfig()\nAPI should be used withPyConfig\ninstead.Global configuration variables:\nPy_DebugFlag\n: UsePyConfig.parser_debug\norPyConfig_Get(\"parser_debug\")\ninstead.Py_VerboseFlag\n: UsePyConfig.verbose\norPyConfig_Get(\"verbose\")\ninstead.Py_QuietFlag\n: UsePyConfig.quiet\norPyConfig_Get(\"quiet\")\ninstead.Py_InteractiveFlag\n: UsePyConfig.interactive\norPyConfig_Get(\"interactive\")\ninstead.Py_InspectFlag\n: UsePyConfig.inspect\norPyConfig_Get(\"inspect\")\ninstead.Py_OptimizeFlag\n: UsePyConfig.optimization_level\norPyConfig_Get(\"optimization_level\")\ninstead.Py_NoSiteFlag\n: UsePyConfig.site_import\norPyConfig_Get(\"site_import\")\ninstead.Py_BytesWarningFlag\n: UsePyConfig.bytes_warning\norPyConfig_Get(\"bytes_warning\")\ninstead.Py_FrozenFlag\n: UsePyConfig.pathconfig_warnings\norPyConfig_Get(\"pathconfig_warnings\")\ninstead.Py_IgnoreEnvironmentFlag\n: UsePyConfig.use_environment\norPyConfig_Get(\"use_environment\")\ninstead.Py_DontWriteBytecodeFlag\n: UsePyConfig.write_bytecode\norPyConfig_Get(\"write_bytecode\")\ninstead.Py_NoUserSiteDirectory\n: 
UsePyConfig.user_site_directory\norPyConfig_Get(\"user_site_directory\")\ninstead.Py_UnbufferedStdioFlag\n: UsePyConfig.buffered_stdio\norPyConfig_Get(\"buffered_stdio\")\ninstead.Py_HashRandomizationFlag\n: UsePyConfig.use_hash_seed\nandPyConfig.hash_seed\norPyConfig_Get(\"hash_seed\")\ninstead.Py_IsolatedFlag\n: UsePyConfig.isolated\norPyConfig_Get(\"isolated\")\ninstead.Py_LegacyWindowsFSEncodingFlag\n: UsePyPreConfig.legacy_windows_fs_encoding\norPyConfig_Get(\"legacy_windows_fs_encoding\")\ninstead.Py_LegacyWindowsStdioFlag\n: UsePyConfig.legacy_windows_stdio\norPyConfig_Get(\"legacy_windows_stdio\")\ninstead.Py_FileSystemDefaultEncoding\n,Py_HasFileSystemDefaultEncoding\n: UsePyConfig.filesystem_encoding\norPyConfig_Get(\"filesystem_encoding\")\ninstead.Py_FileSystemDefaultEncodeErrors\n: UsePyConfig.filesystem_errors\norPyConfig_Get(\"filesystem_errors\")\ninstead.Py_UTF8Mode\n: UsePyPreConfig.utf8_mode\norPyConfig_Get(\"utf8_mode\")\ninstead. (seePy_PreInitialize()\n)\nThe\nPy_InitializeFromConfig()\nAPI should be used withPyConfig\nto set these options. 
OrPyConfig_Get()\ncan be used to get these options at runtime.\nPending removal in Python 3.16\u00b6\nThe bundled copy of\nlibmpdec\n.\nPending removal in Python 3.18\u00b6\nThe following private functions are deprecated and planned for removal in Python 3.18:\n_PyBytes_Join()\n: usePyBytes_Join()\n._PyDict_GetItemStringWithError()\n: usePyDict_GetItemStringRef()\n._PyDict_Pop()\n: usePyDict_Pop()\n._PyLong_Sign()\n: usePyLong_GetSign()\n._PyLong_FromDigits()\nand_PyLong_New()\n: usePyLongWriter_Create()\n._PyThreadState_UncheckedGet()\n: usePyThreadState_GetUnchecked()\n._PyUnicode_AsString()\n: usePyUnicode_AsUTF8()\n._PyUnicodeWriter_Init()\n: replace_PyUnicodeWriter_Init(&writer)\nwithwriter = PyUnicodeWriter_Create(0)\n._PyUnicodeWriter_Finish()\n: replace_PyUnicodeWriter_Finish(&writer)\nwithPyUnicodeWriter_Finish(writer)\n._PyUnicodeWriter_Dealloc()\n: replace_PyUnicodeWriter_Dealloc(&writer)\nwithPyUnicodeWriter_Discard(writer)\n._PyUnicodeWriter_WriteChar()\n: replace_PyUnicodeWriter_WriteChar(&writer, ch)\nwithPyUnicodeWriter_WriteChar(writer, ch)\n._PyUnicodeWriter_WriteStr()\n: replace_PyUnicodeWriter_WriteStr(&writer, str)\nwithPyUnicodeWriter_WriteStr(writer, str)\n._PyUnicodeWriter_WriteSubstring()\n: replace_PyUnicodeWriter_WriteSubstring(&writer, str, start, end)\nwithPyUnicodeWriter_WriteSubstring(writer, str, start, end)\n._PyUnicodeWriter_WriteASCIIString()\n: replace_PyUnicodeWriter_WriteASCIIString(&writer, str)\nwithPyUnicodeWriter_WriteASCII(writer, str)\n._PyUnicodeWriter_WriteLatin1String()\n: replace_PyUnicodeWriter_WriteLatin1String(&writer, str)\nwithPyUnicodeWriter_WriteUTF8(writer, str)\n._PyUnicodeWriter_Prepare()\n: (no replacement)._PyUnicodeWriter_PrepareKind()\n: (no replacement)._Py_HashPointer()\n: usePy_HashPointer()\n._Py_fopen_obj()\n: usePy_fopen()\n.\nThe pythoncapi-compat project can be used to get these new public functions on Python 3.13 and older. 
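Many of the legacy global configuration variables listed earlier (Py_OptimizeFlag, Py_IsolatedFlag, Py_DontWriteBytecodeFlag, and so on) are also visible from Python code through the read-only sys.flags snapshot, which can help verify what a given configuration produced. A small sketch (the flag-to-field correspondences noted in the comments are assumptions based on the list above):

```python
import sys

# sys.flags is a read-only snapshot of interpreter options; each field
# corresponds to one of the legacy globals / PyConfig fields, e.g.
# optimize <-> Py_OptimizeFlag / PyConfig.optimization_level and
# isolated <-> Py_IsolatedFlag / PyConfig.isolated.
assert sys.flags.optimize in (0, 1, 2)
assert sys.flags.isolated in (0, 1)
assert isinstance(sys.flags.dont_write_bytecode, int)
```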
(Contributed by Victor Stinner in gh-128863.)\nPending removal in future versions\u00b6\nThe following APIs are deprecated and will be removed, although there is currently no date scheduled for their removal.\nPy_TPFLAGS_HAVE_FINALIZE\n: Unneeded since Python 3.8.PyErr_Fetch()\n: UsePyErr_GetRaisedException()\ninstead.PyErr_NormalizeException()\n: UsePyErr_GetRaisedException()\ninstead.PyErr_Restore()\n: UsePyErr_SetRaisedException()\ninstead.PyModule_GetFilename()\n: UsePyModule_GetFilenameObject()\ninstead.PyOS_AfterFork()\n: UsePyOS_AfterFork_Child()\ninstead.PySlice_GetIndicesEx()\n: UsePySlice_Unpack()\nandPySlice_AdjustIndices()\ninstead.PyUnicode_READY()\n: Unneeded since Python 3.12PyErr_Display()\n: UsePyErr_DisplayException()\ninstead._PyErr_ChainExceptions()\n: Use_PyErr_ChainExceptions1()\ninstead.PyBytesObject.ob_shash\nmember: callPyObject_Hash()\ninstead.Thread Local Storage (TLS) API:\nPyThread_create_key()\n: UsePyThread_tss_alloc()\ninstead.PyThread_delete_key()\n: UsePyThread_tss_free()\ninstead.PyThread_set_key_value()\n: UsePyThread_tss_set()\ninstead.PyThread_get_key_value()\n: UsePyThread_tss_get()\ninstead.PyThread_delete_key_value()\n: UsePyThread_tss_delete()\ninstead.PyThread_ReInitTLS()\n: Unneeded since Python 3.7.\nBuild Changes\u00b6\narm64-apple-ios\nandarm64-apple-ios-simulator\nare both now PEP 11 tier 3 platforms. (PEP 730 written and implementation contributed by Russell Keith-Magee in gh-114099.)aarch64-linux-android\nandx86_64-linux-android\nare both now PEP 11 tier 3 platforms. (PEP 738 written and implementation contributed by Malcolm Smith in gh-116622.)wasm32-wasi\nis now a PEP 11 tier 2 platform. (Contributed by Brett Cannon in gh-115192.)wasm32-emscripten\nis no longer a PEP 11 supported platform. 
(Contributed by Brett Cannon in gh-115192.)Building CPython now requires a compiler with support for the C11 atomic library, GCC built-in atomic functions, or MSVC interlocked intrinsics.\nAutoconf 2.71 and aclocal 1.16.5 are now required to regenerate the\nconfigure\nscript. (Contributed by Christian Heimes in gh-89886 and by Victor Stinner in gh-112090.)SQLite 3.15.2 or newer is required to build the\nsqlite3\nextension module. (Contributed by Erlend Aasland in gh-105875.)CPython now bundles the mimalloc library by default. It is licensed under the MIT license; see mimalloc license. The bundled mimalloc has custom changes, see gh-113141 for details. (Contributed by Dino Viehland in gh-109914.)\nThe\nconfigure\noption--with-system-libmpdec\nnow defaults toyes\n. The bundled copy oflibmpdec\nwill be removed in Python 3.16.Python built with\nconfigure\n--with-trace-refs\n(tracing references) is now ABI compatible with the Python release build and debug build. (Contributed by Victor Stinner in gh-108634.)On POSIX systems, the pkg-config (\n.pc\n) filenames now include the ABI flags. For example, the free-threaded build generatespython-3.13t.pc\nand the debug build generatespython-3.13d.pc\n.The\nerrno\n,fcntl\n,grp\n,md5\n,pwd\n,resource\n,termios\n,winsound\n,_ctypes_test\n,_multiprocessing.posixshmem\n,_scproxy\n,_stat\n,_statistics\n,_testconsole\n,_testimportmultiple\nand_uuid\nC extensions are now built with the limited C API. (Contributed by Victor Stinner in gh-85283.)\nPorting to Python 3.13\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code.\nChanges in the Python API\u00b6\nPEP 667 introduces several changes to the semantics of\nlocals()\nandf_locals\n:Calling\nlocals()\nin an optimized scope now produces an independent snapshot on each call, and hence no longer implicitly updates previously returned references. 
Obtaining the legacy CPython behavior now requires explicit calls to update the initially returned dictionary with the results of subsequent calls to locals(). Code execution functions that implicitly target locals() (such as exec and eval) must be passed an explicit namespace to access their results in an optimized scope. (Changed as part of PEP 667.)\nCalling locals() from a comprehension at module or class scope (including via exec or eval) once more behaves as if the comprehension were running as an independent nested function (i.e. the local variables from the containing scope are not included). In Python 3.12, this had changed to include the local variables from the containing scope when implementing PEP 709. (Changed as part of PEP 667.)\nAccessing FrameType.f_locals in an optimized scope now returns a write-through proxy rather than a snapshot that gets updated at ill-specified times. If a snapshot is desired, it must be created explicitly with dict() or the proxy\u2019s .copy() method. (Changed as part of PEP 667.)\nfunctools.partial now emits a FutureWarning when used as a method. The behavior will change in future Python versions. Wrap it in staticmethod() if you want to preserve the old behavior. (Contributed by Serhiy Storchaka in gh-121027.)\nAn OSError is now raised by getpass.getuser() for any failure to retrieve a username, instead of ImportError on non-Unix platforms or KeyError on Unix platforms where the password database is empty.\nThe value of the mode attribute of gzip.GzipFile is now a string ('rb' or 'wb') instead of an integer (1 or 2). The value of the mode attribute of the readable file-like object returned by zipfile.ZipFile.open() is now 'rb' instead of 'r'. (Contributed by Serhiy Storchaka in gh-115961.)\nmailbox.Maildir now ignores files with a leading dot (.).
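The PEP 667 locals() semantics above can be demonstrated directly from Python. A minimal sketch (mutating the snapshot never rebinds the real local, and exec() needs an explicit namespace to expose its results; the independent-snapshot-per-call guarantee is specific to 3.13+, but these two observable behaviors hold on earlier versions too):

```python
def probe():
    x = 1
    snap = locals()     # in an optimized scope: a snapshot, not the real variables
    snap["x"] = 99      # mutating the snapshot...
    return x            # ...does not rebind the local

def run_in_namespace():
    ns = {}
    exec("y = 2", globals(), ns)   # pass an explicit namespace to observe results
    return ns["y"]

assert probe() == 1
assert run_in_namespace() == 2
```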
(Contributed by Zackery Spytz in gh-65559.)\npathlib.Path.glob() and rglob() now return both files and directories if a pattern ending with \u201c**\u201d is given, rather than directories only. Add a trailing slash to keep the previous behavior and only match directories.\nThe threading module now expects the _thread module to have an _is_main_interpreter() function. This function takes no arguments and returns True if the current interpreter is the main interpreter.\nAny library or application that provides a custom _thread module must provide _is_main_interpreter(), just like the module\u2019s other \u201cprivate\u201d attributes. (gh-112826.)\nChanges in the C API\u00b6\nPython.h no longer includes the <ieeefp.h> standard header. It was included for the finite() function, which is now provided by the <math.h> header. It should now be included explicitly if needed. The HAVE_IEEEFP_H macro was also removed. (Contributed by Victor Stinner in gh-108765.)\nPython.h no longer includes these standard header files: <time.h>, <sys/select.h> and <sys/time.h>. If needed, they should now be included explicitly. For example, <time.h> provides the clock() and gmtime() functions, <sys/select.h> provides the select() function, and <sys/time.h> provides the futimes(), gettimeofday() and setitimer() functions. (Contributed by Victor Stinner in gh-108765.)\nOn Windows, Python.h no longer includes the <stddef.h> standard header file. If needed, it should now be included explicitly. For example, it provides the offsetof() macro, and the size_t and ptrdiff_t types. Including <stddef.h> explicitly was already needed by all other platforms; the HAVE_STDDEF_H macro is only defined on Windows. (Contributed by Victor Stinner in gh-108765.)\nIf the Py_LIMITED_API macro is defined, the Py_BUILD_CORE, Py_BUILD_CORE_BUILTIN and Py_BUILD_CORE_MODULE macros are now undefined by <Python.h>. (Contributed by Victor Stinner in gh-85283.)\nThe old trashcan macros Py_TRASHCAN_SAFE_BEGIN and Py_TRASHCAN_SAFE_END were removed.
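The glob("**") change described above is easy to check. A small sketch (only the directory assertion is version-independent; matching plain files as well requires Python 3.13+, so no assertion is made about them):

```python
import pathlib
import tempfile

with tempfile.TemporaryDirectory() as d:
    root = pathlib.Path(d)
    (root / "sub").mkdir()
    (root / "sub" / "f.txt").write_text("data")

    matched = {p.name for p in root.glob("**")}
    # Directories such as "sub" match "**" on every Python version;
    # plain files like "f.txt" are matched as well only on 3.13+.
    assert "sub" in matched
```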
They should be replaced by the new macrosPy_TRASHCAN_BEGIN\nandPy_TRASHCAN_END\n.A\ntp_dealloc\nfunction that has the old macros, such as:static void mytype_dealloc(mytype *p) { PyObject_GC_UnTrack(p); Py_TRASHCAN_SAFE_BEGIN(p); ... Py_TRASHCAN_SAFE_END }\nshould migrate to the new macros as follows:\nstatic void mytype_dealloc(mytype *p) { PyObject_GC_UnTrack(p); Py_TRASHCAN_BEGIN(p, mytype_dealloc) ... Py_TRASHCAN_END }\nNote that\nPy_TRASHCAN_BEGIN\nhas a second argument which should be the deallocation function it is in. The new macros were added in Python 3.8 and the old macros were deprecated in Python 3.11. (Contributed by Irit Katriel in gh-105111.)\nPEP 667 introduces several changes to frame-related functions:\nThe effects of mutating the dictionary returned from\nPyEval_GetLocals()\nin an optimized scope have changed. New dict entries added this way will now only be visible to subsequentPyEval_GetLocals()\ncalls in that frame, asPyFrame_GetLocals()\n,locals()\n, andFrameType.f_locals\nno longer access the same underlying cached dictionary. Changes made to entries for actual variable names and names added via the write-through proxy interfaces will be overwritten on subsequent calls toPyEval_GetLocals()\nin that frame. The recommended code update depends on how the function was being used, so refer to the deprecation notice on the function for details.Calling\nPyFrame_GetLocals()\nin an optimized scope now returns a write-through proxy rather than a snapshot that gets updated at ill-specified times. If a snapshot is desired, it must be created explicitly (e.g. withPyDict_Copy()\n), or by calling the newPyEval_GetFrameLocals()\nAPI.PyFrame_FastToLocals()\nandPyFrame_FastToLocalsWithError()\nno longer have any effect. Calling these functions has been redundant since Python 3.11, whenPyFrame_GetLocals()\nwas first introduced.PyFrame_LocalsToFast()\nno longer has any effect. 
Calling this function is redundant now thatPyFrame_GetLocals()\nreturns a write-through proxy for optimized scopes.\nPython 3.13 removed many private functions. Some of them can be replaced using these alternatives:\n_PyDict_Pop()\n:PyDict_Pop()\norPyDict_PopString()\n;_PyDict_GetItemWithError()\n:PyDict_GetItemRef()\n;_PyErr_WriteUnraisableMsg()\n:PyErr_FormatUnraisable()\n;_PyEval_SetTrace()\n:PyEval_SetTrace()\norPyEval_SetTraceAllThreads()\n;_PyList_Extend()\n:PyList_Extend()\n;_PyLong_AsInt()\n:PyLong_AsInt()\n;_PyMem_RawStrdup()\n:strdup()\n;_PyMem_Strdup()\n:strdup()\n;_PyObject_ClearManagedDict()\n:PyObject_ClearManagedDict()\n;_PyObject_VisitManagedDict()\n:PyObject_VisitManagedDict()\n;_PyThreadState_UncheckedGet()\n:PyThreadState_GetUnchecked()\n;_PyTime_AsSecondsDouble()\n:PyTime_AsSecondsDouble()\n;_PyTime_GetMonotonicClock()\n:PyTime_Monotonic()\norPyTime_MonotonicRaw()\n;_PyTime_GetPerfCounter()\n:PyTime_PerfCounter()\norPyTime_PerfCounterRaw()\n;_PyTime_GetSystemClock()\n:PyTime_Time()\norPyTime_TimeRaw()\n;_PyTime_MAX\n:PyTime_MAX\n;_PyTime_MIN\n:PyTime_MIN\n;_PyTime_t\n:PyTime_t\n;_Py_HashPointer()\n:Py_HashPointer()\n;_Py_IsFinalizing()\n:Py_IsFinalizing()\n.\nThe pythoncapi-compat project can be used to get most of these new functions on Python 3.12 and older.\nRegression Test Changes\u00b6\nPython built with\nconfigure\n--with-pydebug\nnow supports a-X presite=package.module\ncommand-line option. If used, it specifies a module that should be imported early in the lifecycle of the interpreter, beforesite.py\nis executed. 
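At the Python level, the explicit-snapshot advice for PyFrame_GetLocals() corresponds to copying frame.f_locals into a plain dict (the analogue of PyDict_Copy()). A minimal sketch using sys._getframe():

```python
import sys

def snapshot_demo():
    x = "value"
    frame = sys._getframe()
    # Explicit snapshot of the frame's locals; mutating the copy never
    # touches the real local variable.
    snap = dict(frame.f_locals)
    snap["x"] = "changed"
    return x, snap["x"]

assert snapshot_demo() == ("value", "changed")
```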
(Contributed by \u0141ukasz Langa in gh-110769.)", "code_snippets": ["\n\n", " ", "\n ", "\n\n", " ", "\n", " ", "\n", " ", "\n", "\n ", "\n ", "\n ", "\n ", "\n", "\n", " ", "\n", " ", "\n", "\n ", "\n ", " ", "\n ", "\n ", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 26330} +{"url": "https://docs.python.org/3/howto/perf_profiling.html", "title": "Python support for the Linux ", "content": "Python support for the Linux perf\nprofiler\u00b6\n- author:\nPablo Galindo\nThe Linux perf profiler\nis a very powerful tool that allows you to profile and obtain\ninformation about the performance of your application.\nperf\nalso has a very vibrant ecosystem of tools\nthat aid with the analysis of the data that it produces.\nThe main problem with using the perf\nprofiler with Python applications is that\nperf\nonly gets information about native symbols, that is, the names of\nfunctions and procedures written in C. This means that the names and file names\nof Python functions in your code will not appear in the output of perf\n.\nSince Python 3.12, the interpreter can run in a special mode that allows Python\nfunctions to appear in the output of the perf\nprofiler. When this mode is\nenabled, the interpreter will interpose a small piece of code compiled on the\nfly before the execution of every Python function and it will teach perf\nthe\nrelationship between this piece of code and the associated Python function using\nperf map files.\nNote\nSupport for the perf\nprofiler is currently only available for Linux on\nselect architectures. 
Check the output of the configure\nbuild step or\ncheck the output of python -m sysconfig | grep HAVE_PERF_TRAMPOLINE\nto see if your system is supported.\nFor example, consider the following script:\ndef foo(n):\nresult = 0\nfor _ in range(n):\nresult += 1\nreturn result\ndef bar(n):\nfoo(n)\ndef baz(n):\nbar(n)\nif __name__ == \"__main__\":\nbaz(1000000)\nWe can run perf\nto sample CPU stack traces at 9999 hertz:\n$ perf record -F 9999 -g -o perf.data python my_script.py\nThen we can use perf report\nto analyze the data:\n$ perf report --stdio -n -g\n# Children Self Samples Command Shared Object Symbol\n# ........ ........ ............ .......... .................. ..........................................\n#\n91.08% 0.00% 0 python.exe python.exe [.] _start\n|\n---_start\n|\n--90.71%--__libc_start_main\nPy_BytesMain\n|\n|--56.88%--pymain_run_python.constprop.0\n| |\n| |--56.13%--_PyRun_AnyFileObject\n| | _PyRun_SimpleFileObject\n| | |\n| | |--55.02%--run_mod\n| | | |\n| | | --54.65%--PyEval_EvalCode\n| | | _PyEval_EvalFrameDefault\n| | | PyObject_Vectorcall\n| | | _PyEval_Vector\n| | | _PyEval_EvalFrameDefault\n| | | PyObject_Vectorcall\n| | | _PyEval_Vector\n| | | _PyEval_EvalFrameDefault\n| | | PyObject_Vectorcall\n| | | _PyEval_Vector\n| | | |\n| | | |--51.67%--_PyEval_EvalFrameDefault\n| | | | |\n| | | | |--11.52%--_PyLong_Add\n| | | | | |\n| | | | | |--2.97%--_PyObject_Malloc\n...\nAs you can see, the Python functions are not shown in the output, only _PyEval_EvalFrameDefault\n(the function that evaluates the Python bytecode) shows up. Unfortunately that\u2019s not very useful because all Python\nfunctions use the same C function to evaluate bytecode so we cannot know which Python function corresponds to which\nbytecode-evaluating function.\nInstead, if we run the same experiment with perf\nsupport enabled we get:\n$ perf report --stdio -n -g\n# Children Self Samples Command Shared Object Symbol\n# ........ ........ ............ .......... 
.................. .....................................................................\n#\n90.58% 0.36% 1 python.exe python.exe [.] _start\n|\n---_start\n|\n--89.86%--__libc_start_main\nPy_BytesMain\n|\n|--55.43%--pymain_run_python.constprop.0\n| |\n| |--54.71%--_PyRun_AnyFileObject\n| | _PyRun_SimpleFileObject\n| | |\n| | |--53.62%--run_mod\n| | | |\n| | | --53.26%--PyEval_EvalCode\n| | | py:::/src/script.py\n| | | _PyEval_EvalFrameDefault\n| | | PyObject_Vectorcall\n| | | _PyEval_Vector\n| | | py::baz:/src/script.py\n| | | _PyEval_EvalFrameDefault\n| | | PyObject_Vectorcall\n| | | _PyEval_Vector\n| | | py::bar:/src/script.py\n| | | _PyEval_EvalFrameDefault\n| | | PyObject_Vectorcall\n| | | _PyEval_Vector\n| | | py::foo:/src/script.py\n| | | |\n| | | |--51.81%--_PyEval_EvalFrameDefault\n| | | | |\n| | | | |--13.77%--_PyLong_Add\n| | | | | |\n| | | | | |--3.26%--_PyObject_Malloc\nHow to enable perf\nprofiling support\u00b6\nperf\nprofiling support can be enabled either from the start using\nthe environment variable PYTHONPERFSUPPORT\nor the\n-X perf\noption,\nor dynamically using sys.activate_stack_trampoline()\nand\nsys.deactivate_stack_trampoline()\n.\nThe sys\nfunctions take precedence over the -X\noption,\nthe -X\noption takes precedence over the environment variable.\nExample, using the environment variable:\n$ PYTHONPERFSUPPORT=1 perf record -F 9999 -g -o perf.data python my_script.py\n$ perf report -g -i perf.data\nExample, using the -X\noption:\n$ perf record -F 9999 -g -o perf.data python -X perf my_script.py\n$ perf report -g -i perf.data\nExample, using the sys\nAPIs in file example.py\n:\nimport sys\nsys.activate_stack_trampoline(\"perf\")\ndo_profiled_stuff()\nsys.deactivate_stack_trampoline()\nnon_profiled_stuff()\n\u2026then:\n$ perf record -F 9999 -g -o perf.data python ./example.py\n$ perf report -g -i perf.data\nHow to obtain the best results\u00b6\nFor best results, Python should be compiled with\nCFLAGS=\"-fno-omit-frame-pointer 
-mno-omit-leaf-frame-pointer\"\nas this allows\nprofilers to unwind using only the frame pointer and not on DWARF debug\ninformation. This is because as the code that is interposed to allow perf\nsupport is dynamically generated it doesn\u2019t have any DWARF debugging information\navailable.\nYou can check if your system has been compiled with this flag by running:\n$ python -m sysconfig | grep 'no-omit-frame-pointer'\nIf you don\u2019t see any output it means that your interpreter has not been compiled with\nframe pointers and therefore it may not be able to show Python functions in the output\nof perf\n.\nHow to work without frame pointers\u00b6\nIf you are working with a Python interpreter that has been compiled without\nframe pointers, you can still use the perf\nprofiler, but the overhead will be\na bit higher because Python needs to generate unwinding information for every\nPython function call on the fly. Additionally, perf\nwill take more time to\nprocess the data because it will need to use the DWARF debugging information to\nunwind the stack and this is a slow process.\nTo enable this mode, you can use the environment variable\nPYTHON_PERF_JIT_SUPPORT\nor the -X perf_jit\noption,\nwhich will enable the JIT mode for the perf\nprofiler.\nNote\nDue to a bug in the perf\ntool, only perf\nversions higher than v6.8\nwill work with the JIT mode. The fix was also backported to the v6.7.2\nversion of the tool.\nNote that when checking the version of the perf\ntool (which can be done\nby running perf version\n) you must take into account that some distros\nadd some custom version numbers including a -\ncharacter. This means\nthat perf 6.7-3\nis not necessarily perf 6.7.3\n.\nWhen using the perf JIT mode, you need an extra step before you can run perf\nreport\n. 
You need to call the perf inject\ncommand to inject the JIT\ninformation into the perf.data\nfile.:\n$ perf record -F 9999 -g -k 1 --call-graph dwarf -o perf.data python -Xperf_jit my_script.py\n$ perf inject -i perf.data --jit --output perf.jit.data\n$ perf report -g -i perf.jit.data\nor using the environment variable:\n$ PYTHON_PERF_JIT_SUPPORT=1 perf record -F 9999 -g --call-graph dwarf -o perf.data python my_script.py\n$ perf inject -i perf.data --jit --output perf.jit.data\n$ perf report -g -i perf.jit.data\nperf inject --jit\ncommand will read perf.data\n,\nautomatically pick up the perf dump file that Python creates (in\n/tmp/perf-$PID.dump\n), and then create perf.jit.data\nwhich merges all the\nJIT information together. It should also create a lot of jitted-XXXX-N.so\nfiles in the current directory which are ELF images for all the JIT trampolines\nthat were created by Python.\nWarning\nWhen using --call-graph dwarf\n, the perf\ntool will take\nsnapshots of the stack of the process being profiled and save the\ninformation in the perf.data\nfile. By default, the size of the stack dump\nis 8192 bytes, but you can change the size by passing it after\na comma like --call-graph dwarf,16384\n.\nThe size of the stack dump is important because if the size is too small\nperf\nwill not be able to unwind the stack and the output will be\nincomplete. On the other hand, if the size is too big, then perf\nwon\u2019t\nbe able to sample the process as frequently as it would like as the overhead\nwill be higher.\nThe stack size is particularly important when profiling Python code compiled\nwith low optimization levels (like -O0\n), as these builds tend to have\nlarger stack frames. 
If you are compiling Python with -O0\nand not seeing\nPython functions in your profiling output, try increasing the stack dump\nsize to 65528 bytes (the maximum):\n$ perf record -F 9999 -g -k 1 --call-graph dwarf,65528 -o perf.data python -Xperf_jit my_script.py\nDifferent compilation flags can significantly impact stack sizes:\nBuilds with\n-O0\ntypically have much larger stack frames than those with-O1\nor higherAdding optimizations (\n-O1\n,-O2\n, etc.) typically reduces stack sizeFrame pointers (\n-fno-omit-frame-pointer\n) generally provide more reliable stack unwinding", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2222} +{"url": "https://docs.python.org/3/c-api/cell.html", "title": "Cell Objects", "content": "Cell Objects\u00b6\n\u201cCell\u201d objects are used to implement variables referenced by multiple scopes. For each such variable, a cell object is created to store the value; the local variables of each stack frame that references the value contain a reference to the cells from outer scopes which also use that variable. When the value is accessed, the value contained in the cell is used instead of the cell object itself. This de-referencing of the cell object requires support from the generated byte-code; these are not automatically de-referenced when accessed. Cell objects are not likely to be useful elsewhere.\n-\ntype PyCellObject\u00b6\nThe C structure used for cell objects.\n-\nPyTypeObject PyCell_Type\u00b6\nThe type object corresponding to cell objects.\n-\nint PyCell_Check(PyObject *ob)\u00b6\nReturn true if ob is a cell object; ob must not be\nNULL\n. This function always succeeds.\n-\nPyObject *PyCell_New(PyObject *ob)\u00b6\n- Return value: New reference.\nCreate and return a new cell object containing the value ob. The parameter may be\nNULL\n.\n-\nPyObject *PyCell_Get(PyObject *cell)\u00b6\n- Return value: New reference.\nReturn the contents of the cell cell, which can be\nNULL\n. 
If cell is not a cell object, returnsNULL\nwith an exception set.\n-\nPyObject *PyCell_GET(PyObject *cell)\u00b6\n- Return value: Borrowed reference.\nReturn the contents of the cell cell, but without checking that cell is non-\nNULL\nand a cell object.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 348} +{"url": "https://docs.python.org/3/library/asyncio-exceptions.html", "title": "Exceptions", "content": "Exceptions\u00b6\nSource code: Lib/asyncio/exceptions.py\n- exception asyncio.TimeoutError\u00b6\nA deprecated alias of\nTimeoutError\n, raised when the operation has exceeded the given deadline.Changed in version 3.11: This class was made an alias of\nTimeoutError\n.\n- exception asyncio.CancelledError\u00b6\nThe operation has been cancelled.\nThis exception can be caught to perform custom operations when asyncio Tasks are cancelled. In almost all situations the exception must be re-raised.\nChanged in version 3.8:\nCancelledError\nis now a subclass ofBaseException\nrather thanException\n.\n- exception asyncio.InvalidStateError\u00b6\nInvalid internal state of\nTask\norFuture\n.Can be raised in situations like setting a result value for a Future object that already has a result value set.\n- exception asyncio.SendfileNotAvailableError\u00b6\nThe \u201csendfile\u201d syscall is not available for the given socket or file type.\nA subclass of\nRuntimeError\n.\n- exception asyncio.IncompleteReadError\u00b6\nThe requested read operation did not complete fully.\nRaised by the asyncio stream APIs.\nThis exception is a subclass of\nEOFError\n.\n- exception asyncio.LimitOverrunError\u00b6\nReached the buffer size limit while looking for a separator.\nRaised by the asyncio stream APIs.\n- consumed\u00b6\nThe total number of to be consumed bytes.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 318} +{"url": "https://docs.python.org/3/", "title": "Python 3.14.3 documentation", "content": "Python 
3.14.3 documentation\nWelcome! This is the official documentation for Python 3.14.3.\nDocumentation sections:\n|\nWhat's new in Python 3.14?\nOr all \"What's new\" documents since Python 2.0\nTutorial\nStart here: a tour of Python's syntax and features\nLibrary reference\nStandard library and builtins\nLanguage reference\nSyntax and language elements\nPython setup and usage\nHow to install, configure, and use Python\nPython HOWTOs\nIn-depth topic manuals\n|\nInstalling Python modules\nThird-party modules and PyPI.org\nDistributing Python modules\nPublishing modules for use by other people\nExtending and embedding\nFor C/C++ programmers\nPython's C API\nC API reference\nFAQs\nFrequently asked questions (with answers!)\nDeprecations\nDeprecated functionality\n|\nIndices, glossary, and search:\n|\nGlobal module index\nAll modules and libraries\nGeneral index\nAll functions, classes, and terms\nGlossary\nTerms explained\n|\nSearch page\nSearch this documentation\nComplete table of contents\nLists all sections and subsections\n|\nProject information:", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 255} +{"url": "https://docs.python.org/3/c-api/init_config.html", "title": "Python Initialization Configuration", "content": "Python Initialization Configuration\u00b6\nPyInitConfig C API\u00b6\nAdded in version 3.14.\nPython can be initialized with Py_InitializeFromInitConfig()\n.\nThe Py_RunMain()\nfunction can be used to write a customized Python\nprogram.\nSee also Initialization, Finalization, and Threads.\nSee also\nPEP 741 \u201cPython Configuration C API\u201d.\nExample\u00b6\nExample of customized Python always running with the Python Development\nMode enabled; return -1\non error:\nint init_python(void)\n{\nPyInitConfig *config = PyInitConfig_Create();\nif (config == NULL) {\nprintf(\"PYTHON INIT ERROR: memory allocation failed\\n\");\nreturn -1;\n}\n// Enable the Python Development Mode\nif (PyInitConfig_SetInt(config, 
\"dev_mode\", 1) < 0) {\ngoto error;\n}\n// Initialize Python with the configuration\nif (Py_InitializeFromInitConfig(config) < 0) {\ngoto error;\n}\nPyInitConfig_Free(config);\nreturn 0;\nerror:\n{\n// Display the error message.\n//\n// This uncommon braces style is used, because you cannot make\n// goto targets point to variable declarations.\nconst char *err_msg;\n(void)PyInitConfig_GetError(config, &err_msg);\nprintf(\"PYTHON INIT ERROR: %s\\n\", err_msg);\nPyInitConfig_Free(config);\nreturn -1;\n}\n}\nCreate Config\u00b6\n-\nstruct PyInitConfig\u00b6\nOpaque structure to configure the Python initialization.\n-\nPyInitConfig *PyInitConfig_Create(void)\u00b6\nCreate a new initialization configuration using Isolated Configuration default values.\nIt must be freed by\nPyInitConfig_Free()\n.Return\nNULL\non memory allocation failure.\n-\nvoid PyInitConfig_Free(PyInitConfig *config)\u00b6\nFree memory of the initialization configuration config.\nIf config is\nNULL\n, no operation is performed.\nError Handling\u00b6\n-\nint PyInitConfig_GetError(PyInitConfig *config, const char **err_msg)\u00b6\nGet the config error message.\nSet *err_msg and return\n1\nif an error is set.Set *err_msg to\nNULL\nand return0\notherwise.\nAn error message is a UTF-8 encoded string.\nIf config has an exit code, format the exit code as an error message.\nThe error message remains valid until another\nPyInitConfig\nfunction is called with config. 
The caller doesn\u2019t have to free the error message.\n-\nint PyInitConfig_GetExitCode(PyInitConfig *config, int *exitcode)\u00b6\nGet the config exit code.\nSet *exitcode and return\n1\nif config has an exit code set.Return\n0\nif config has no exit code set.\nOnly the\nPy_InitializeFromInitConfig()\nfunction can set an exit code if the parse_argv\noption is non-zero.An exit code can be set when parsing the command line failed (exit code\n2\n) or when a command line option asks to display the command line help (exit code\n0\n).\nGet Options\u00b6\nThe configuration option name parameter must be a non-NULL null-terminated UTF-8 encoded string. See Configuration Options.\n-\nint PyInitConfig_HasOption(PyInitConfig *config, const char *name)\u00b6\nTest if the configuration has an option called name.\nReturn\n1\nif the option exists, or return\n0\notherwise.\n-\nint PyInitConfig_GetInt(PyInitConfig *config, const char *name, int64_t *value)\u00b6\nGet an integer configuration option.\nSet *value, and return\n0\non success.Set an error in config and return\n-1\non error.\n-\nint PyInitConfig_GetStr(PyInitConfig *config, const char *name, char **value)\u00b6\nGet a string configuration option as a null-terminated UTF-8 encoded string.\nSet *value, and return\n0\non success.Set an error in config and return\n-1\non error.\n*value can be set to\nNULL\nif the option is an optional string and the option is unset.On success, the string must be released with\nfree(value)\nif it\u2019s not\nNULL\n.\n-\nint PyInitConfig_GetStrList(PyInitConfig *config, const char *name, size_t *length, char ***items)\u00b6\nGet a string list configuration option as an array of null-terminated UTF-8 encoded strings.\nSet *length and *items, and return\n0\non success.Set an error in config and return\n-1\non error.\nOn success, the string list must be released with\nPyInitConfig_FreeStrList(length, items)\n.\n-\nvoid PyInitConfig_FreeStrList(size_t length, char **items)\u00b6\nFree memory of a string 
list created by\nPyInitConfig_GetStrList()\n.\nSet Options\u00b6\nThe configuration option name parameter must be a non-NULL null-terminated UTF-8 encoded string. See Configuration Options.\nSome configuration options have side effects on other options. This logic is\nonly implemented when Py_InitializeFromInitConfig()\nis called, not by the\n\u201cSet\u201d functions below. For example, setting dev_mode\nto 1\ndoes not set\nfaulthandler\nto 1\n.\n-\nint PyInitConfig_SetInt(PyInitConfig *config, const char *name, int64_t value)\u00b6\nSet an integer configuration option.\nReturn\n0\non success.Set an error in config and return\n-1\non error.\n-\nint PyInitConfig_SetStr(PyInitConfig *config, const char *name, const char *value)\u00b6\nSet a string configuration option from a null-terminated UTF-8 encoded string. The string is copied.\nReturn\n0\non success.Set an error in config and return\n-1\non error.\n-\nint PyInitConfig_SetStrList(PyInitConfig *config, const char *name, size_t length, char *const *items)\u00b6\nSet a string list configuration option from an array of null-terminated UTF-8 encoded strings. 
The string list is copied.\nReturn\n0\non success.Set an error in config and return\n-1\non error.\nModule\u00b6\n-\nint PyInitConfig_AddModule(PyInitConfig *config, const char *name, PyObject *(*initfunc)(void))\u00b6\nAdd a built-in extension module to the table of built-in modules.\nThe new module can be imported by the name name, and uses the function initfunc as the initialization function called on the first attempted import.\nReturn\n0\non success.Set an error in config and return\n-1\non error.\nIf Python is initialized multiple times,\nPyInitConfig_AddModule()\nmust be called at each Python initialization.Similar to the\nPyImport_AppendInittab()\nfunction.\nInitialize Python\u00b6\n-\nint Py_InitializeFromInitConfig(PyInitConfig *config)\u00b6\nInitialize Python from the initialization configuration.\nReturn\n0\non success.Set an error in config and return\n-1\non error.Set an exit code in config and return\n-1\nif Python wants to exit.\nSee\nPyInitConfig_GetExitCode()\nfor the exit code case.\nConfiguration Options\u00b6\nOption |\nPyConfig/PyPreConfig member |\nType |\nVisibility |\n|---|---|---|---|\n[table rows not recovered: each option maps to a PyConfig/PyPreConfig member with a type and a Public or Read-only visibility]\nVisibility:\nPublic: Can be retrieved by\nPyConfig_Get()\nand set by PyConfig_Set()\n.Read-only: Can be retrieved by\nPyConfig_Get()\n, but cannot be set by PyConfig_Set()\n.\nRuntime Python configuration API\u00b6\nAt runtime, it\u2019s possible to get and set configuration options using\nPyConfig_Get()\nand PyConfig_Set()\nfunctions.\nThe configuration option name parameter must be a non-NULL null-terminated UTF-8 encoded string. See Configuration Options.\nSome options are read from the sys\nattributes. For example, the option\n\"argv\"\nis read from sys.argv\n.\n-\nPyObject *PyConfig_Get(const char *name)\u00b6\nGet the current runtime value of a configuration option as a Python object.\nReturn a new reference on success.\nSet an exception and return\nNULL\non error.\nThe object type depends on the configuration option. It can be:\nbool\nint\nstr\nlist[str]\ndict[str, str]\nThe caller must have an attached thread state. 
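PyConfig_Get() itself is C API, but the options it exposes mirror state that is also visible from Python, such as sys.argv and sys.flags. A minimal pure-Python sketch of that correspondence, using a child interpreter rather than the C API (the printed values, not PyConfig_Get() calls, are what is being checked here):

```python
import subprocess
import sys

# The "argv" option mirrors sys.argv, and "dev_mode" mirrors
# sys.flags.dev_mode; -X dev turns the latter on in a child interpreter.
out = subprocess.run(
    [sys.executable, "-X", "dev", "-c",
     "import sys; print(sys.flags.dev_mode, sys.argv[1])", "spam"],
    capture_output=True, text=True, check=True,
).stdout.split()
assert out == ["True", "spam"]
```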
The function cannot be called before Python initialization nor after Python finalization.\nAdded in version 3.14.\n-\nint PyConfig_GetInt(const char *name, int *value)\u00b6\nSimilar to\nPyConfig_Get()\n, but get the value as a C int.Return\n0\non success.Set an exception and return\n-1\non error.\nAdded in version 3.14.\n-\nPyObject *PyConfig_Names(void)\u00b6\nGet all configuration option names as a\nfrozenset\n.Return a new reference on success.\nSet an exception and return\nNULL\non error.\nThe caller must have an attached thread state. The function cannot be called before Python initialization nor after Python finalization.\nAdded in version 3.14.\n-\nint PyConfig_Set(const char *name, PyObject *value)\u00b6\nSet the current runtime value of a configuration option.\nRaise a\nValueError\nif there is no option name.Raise a\nValueError\nif value is an invalid value.Raise a\nValueError\nif the option is read-only (cannot be set).Raise a\nTypeError\nif value has not the proper type.\nThe caller must have an attached thread state. The function cannot be called before Python initialization nor after Python finalization.\nRaises an auditing event\ncpython.PyConfig_Set\nwith argumentsname\n,value\n.Added in version 3.14.\nPyConfig C API\u00b6\nAdded in version 3.8.\nPython can be initialized with Py_InitializeFromConfig()\nand the\nPyConfig\nstructure. It can be preinitialized with\nPy_PreInitialize()\nand the PyPreConfig\nstructure.\nThere are two kinds of configuration:\nThe Python Configuration can be used to build a customized Python which behaves as the regular Python. For example, environment variables and command line arguments are used to configure Python.\nThe Isolated Configuration can be used to embed Python into an application. It isolates Python from the system. 
For example, environment variables are ignored, the LC_CTYPE locale is left unchanged and no signal handler is registered.\nThe Py_RunMain()\nfunction can be used to write a customized Python\nprogram.\nSee also Initialization, Finalization, and Threads.\nSee also\nPEP 587 \u201cPython Initialization Configuration\u201d.\nExample\u00b6\nExample of customized Python always running in isolated mode:\nint main(int argc, char **argv)\n{\nPyStatus status;\nPyConfig config;\nPyConfig_InitPythonConfig(&config);\nconfig.isolated = 1;\n/* Decode command line arguments.\nImplicitly preinitialize Python (in isolated mode). */\nstatus = PyConfig_SetBytesArgv(&config, argc, argv);\nif (PyStatus_Exception(status)) {\ngoto exception;\n}\nstatus = Py_InitializeFromConfig(&config);\nif (PyStatus_Exception(status)) {\ngoto exception;\n}\nPyConfig_Clear(&config);\nreturn Py_RunMain();\nexception:\nPyConfig_Clear(&config);\nif (PyStatus_IsExit(status)) {\nreturn status.exitcode;\n}\n/* Display the error message and exit the process with\nnon-zero exit code */\nPy_ExitStatusException(status);\n}\nPyWideStringList\u00b6\n-\ntype PyWideStringList\u00b6\nList of\nwchar_t*\nstrings.If length is non-zero, items must be non-\nNULL\nand all strings must be non-NULL\n.Methods:\n-\nPyStatus PyWideStringList_Append(PyWideStringList *list, const wchar_t *item)\u00b6\nAppend item to list.\nPython must be preinitialized to call this function.\n-\nPyStatus PyWideStringList_Insert(PyWideStringList *list, Py_ssize_t index, const wchar_t *item)\u00b6\nInsert item into list at index.\nIf index is greater than or equal to list length, append item to list.\nindex must be greater than or equal to\n0\n.Python must be preinitialized to call this function.\nStructure fields:\n-\nPy_ssize_t length\u00b6\nList length.\n-\nwchar_t **items\u00b6\nList items.\n-\nPyStatus PyWideStringList_Append(PyWideStringList *list, const wchar_t *item)\u00b6\nPyStatus\u00b6\n-\ntype PyStatus\u00b6\nStructure to store an 
initialization function status: success, error or exit.\nFor an error, it can store the C function name which created the error.\nStructure fields:\n-\nint exitcode\u00b6\nExit code. Argument passed to\nexit()\n.\n-\nconst char *err_msg\u00b6\nError message.\n-\nconst char *func\u00b6\nName of the function which created an error, can be\nNULL\n.\nFunctions to create a status:\n-\nPyStatus PyStatus_Error(const char *err_msg)\u00b6\nInitialization error with a message.\nerr_msg must not be\nNULL\n.\nFunctions to handle a status:\n-\nint PyStatus_Exception(PyStatus status)\u00b6\nIs the status an error or an exit? If true, the exception must be handled; by calling\nPy_ExitStatusException()\nfor example.\n-\nint exitcode\u00b6\nNote\nInternally, Python uses macros which set PyStatus.func\n,\nwhereas functions to create a status set func\nto NULL\n.\nExample:\nPyStatus alloc(void **ptr, size_t size)\n{\n*ptr = PyMem_RawMalloc(size);\nif (*ptr == NULL) {\nreturn PyStatus_NoMemory();\n}\nreturn PyStatus_Ok();\n}\nint main(int argc, char **argv)\n{\nvoid *ptr;\nPyStatus status = alloc(&ptr, 16);\nif (PyStatus_Exception(status)) {\nPy_ExitStatusException(status);\n}\nPyMem_Free(ptr);\nreturn 0;\n}\nPyPreConfig\u00b6\n-\ntype PyPreConfig\u00b6\nStructure used to preinitialize Python.\nFunction to initialize a preconfiguration:\n-\nvoid PyPreConfig_InitPythonConfig(PyPreConfig *preconfig)\u00b6\nInitialize the preconfiguration with Python Configuration.\n-\nvoid PyPreConfig_InitIsolatedConfig(PyPreConfig *preconfig)\u00b6\nInitialize the preconfiguration with Isolated Configuration.\nStructure fields:\n-\nint allocator\u00b6\nName of the Python memory allocators:\nPYMEM_ALLOCATOR_NOT_SET\n(0\n): don\u2019t change memory allocators (use defaults).PYMEM_ALLOCATOR_DEFAULT\n(1\n): default memory allocators.PYMEM_ALLOCATOR_DEBUG\n(2\n): default memory allocators with debug hooks.PYMEM_ALLOCATOR_MALLOC\n(3\n): usemalloc()\nof the C library.PYMEM_ALLOCATOR_MALLOC_DEBUG\n(4\n): force 
usage of malloc()\nwith debug hooks.PYMEM_ALLOCATOR_PYMALLOC\n(5\n): Python pymalloc memory allocator.PYMEM_ALLOCATOR_PYMALLOC_DEBUG\n(6\n): Python pymalloc memory allocator with debug hooks.PYMEM_ALLOCATOR_MIMALLOC\n(7\n): use mimalloc\n, a fast malloc replacement.PYMEM_ALLOCATOR_MIMALLOC_DEBUG\n(8\n): use mimalloc\n, a fast malloc replacement with debug hooks.\nPYMEM_ALLOCATOR_PYMALLOC\nand PYMEM_ALLOCATOR_PYMALLOC_DEBUG\nare not supported if Python is configured using --without-pymalloc\n.PYMEM_ALLOCATOR_MIMALLOC\nand PYMEM_ALLOCATOR_MIMALLOC_DEBUG\nare not supported if Python is configured using --without-mimalloc\nor if the underlying atomic support isn\u2019t available.See Memory Management.\nDefault:\nPYMEM_ALLOCATOR_NOT_SET\n.\n-\nint configure_locale\u00b6\nSet the LC_CTYPE locale to the user preferred locale.\nIf equals to\n0\n, set coerce_c_locale\nand coerce_c_locale_warn\nmembers to\n0\n.See the locale encoding.\nDefault:\n1\nin Python config,\n0\nin isolated config.\n-\nint coerce_c_locale\u00b6\nIf equals to\n2\n, coerce the C locale.If equals to\n1\n, read the LC_CTYPE locale to decide if it should be coerced.See the locale encoding.\nDefault:\n-1\nin Python config,\n0\nin isolated config.\n-\nint coerce_c_locale_warn\u00b6\nIf non-zero, emit a warning if the C locale is coerced.\nDefault:\n-1\nin Python config,\n0\nin isolated config.\n-\nint dev_mode\u00b6\nPython Development Mode: see\nPyConfig.dev_mode\n.Default:\n-1\nin Python mode,\n0\nin isolated mode.\n-\nint isolated\u00b6\nIsolated mode: see\nPyConfig.isolated\n.Default:\n0\nin Python mode,\n1\nin isolated mode.\n-\nint legacy_windows_fs_encoding\u00b6\nIf non-zero:\nSet\nPyPreConfig.utf8_mode\nto\n0\n,Set\nPyConfig.filesystem_encoding\nto\n\"mbcs\"\n,Set\nPyConfig.filesystem_errors\nto\n\"replace\"\n.\nInitialized from the\nPYTHONLEGACYWINDOWSFSENCODING\nenvironment variable value.Only available on Windows. The\n#ifdef MS_WINDOWS\nmacro can be used for Windows specific code.Default:\n0\n.\n-\nint parse_argv\u00b6\nIf 
non-zero,\nPy_PreInitializeFromArgs()\nandPy_PreInitializeFromBytesArgs()\nparse theirargv\nargument the same way the regular Python parses command line arguments: see Command Line Arguments.Default:\n1\nin Python config,0\nin isolated config.\n-\nint use_environment\u00b6\nUse environment variables? See\nPyConfig.use_environment\n.Default:\n1\nin Python config and0\nin isolated config.\n-\nint utf8_mode\u00b6\nIf non-zero, enable the Python UTF-8 Mode.\nSet to\n0\nor1\nby the-X utf8\ncommand line option and thePYTHONUTF8\nenvironment variable.Also set to\n1\nif theLC_CTYPE\nlocale isC\norPOSIX\n.Default:\n-1\nin Python config and0\nin isolated config.\n-\nvoid PyPreConfig_InitPythonConfig(PyPreConfig *preconfig)\u00b6\nPreinitialize Python with PyPreConfig\u00b6\nThe preinitialization of Python:\nSet the Python memory allocators (\nPyPreConfig.allocator\n)Configure the LC_CTYPE locale (locale encoding)\nSet the Python UTF-8 Mode (\nPyPreConfig.utf8_mode\n)\nThe current preconfiguration (PyPreConfig\ntype) is stored in\n_PyRuntime.preconfig\n.\nFunctions to preinitialize Python:\n-\nPyStatus Py_PreInitialize(const PyPreConfig *preconfig)\u00b6\nPreinitialize Python from preconfig preconfiguration.\npreconfig must not be\nNULL\n.\n-\nPyStatus Py_PreInitializeFromBytesArgs(const PyPreConfig *preconfig, int argc, char *const *argv)\u00b6\nPreinitialize Python from preconfig preconfiguration.\nParse argv command line arguments (bytes strings) if\nparse_argv\nof preconfig is non-zero.preconfig must not be\nNULL\n.\n-\nPyStatus Py_PreInitializeFromArgs(const PyPreConfig *preconfig, int argc, wchar_t *const *argv)\u00b6\nPreinitialize Python from preconfig preconfiguration.\nParse argv command line arguments (wide strings) if\nparse_argv\nof preconfig is non-zero.preconfig must not be\nNULL\n.\nThe caller is responsible to handle exceptions (error or exit) using\nPyStatus_Exception()\nand Py_ExitStatusException()\n.\nFor Python 
Configuration\n(PyPreConfig_InitPythonConfig()\n), if Python is initialized with\ncommand line arguments, the command line arguments must also be passed to\npreinitialize Python, since they have an effect on the pre-configuration\nlike encodings. For example, the -X utf8\ncommand line option\nenables the Python UTF-8 Mode.\nPyMem_SetAllocator()\ncan be called after Py_PreInitialize()\nand\nbefore Py_InitializeFromConfig()\nto install a custom memory allocator.\nIt can be called before Py_PreInitialize()\nif\nPyPreConfig.allocator\nis set to PYMEM_ALLOCATOR_NOT_SET\n.\nPython memory allocation functions like PyMem_RawMalloc()\nmust not be\nused before the Python preinitialization, whereas calling directly malloc()\nand free()\nis always safe. Py_DecodeLocale()\nmust not be called\nbefore the Python preinitialization.\nExample using the preinitialization to enable the Python UTF-8 Mode:\nPyStatus status;\nPyPreConfig preconfig;\nPyPreConfig_InitPythonConfig(&preconfig);\npreconfig.utf8_mode = 1;\nstatus = Py_PreInitialize(&preconfig);\nif (PyStatus_Exception(status)) {\nPy_ExitStatusException(status);\n}\n/* at this point, Python speaks UTF-8 */\nPy_Initialize();\n/* ... use Python API here ... 
*/\nPy_Finalize();\nPyConfig\u00b6\n-\ntype PyConfig\u00b6\nStructure containing most parameters to configure Python.\nWhen done, the\nPyConfig_Clear()\nfunction must be used to release the configuration memory.Structure methods:\n-\nvoid PyConfig_InitPythonConfig(PyConfig *config)\u00b6\nInitialize configuration with the Python Configuration.\n-\nvoid PyConfig_InitIsolatedConfig(PyConfig *config)\u00b6\nInitialize configuration with the Isolated Configuration.\n-\nPyStatus PyConfig_SetString(PyConfig *config, wchar_t *const *config_str, const wchar_t *str)\u00b6\nCopy the wide character string str into\n*config_str\n.Preinitialize Python if needed.\n-\nPyStatus PyConfig_SetBytesString(PyConfig *config, wchar_t *const *config_str, const char *str)\u00b6\nDecode str using\nPy_DecodeLocale()\nand set the result into*config_str\n.Preinitialize Python if needed.\n-\nPyStatus PyConfig_SetArgv(PyConfig *config, int argc, wchar_t *const *argv)\u00b6\nSet command line arguments (\nargv\nmember of config) from the argv list of wide character strings.Preinitialize Python if needed.\n-\nPyStatus PyConfig_SetBytesArgv(PyConfig *config, int argc, char *const *argv)\u00b6\nSet command line arguments (\nargv\nmember of config) from the argv list of bytes strings. Decode bytes usingPy_DecodeLocale()\n.Preinitialize Python if needed.\n-\nPyStatus PyConfig_SetWideStringList(PyConfig *config, PyWideStringList *list, Py_ssize_t length, wchar_t **items)\u00b6\nSet the list of wide strings list to length and items.\nPreinitialize Python if needed.\n-\nPyStatus PyConfig_Read(PyConfig *config)\u00b6\nRead all Python configuration.\nFields which are already initialized are left unchanged.\nFields for path configuration are no longer calculated or modified when calling this function, as of Python 3.11.\nThe\nPyConfig_Read()\nfunction only parsesPyConfig.argv\narguments once:PyConfig.parse_argv\nis set to2\nafter arguments are parsed. 
Since Python arguments are stripped fromPyConfig.argv\n, parsing arguments twice would parse the application options as Python options.Preinitialize Python if needed.\nChanged in version 3.10: The\nPyConfig.argv\narguments are now only parsed once,PyConfig.parse_argv\nis set to2\nafter arguments are parsed, and arguments are only parsed ifPyConfig.parse_argv\nequals1\n.Changed in version 3.11:\nPyConfig_Read()\nno longer calculates all paths, and so fields listed under Python Path Configuration may no longer be updated untilPy_InitializeFromConfig()\nis called.\nMost\nPyConfig\nmethods preinitialize Python if needed. In that case, the Python preinitialization configuration (PyPreConfig\n) is based on thePyConfig\n. If configuration fields which are in common withPyPreConfig\nare tuned, they must be set before calling aPyConfig\nmethod:Moreover, if\nPyConfig_SetArgv()\norPyConfig_SetBytesArgv()\nis used, this method must be called before other methods, since the preinitialization configuration depends on command line arguments (ifparse_argv\nis non-zero).The caller of these methods is responsible to handle exceptions (error or exit) using\nPyStatus_Exception()\nandPy_ExitStatusException()\n.Structure fields:\n-\nPyWideStringList argv\u00b6\nSet\nsys.argv\ncommand line arguments based onargv\n. These parameters are similar to those passed to the program\u2019smain()\nfunction with the difference that the first entry should refer to the script file to be executed rather than the executable hosting the Python interpreter. 
If there isn\u2019t a script that will be run, the first entry inargv\ncan be an empty string.Set\nparse_argv\nto1\nto parseargv\nthe same way the regular Python parses Python command line arguments and then to strip Python arguments fromargv\n.If\nargv\nis empty, an empty string is added to ensure thatsys.argv\nalways exists and is never empty.Default:\nNULL\n.See also the\norig_argv\nmember.\n-\nint safe_path\u00b6\nIf equals to zero,\nPy_RunMain()\nprepends a potentially unsafe path tosys.path\nat startup:If\nargv[0]\nis equal toL\"-m\"\n(python -m module\n), prepend the current working directory.If running a script (\npython script.py\n), prepend the script\u2019s directory. If it\u2019s a symbolic link, resolve symbolic links.Otherwise (\npython -c code\nandpython\n), prepend an empty string, which means the current working directory.\nSet to\n1\nby the-P\ncommand line option and thePYTHONSAFEPATH\nenvironment variable.Default:\n0\nin Python config,1\nin isolated config.Added in version 3.11.\n-\nwchar_t *base_exec_prefix\u00b6\n-\nDefault:\nNULL\n.Part of the Python Path Configuration output.\nSee also\nPyConfig.exec_prefix\n.\n-\nwchar_t *base_executable\u00b6\nPython base executable:\nsys._base_executable\n.Set by the\n__PYVENV_LAUNCHER__\nenvironment variable.Set from\nPyConfig.executable\nifNULL\n.Default:\nNULL\n.Part of the Python Path Configuration output.\nSee also\nPyConfig.executable\n.\n-\nwchar_t *base_prefix\u00b6\n-\nDefault:\nNULL\n.Part of the Python Path Configuration output.\nSee also\nPyConfig.prefix\n.\n-\nint buffered_stdio\u00b6\nIf equals to\n0\nandconfigure_c_stdio\nis non-zero, disable buffering on the C streams stdout and stderr.Set to\n0\nby the-u\ncommand line option and thePYTHONUNBUFFERED\nenvironment variable.stdin is always opened in buffered mode.\nDefault:\n1\n.\n-\nint bytes_warning\u00b6\nIf equals to\n1\n, issue a warning when comparingbytes\norbytearray\nwithstr\n, or comparingbytes\nwithint\n.If equal or greater to\n2\n, 
raise aBytesWarning\nexception in these cases.Incremented by the\n-b\ncommand line option.Default:\n0\n.\n-\nint warn_default_encoding\u00b6\nIf non-zero, emit a\nEncodingWarning\nwarning whenio.TextIOWrapper\nuses its default encoding. See Opt-in EncodingWarning for details.Default:\n0\n.Added in version 3.10.\n-\nint code_debug_ranges\u00b6\nIf equals to\n0\n, disables the inclusion of the end line and column mappings in code objects. Also disables traceback printing carets to specific error locations.Set to\n0\nby thePYTHONNODEBUGRANGES\nenvironment variable and by the-X no_debug_ranges\ncommand line option.Default:\n1\n.Added in version 3.11.\n-\nwchar_t *check_hash_pycs_mode\u00b6\nControl the validation behavior of hash-based\n.pyc\nfiles: value of the--check-hash-based-pycs\ncommand line option.Valid values:\nL\"always\"\n: Hash the source file for invalidation regardless of value of the \u2018check_source\u2019 flag.L\"never\"\n: Assume that hash-based pycs always are valid.L\"default\"\n: The \u2018check_source\u2019 flag in hash-based pycs determines invalidation.\nDefault:\nL\"default\"\n.See also PEP 552 \u201cDeterministic pycs\u201d.\n-\nint configure_c_stdio\u00b6\nIf non-zero, configure C standard streams:\nOn Windows, set the binary mode (\nO_BINARY\n) on stdin, stdout and stderr.If\nbuffered_stdio\nequals zero, disable buffering of stdin, stdout and stderr streams.If\ninteractive\nis non-zero, enable stream buffering on stdin and stdout (only stdout on Windows).\nDefault:\n1\nin Python config,0\nin isolated config.\n-\nint dev_mode\u00b6\nIf non-zero, enable the Python Development Mode.\nSet to\n1\nby the-X dev\noption and thePYTHONDEVMODE\nenvironment variable.Default:\n-1\nin Python mode,0\nin isolated mode.\n-\nint dump_refs\u00b6\nDump Python references?\nIf non-zero, dump all objects which are still alive at exit.\nSet to\n1\nby thePYTHONDUMPREFS\nenvironment variable.Needs a special build of Python with the\nPy_TRACE_REFS\nmacro defined: see 
the configure --with-trace-refs option\n. Default:\n0\n.\n-\nwchar_t *dump_refs_file\u00b6\nFilename where to dump Python references.\nSet by the\nPYTHONDUMPREFSFILE\nenvironment variable. Default:\nNULL\n. Added in version 3.11.\n-\nwchar_t *exec_prefix\u00b6\nThe site-specific directory prefix where the platform-dependent Python files are installed:\nsys.exec_prefix\n. Default:\nNULL\n. Part of the Python Path Configuration output.\nSee also\nPyConfig.base_exec_prefix\n.\n-\nwchar_t *executable\u00b6\nThe absolute path of the executable binary for the Python interpreter:\nsys.executable\n. Default:\nNULL\n. Part of the Python Path Configuration output.\nSee also\nPyConfig.base_executable\n.\n-\nint faulthandler\u00b6\nEnable faulthandler?\nIf non-zero, call\nfaulthandler.enable()\nat startup. Set to\n1\nby -X faulthandler\nand the PYTHONFAULTHANDLER\nenvironment variable. Default:\n-1\nin Python mode, 0\nin isolated mode.\n-\nwchar_t *filesystem_encoding\u00b6\nFilesystem encoding:\nsys.getfilesystemencoding()\n. On macOS, Android and VxWorks: use\n\"utf-8\"\nby default. On Windows: use\n\"utf-8\"\nby default, or \"mbcs\"\nif legacy_windows_fs_encoding\nof PyPreConfig\nis non-zero. Default encoding on other platforms:\n\"utf-8\"\nif PyPreConfig.utf8_mode\nis non-zero. \"ascii\"\nif Python detects that nl_langinfo(CODESET)\nannounces the ASCII encoding, whereas the mbstowcs()\nfunction decodes from a different encoding (usually Latin1). \"utf-8\"\nif nl_langinfo(CODESET)\nreturns an empty string. Otherwise, use the locale encoding:\nnl_langinfo(CODESET)\nresult.\nAt Python startup, the encoding name is normalized to the Python codec name. 
For example,\n\"ANSI_X3.4-1968\"\nis replaced with\"ascii\"\n.See also the\nfilesystem_errors\nmember.\n-\nwchar_t *filesystem_errors\u00b6\nFilesystem error handler:\nsys.getfilesystemencodeerrors()\n.On Windows: use\n\"surrogatepass\"\nby default, or\"replace\"\niflegacy_windows_fs_encoding\nofPyPreConfig\nis non-zero.On other platforms: use\n\"surrogateescape\"\nby default.Supported error handlers:\n\"strict\"\n\"surrogateescape\"\n\"surrogatepass\"\n(only supported with the UTF-8 encoding)\nSee also the\nfilesystem_encoding\nmember.\n-\nint use_frozen_modules\u00b6\nIf non-zero, use frozen modules.\nSet by the\nPYTHON_FROZEN_MODULES\nenvironment variable.Default:\n1\nin a release build, or0\nin a debug build.\n-\nunsigned long hash_seed\u00b6\n-\nint use_hash_seed\u00b6\nRandomized hash function seed.\nIf\nuse_hash_seed\nis zero, a seed is chosen randomly at Python startup, andhash_seed\nis ignored.Set by the\nPYTHONHASHSEED\nenvironment variable.Default use_hash_seed value:\n-1\nin Python mode,0\nin isolated mode.\n-\nwchar_t *home\u00b6\nSet the default Python \u201chome\u201d directory, that is, the location of the standard Python libraries (see\nPYTHONHOME\n).Set by the\nPYTHONHOME\nenvironment variable.Default:\nNULL\n.Part of the Python Path Configuration input.\n-\nint import_time\u00b6\nIf\n1\n, profile import time. If2\n, include additional output that indicates when an imported module has already been loaded.Set by the\n-X importtime\noption and thePYTHONPROFILEIMPORTTIME\nenvironment variable.Default:\n0\n.Changed in version 3.14: Added support for\nimport_time = 2\n-\nint inspect\u00b6\nEnter interactive mode after executing a script or a command.\nIf greater than\n0\n, enable inspect: when a script is passed as first argument or the -c option is used, enter interactive mode after executing the script or the command, even whensys.stdin\ndoes not appear to be a terminal.Incremented by the\n-i\ncommand line option. 
Set to1\nif thePYTHONINSPECT\nenvironment variable is non-empty.Default:\n0\n.\n-\nint install_signal_handlers\u00b6\nInstall Python signal handlers?\nDefault:\n1\nin Python mode,0\nin isolated mode.\n-\nint interactive\u00b6\nIf greater than\n0\n, enable the interactive mode (REPL).Incremented by the\n-i\ncommand line option.Default:\n0\n.\n-\nint int_max_str_digits\u00b6\nConfigures the integer string conversion length limitation. An initial value of\n-1\nmeans the value will be taken from the command line or environment or otherwise default to 4300 (sys.int_info.default_max_str_digits\n). A value of0\ndisables the limitation. Values greater than zero but less than 640 (sys.int_info.str_digits_check_threshold\n) are unsupported and will produce an error.Configured by the\n-X int_max_str_digits\ncommand line flag or thePYTHONINTMAXSTRDIGITS\nenvironment variable.Default:\n-1\nin Python mode. 4300 (sys.int_info.default_max_str_digits\n) in isolated mode.Added in version 3.12.\n-\nint cpu_count\u00b6\nIf the value of\ncpu_count\nis not-1\nthen it will override the return values ofos.cpu_count()\n,os.process_cpu_count()\n, andmultiprocessing.cpu_count()\n.Configured by the\n-X cpu_count=n|default\ncommand line flag or thePYTHON_CPU_COUNT\nenvironment variable.Default:\n-1\n.Added in version 3.13.\n-\nint isolated\u00b6\nIf greater than\n0\n, enable isolated mode:Set\nsafe_path\nto1\n: don\u2019t prepend a potentially unsafe path tosys.path\nat Python startup, such as the current directory, the script\u2019s directory or an empty string.Set\nuse_environment\nto0\n: ignorePYTHON\nenvironment variables.Set\nuser_site_directory\nto0\n: don\u2019t add the user site directory tosys.path\n.Python REPL doesn\u2019t import\nreadline\nnor enable default readline configuration on interactive prompts.\nSet to\n1\nby the-I\ncommand line option.Default:\n0\nin Python mode,1\nin isolated mode.See also the Isolated Configuration and\nPyPreConfig.isolated\n.\n-\nint 
legacy_windows_stdio\u00b6\nIf non-zero, use\nio.FileIO\ninstead ofio._WindowsConsoleIO\nforsys.stdin\n,sys.stdout\nandsys.stderr\n.Set to\n1\nif thePYTHONLEGACYWINDOWSSTDIO\nenvironment variable is set to a non-empty string.Only available on Windows.\n#ifdef MS_WINDOWS\nmacro can be used for Windows specific code.Default:\n0\n.See also the PEP 528 (Change Windows console encoding to UTF-8).\n-\nint malloc_stats\u00b6\nIf non-zero, dump statistics on Python pymalloc memory allocator at exit.\nSet to\n1\nby thePYTHONMALLOCSTATS\nenvironment variable.The option is ignored if Python is\nconfigured using the --without-pymalloc option\n.Default:\n0\n.\n-\nwchar_t *platlibdir\u00b6\nPlatform library directory name:\nsys.platlibdir\n.Set by the\nPYTHONPLATLIBDIR\nenvironment variable.Default: value of the\nPLATLIBDIR\nmacro which is set by theconfigure --with-platlibdir option\n(default:\"lib\"\n, or\"DLLs\"\non Windows).Part of the Python Path Configuration input.\nAdded in version 3.9.\nChanged in version 3.11: This macro is now used on Windows to locate the standard library extension modules, typically under\nDLLs\n. 
However, for compatibility, note that this value is ignored for any non-standard layouts, including in-tree builds and virtual environments.\n-\nwchar_t *pythonpath_env\u00b6\nModule search paths (\nsys.path\n) as a string separated by DELIM\n(os.pathsep\n). Set by the\nPYTHONPATH\nenvironment variable. Default:\nNULL\n. Part of the Python Path Configuration input.\n-\nPyWideStringList module_search_paths\u00b6\n-\nint module_search_paths_set\u00b6\nModule search paths:\nsys.path\n. If\nmodule_search_paths_set\nis equal to 0\n, Py_InitializeFromConfig()\nwill replace module_search_paths\nand set module_search_paths_set\nto 1\n. Default: empty list (\nmodule_search_paths\n) and 0\n(module_search_paths_set\n). Part of the Python Path Configuration output.\n-\nint optimization_level\u00b6\nCompilation optimization level:\n0\n: Peephole optimizer, set __debug__\nto True\n. 1\n: Level 0, remove assertions, set __debug__\nto False\n. 2\n: Level 1, strip docstrings.\nIncremented by the\n-O\ncommand line option. Set to the PYTHONOPTIMIZE\nenvironment variable value. Default:\n0\n.\n-\nPyWideStringList orig_argv\u00b6\nThe list of the original command line arguments passed to the Python executable:\nsys.orig_argv\n. If the\norig_argv\nlist is empty and argv\nis not a list only containing an empty string, PyConfig_Read()\ncopies argv\ninto orig_argv\nbefore modifying argv\n(if parse_argv\nis non-zero). See also the\nargv\nmember and the Py_GetArgcArgv()\nfunction. Default: empty list.\nAdded in version 3.10.\n-\nint parse_argv\u00b6\nParse command line arguments?\nIf equal to\n1\n, parse argv\nthe same way the regular Python parses command line arguments, and strip Python arguments from argv\n. The\nPyConfig_Read()\nfunction only parses PyConfig.argv\narguments once: PyConfig.parse_argv\nis set to 2\nafter arguments are parsed. 
Since Python arguments are stripped from PyConfig.argv\n, parsing arguments twice would parse the application options as Python options. Default:\n1\nin Python mode, 0\nin isolated mode. Changed in version 3.10: The\nPyConfig.argv\narguments are now only parsed if PyConfig.parse_argv\nequals 1\n.\n-\nint parser_debug\u00b6\nParser debug mode. If greater than\n0\n, turn on parser debugging output (for experts only, depending on compilation options). Incremented by the\n-d\ncommand line option. Set to the PYTHONDEBUG\nenvironment variable value. Needs a debug build of Python (the\nPy_DEBUG\nmacro must be defined). Default:\n0\n.\n-\nint pathconfig_warnings\u00b6\nIf non-zero, calculation of the path configuration is allowed to log warnings into\nstderr\n. If equal to 0\n, suppress these warnings. Default:\n1\nin Python mode, 0\nin isolated mode. Part of the Python Path Configuration input.\nChanged in version 3.11: Now also applies on Windows.\n-\nwchar_t *prefix\u00b6\nThe site-specific directory prefix where the platform-independent Python files are installed:\nsys.prefix\n. Default:\nNULL\n. Part of the Python Path Configuration output.\nSee also\nPyConfig.base_prefix\n.\n-\nwchar_t *program_name\u00b6\nProgram name used to initialize\nexecutable\nand in early error messages during Python initialization. On macOS, use the\nPYTHONEXECUTABLE\nenvironment variable if set. If the\nWITH_NEXT_FRAMEWORK\nmacro is defined, use the __PYVENV_LAUNCHER__\nenvironment variable if set. Use\nargv[0]\nof argv\nif available and non-empty. Otherwise, use\nL\"python\"\non Windows, or L\"python3\"\non other platforms.\nDefault:\nNULL\n. Part of the Python Path Configuration input.\n-\nwchar_t *pycache_prefix\u00b6\nDirectory where cached\n.pyc\nfiles are written: sys.pycache_prefix\n. Set by the\n-X pycache_prefix=PATH\ncommand line option and the PYTHONPYCACHEPREFIX\nenvironment variable. 
The command-line option takes precedence.If\nNULL\n,sys.pycache_prefix\nis set toNone\n.Default:\nNULL\n.\n-\nint quiet\u00b6\nQuiet mode. If greater than\n0\n, don\u2019t display the copyright and version at Python startup in interactive mode.Incremented by the\n-q\ncommand line option.Default:\n0\n.\n-\nwchar_t *run_command\u00b6\nValue of the\n-c\ncommand line option.Used by\nPy_RunMain()\n.Default:\nNULL\n.\n-\nwchar_t *run_filename\u00b6\nFilename passed on the command line: trailing command line argument without\n-c\nor-m\n. It is used by thePy_RunMain()\nfunction.For example, it is set to\nscript.py\nby thepython3 script.py arg\ncommand line.See also the\nPyConfig.skip_source_first_line\noption.Default:\nNULL\n.\n-\nwchar_t *run_module\u00b6\nValue of the\n-m\ncommand line option.Used by\nPy_RunMain()\n.Default:\nNULL\n.\n-\nwchar_t *run_presite\u00b6\npackage.module\npath to module that should be imported beforesite.py\nis run.Set by the\n-X presite=package.module\ncommand-line option and thePYTHON_PRESITE\nenvironment variable. 
The command-line option takes precedence.Needs a debug build of Python (the\nPy_DEBUG\nmacro must be defined).Default:\nNULL\n.\n-\nint show_ref_count\u00b6\nShow total reference count at exit (excluding immortal objects)?\nSet to\n1\nby-X showrefcount\ncommand line option.Needs a debug build of Python (the\nPy_REF_DEBUG\nmacro must be defined).Default:\n0\n.\n-\nint site_import\u00b6\nImport the\nsite\nmodule at startup?If equal to zero, disable the import of the module site and the site-dependent manipulations of\nsys.path\nthat it entails.Also disable these manipulations if the\nsite\nmodule is explicitly imported later (callsite.main()\nif you want them to be triggered).Set to\n0\nby the-S\ncommand line option.sys.flags.no_site\nis set to the inverted value ofsite_import\n.Default:\n1\n.\n-\nint skip_source_first_line\u00b6\nIf non-zero, skip the first line of the\nPyConfig.run_filename\nsource.It allows the usage of non-Unix forms of\n#!cmd\n. This is intended for a DOS specific hack only.Set to\n1\nby the-x\ncommand line option.Default:\n0\n.\n-\nwchar_t *stdio_encoding\u00b6\n-\nwchar_t *stdio_errors\u00b6\nEncoding and encoding errors of\nsys.stdin\n,sys.stdout\nandsys.stderr\n(butsys.stderr\nalways uses\"backslashreplace\"\nerror handler).Use the\nPYTHONIOENCODING\nenvironment variable if it is non-empty.Default encoding:\n\"UTF-8\"\nifPyPreConfig.utf8_mode\nis non-zero.Otherwise, use the locale encoding.\nDefault error handler:\nOn Windows: use\n\"surrogateescape\"\n.\"surrogateescape\"\nifPyPreConfig.utf8_mode\nis non-zero, or if the LC_CTYPE locale is \u201cC\u201d or \u201cPOSIX\u201d.\"strict\"\notherwise.\nSee also\nPyConfig.legacy_windows_stdio\n.\n-\nint tracemalloc\u00b6\nEnable tracemalloc?\nIf non-zero, call\ntracemalloc.start()\nat startup.Set by\n-X tracemalloc=N\ncommand line option and by thePYTHONTRACEMALLOC\nenvironment variable.Default:\n-1\nin Python mode,0\nin isolated mode.\n-\nint perf_profiling\u00b6\nEnable the Linux\nperf\nprofiler 
support?\nIf equal to\n1\n, enable support for the Linux perf\nprofiler. If equal to\n2\n, enable support for the Linux perf\nprofiler with DWARF JIT support. Set to\n1\nby the -X perf\ncommand-line option and the PYTHONPERFSUPPORT\nenvironment variable. Set to\n2\nby the -X perf_jit\ncommand-line option and the PYTHON_PERF_JIT_SUPPORT\nenvironment variable. Default:\n-1\n.\nSee Python support for the Linux perf profiler for more information.\nAdded in version 3.12.\n-\nwchar_t *stdlib_dir\u00b6\nDirectory of the Python standard library.\nDefault:\nNULL\n. Added in version 3.11.\n-\nint use_environment\u00b6\nIf equal to zero, ignore the environment variables.\nSet to\n0\nby the -E\ncommand line option. Default:\n1\nin Python config and 0\nin isolated config.\n-\nint use_system_logger\u00b6\nIf non-zero,\nstdout\nand stderr\nwill be redirected to the system log. Only available on macOS 10.12 and later, and on iOS.\nDefault:\n0\n(don\u2019t use the system log) on macOS; 1\non iOS (use the system log). Added in version 3.14.\n-\nint user_site_directory\u00b6\nIf non-zero, add the user site directory to\nsys.path\n. Set to\n0\nby the -s\nand -I\ncommand line options. Set to\n0\nby the PYTHONNOUSERSITE\nenvironment variable. Default:\n1\nin Python mode, 0\nin isolated mode.\n-\nint verbose\u00b6\nVerbose mode. If greater than\n0\n, print a message each time a module is imported, showing the place (filename or built-in module) from which it is loaded. If greater than or equal to\n2\n, print a message for each file that is checked for when searching for a module. 
Also provides information on module cleanup at exit. Incremented by the\n-v\ncommand line option. Set by the\nPYTHONVERBOSE\nenvironment variable value. Default:\n0\n.\n-\nPyWideStringList warnoptions\u00b6\nOptions of the\nwarnings\nmodule to build warnings filters, lowest to highest priority: sys.warnoptions\n. The\nwarnings\nmodule adds sys.warnoptions\nin the reverse order: the last PyConfig.warnoptions\nitem becomes the first item of warnings.filters\n, which is checked first (highest priority). The\n-W\ncommand line option adds its value to warnoptions\n; it can be used multiple times. The\nPYTHONWARNINGS\nenvironment variable can also be used to add warning options. Multiple options can be specified, separated by commas (,\n). Default: empty list.\n-\nint write_bytecode\u00b6\nIf equal to\n0\n, Python won\u2019t try to write .pyc\nfiles on the import of source modules. Set to\n0\nby the -B\ncommand line option and the PYTHONDONTWRITEBYTECODE\nenvironment variable. sys.dont_write_bytecode\nis initialized to the inverted value of write_bytecode\n. Default:\n1\n.\n-\nPyWideStringList xoptions\u00b6\nValues of the\n-X\ncommand line options: sys._xoptions\n. Default: empty list.\n-\nint _pystats\u00b6\nIf non-zero, write performance statistics at Python exit.\nNeeds a special build with the\nPy_STATS\nmacro: see --enable-pystats\n. Default:\n0\n.\n-\nvoid PyConfig_InitPythonConfig(PyConfig *config)\u00b6\nIf parse_argv\nis non-zero, argv\narguments are parsed the same way the regular Python parses command line\narguments, and Python arguments are stripped from\nargv\n.\nThe xoptions\noptions are parsed to set other options: see\nthe -X\ncommand line option.\nChanged in version 3.9: The show_alloc_count\nfield has been removed.\nInitialization with PyConfig\u00b6\nInitializing the interpreter from a populated configuration struct is handled\nby calling Py_InitializeFromConfig()\n.\nThe caller is responsible for handling exceptions (error or exit) using\nPyStatus_Exception()\nand 
Py_ExitStatusException()\n.\nIf PyImport_FrozenModules\n, PyImport_AppendInittab()\nor\nPyImport_ExtendInittab()\nare used, they must be set or called after\nPython preinitialization and before the Python initialization. If Python is\ninitialized multiple times, PyImport_AppendInittab()\nor\nPyImport_ExtendInittab()\nmust be called before each Python\ninitialization.\nThe current configuration (PyConfig\ntype) is stored in\nPyInterpreterState.config\n.\nExample setting the program name:\nvoid init_python(void)\n{\nPyStatus status;\nPyConfig config;\nPyConfig_InitPythonConfig(&config);\n/* Set the program name. Implicitly preinitialize Python. */\nstatus = PyConfig_SetString(&config, &config.program_name,\nL\"/path/to/my_program\");\nif (PyStatus_Exception(status)) {\ngoto exception;\n}\nstatus = Py_InitializeFromConfig(&config);\nif (PyStatus_Exception(status)) {\ngoto exception;\n}\nPyConfig_Clear(&config);\nreturn;\nexception:\nPyConfig_Clear(&config);\nPy_ExitStatusException(status);\n}\nA more complete example that modifies the default configuration, reads the configuration, and then overrides some parameters. Note that since 3.11, many parameters are not calculated until initialization, and so values cannot be read from the configuration structure. Any values set before initialization is called will be left unchanged by initialization:\nPyStatus init_python(const char *program_name)\n{\nPyStatus status;\nPyConfig config;\nPyConfig_InitPythonConfig(&config);\n/* Set the program name before reading the configuration\n(decode byte string from the locale encoding).\nImplicitly preinitialize Python. 
*/\nstatus = PyConfig_SetBytesString(&config, &config.program_name,\nprogram_name);\nif (PyStatus_Exception(status)) {\ngoto done;\n}\n/* Read all configuration at once */\nstatus = PyConfig_Read(&config);\nif (PyStatus_Exception(status)) {\ngoto done;\n}\n/* Specify sys.path explicitly */\n/* If you want to modify the default set of paths, finish\ninitialization first and then use PySys_GetObject(\"path\") */\nconfig.module_search_paths_set = 1;\nstatus = PyWideStringList_Append(&config.module_search_paths,\nL\"/path/to/stdlib\");\nif (PyStatus_Exception(status)) {\ngoto done;\n}\nstatus = PyWideStringList_Append(&config.module_search_paths,\nL\"/path/to/more/modules\");\nif (PyStatus_Exception(status)) {\ngoto done;\n}\n/* Override executable computed by PyConfig_Read() */\nstatus = PyConfig_SetString(&config, &config.executable,\nL\"/path/to/my_executable\");\nif (PyStatus_Exception(status)) {\ngoto done;\n}\nstatus = Py_InitializeFromConfig(&config);\ndone:\nPyConfig_Clear(&config);\nreturn status;\n}\nIsolated Configuration\u00b6\nPyPreConfig_InitIsolatedConfig()\nand\nPyConfig_InitIsolatedConfig()\nfunctions create a configuration to\nisolate Python from the system. For example, to embed Python into an\napplication.\nThis configuration ignores global configuration variables, environment\nvariables, command line arguments (PyConfig.argv\nis not parsed)\nand user site directory. The C standard streams (ex: stdout\n) and the\nLC_CTYPE locale are left unchanged. Signal handlers are not installed.\nConfiguration files are still used with this configuration to determine\npaths that are unspecified. 
Ensure PyConfig.home\nis specified\nto avoid computing the default path configuration.\nPython Configuration\u00b6\nThe PyPreConfig_InitPythonConfig()\nand PyConfig_InitPythonConfig()\nfunctions create a configuration to build a customized Python which behaves like\nthe regular Python.\nEnvironment variables and command line arguments are used to configure Python, whereas global configuration variables are ignored.\nThis configuration enables C locale coercion (PEP 538)\nand Python UTF-8 Mode\n(PEP 540) depending on the LC_CTYPE locale and the PYTHONUTF8\nand\nPYTHONCOERCECLOCALE\nenvironment variables.\nPython Path Configuration\u00b6\nPyConfig\ncontains multiple fields for the path configuration:\nPath configuration inputs:\ncurrent working directory: to get absolute paths\nPATH\nenvironment variable to get the program full path (from PyConfig.program_name\n)\n__PYVENV_LAUNCHER__\nenvironment variable\n(Windows only) Application paths in the registry under \u201cSoftware\\\\Python\\\\PythonCore\\\\X.Y\\\\PythonPath\u201d of HKEY_CURRENT_USER and HKEY_LOCAL_MACHINE (where X.Y is the Python version).\nPath configuration output fields:\nIf at least one \u201coutput field\u201d is not set, Python calculates the path\nconfiguration to fill unset fields. If\nmodule_search_paths_set\nis equal to 0\n,\nmodule_search_paths\nis overridden and\nmodule_search_paths_set\nis set to 1\n.\nIt is possible to completely ignore the function calculating the default\npath configuration by setting explicitly all path configuration output\nfields listed above. A string is considered as set even if it is empty.\nmodule_search_paths\nis considered as set if\nmodule_search_paths_set\nis set to 1\n. 
In this case,\nmodule_search_paths\nwill be used without modification.\nSet pathconfig_warnings\nto 0\nto suppress warnings when\ncalculating the path configuration (Unix only, Windows does not log any warning).\nIf the base_prefix\nor base_exec_prefix\nfields are not set, they inherit their value from prefix\nand exec_prefix\nrespectively.\nPy_RunMain()\nand Py_Main()\nmodify sys.path\n:\nIf\nrun_filename\nis set and is a directory which contains a __main__.py\nscript, prepend run_filename\nto sys.path\n. If\nisolated\nis zero:\nIf\nrun_module\nis set, prepend the current directory to sys.path\n. Do nothing if the current directory cannot be read. If\nrun_filename\nis set, prepend the directory of the filename to sys.path\n. Otherwise, prepend an empty string to\nsys.path\n.\nIf site_import\nis non-zero, sys.path\ncan be\nmodified by the site\nmodule. If\nuser_site_directory\nis non-zero and the user\u2019s\nsite-package directory exists, the site\nmodule appends the user\u2019s\nsite-package directory to sys.path\n.\nThe following configuration files are used by the path configuration:\npyvenv.cfg\n._pth\nfile (ex: python._pth\n)\npybuilddir.txt\n(Unix only)\nIf a ._pth\nfile is present:\nSet\nisolated\nto 1\n. Set\nuse_environment\nto 0\n. Set\nsite_import\nto 0\n. Set\nsafe_path\nto 1\n.\nIf home\nis not set and a pyvenv.cfg\nfile is present in\nthe same directory as executable\n, or its parent,\nprefix\nand exec_prefix\nare set to that\nlocation. When this happens, base_prefix\nand\nbase_exec_prefix\nstill keep their value, pointing to the\nbase installation. See Virtual Environments for more\ninformation.\nThe __PYVENV_LAUNCHER__\nenvironment variable is used to set\nPyConfig.base_executable\n.\nChanged in version 3.14: prefix\nand exec_prefix\nare now\nset to the pyvenv.cfg\ndirectory. 
This was previously done by site\n,\ntherefore affected by -S\n.\nPy_GetArgcArgv()\u00b6\n-\nvoid Py_GetArgcArgv(int *argc, wchar_t ***argv)\u00b6\nGet the original command line arguments, before Python modified them.\nSee also\nPyConfig.orig_argv\nmember.\nDelaying main module execution\u00b6\nIn some embedding use cases, it may be desirable to separate interpreter initialization from the execution of the main module.\nThis separation can be achieved by setting PyConfig.run_command\nto the empty\nstring during initialization (to prevent the interpreter from dropping into the\ninteractive prompt), and then subsequently executing the desired main module\ncode using __main__.__dict__\nas the global namespace.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 12138} +{"url": "https://docs.python.org/3/c-api/tuple.html", "title": "Tuple Objects", "content": "Tuple Objects\u00b6\n-\nPyTypeObject PyTuple_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python tuple type; it is the same object astuple\nin the Python layer.\n-\nint PyTuple_Check(PyObject *p)\u00b6\nReturn true if p is a tuple object or an instance of a subtype of the tuple type. This function always succeeds.\n-\nint PyTuple_CheckExact(PyObject *p)\u00b6\nReturn true if p is a tuple object, but not an instance of a subtype of the tuple type. This function always succeeds.\n-\nPyObject *PyTuple_New(Py_ssize_t len)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new tuple object of size len, or\nNULL\nwith an exception set on failure.\n-\nPyObject *PyTuple_Pack(Py_ssize_t n, ...)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new tuple object of size n, or\nNULL\nwith an exception set on failure. 
The tuple values are initialized to the subsequent n C arguments pointing to Python objects.PyTuple_Pack(2, a, b)\nis equivalent toPy_BuildValue(\"(OO)\", a, b)\n.\n-\nPy_ssize_t PyTuple_Size(PyObject *p)\u00b6\n- Part of the Stable ABI.\nTake a pointer to a tuple object, and return the size of that tuple. On error, return\n-1\nwith an exception set.\n-\nPy_ssize_t PyTuple_GET_SIZE(PyObject *p)\u00b6\nLike\nPyTuple_Size()\n, but without error checking.\n-\nPyObject *PyTuple_GetItem(PyObject *p, Py_ssize_t pos)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nReturn the object at position pos in the tuple pointed to by p. If pos is negative or out of bounds, return\nNULL\nand set anIndexError\nexception.The returned reference is borrowed from the tuple p (that is: it is only valid as long as you hold a reference to p). To get a strong reference, use\nPy_NewRef(PyTuple_GetItem(...))\norPySequence_GetItem()\n.\n-\nPyObject *PyTuple_GET_ITEM(PyObject *p, Py_ssize_t pos)\u00b6\n- Return value: Borrowed reference.\nLike\nPyTuple_GetItem()\n, but does no checking of its arguments.\n-\nPyObject *PyTuple_GetSlice(PyObject *p, Py_ssize_t low, Py_ssize_t high)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the slice of the tuple pointed to by p between low and high, or\nNULL\nwith an exception set on failure.This is the equivalent of the Python expression\np[low:high]\n. Indexing from the end of the tuple is not supported.\n-\nint PyTuple_SetItem(PyObject *p, Py_ssize_t pos, PyObject *o)\u00b6\n- Part of the Stable ABI.\nInsert a reference to object o at position pos of the tuple pointed to by p. Return\n0\non success. 
If pos is out of bounds, return-1\nand set anIndexError\nexception.Note\nThis function \u201csteals\u201d a reference to o and discards a reference to an item already in the tuple at the affected position.\n-\nvoid PyTuple_SET_ITEM(PyObject *p, Py_ssize_t pos, PyObject *o)\u00b6\nLike\nPyTuple_SetItem()\n, but does no error checking, and should only be used to fill in brand new tuples.Bounds checking is performed as an assertion if Python is built in debug mode or\nwith assertions\n.Note\nThis function \u201csteals\u201d a reference to o, and, unlike\nPyTuple_SetItem()\n, does not discard a reference to any item that is being replaced; any reference in the tuple at position pos will be leaked.Warning\nThis macro should only be used on tuples that are newly created. Using this macro on a tuple that is already in use (or in other words, has a refcount > 1) could lead to undefined behavior.\n-\nint _PyTuple_Resize(PyObject **p, Py_ssize_t newsize)\u00b6\nCan be used to resize a tuple. newsize will be the new length of the tuple. Because tuples are supposed to be immutable, this should only be used if there is only one reference to the object. Do not use this if the tuple may already be known to some other part of the code. The tuple will always grow or shrink at the end. Think of this as destroying the old tuple and creating a new one, only more efficiently. Returns\n0\non success. Client code should never assume that the resulting value of*p\nwill be the same as before calling this function. If the object referenced by*p\nis replaced, the original*p\nis destroyed. On failure, returns-1\nand sets*p\ntoNULL\n, and raisesMemoryError\norSystemError\n.\nStruct Sequence Objects\u00b6\nStruct sequence objects are the C equivalent of namedtuple()\nobjects, i.e. 
a sequence whose items can also be accessed through attributes.\nTo create a struct sequence, you first have to create a specific struct sequence\ntype.\n-\nPyTypeObject *PyStructSequence_NewType(PyStructSequence_Desc *desc)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a new struct sequence type from the data in desc, described below. Instances of the resulting type can be created with\nPyStructSequence_New()\n.Return\nNULL\nwith an exception set on failure.\n-\nvoid PyStructSequence_InitType(PyTypeObject *type, PyStructSequence_Desc *desc)\u00b6\nInitializes a struct sequence type type from desc in place.\n-\nint PyStructSequence_InitType2(PyTypeObject *type, PyStructSequence_Desc *desc)\u00b6\nLike\nPyStructSequence_InitType()\n, but returns0\non success and-1\nwith an exception set on failure.Added in version 3.4.\n-\ntype PyStructSequence_Desc\u00b6\n- Part of the Stable ABI (including all members).\nContains the meta information of a struct sequence type to create.\n-\nconst char *name\u00b6\nFully qualified name of the type; null-terminated UTF-8 encoded. The name must contain the module name.\n-\nconst char *doc\u00b6\nPointer to docstring for the type or\nNULL\nto omit.\n-\nPyStructSequence_Field *fields\u00b6\nPointer to\nNULL\n-terminated array with field names of the new type.\n-\nint n_in_sequence\u00b6\nNumber of fields visible to the Python side (if used as tuple).\n-\nconst char *name\u00b6\n-\ntype PyStructSequence_Field\u00b6\n- Part of the Stable ABI (including all members).\nDescribes a field of a struct sequence. As a struct sequence is modeled as a tuple, all fields are typed as PyObject*. 
The index in the\nfields\narray of thePyStructSequence_Desc\ndetermines which field of the struct sequence is described.-\nconst char *name\u00b6\nName for the field or\nNULL\nto end the list of named fields, set toPyStructSequence_UnnamedField\nto leave unnamed.\n-\nconst char *doc\u00b6\nField docstring or\nNULL\nto omit.\n-\nconst char *name\u00b6\n-\nconst char *const PyStructSequence_UnnamedField\u00b6\n- Part of the Stable ABI since version 3.11.\nSpecial value for a field name to leave it unnamed.\nChanged in version 3.9: The type was changed from\nchar *\n.\n-\nPyObject *PyStructSequence_New(PyTypeObject *type)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreates an instance of type, which must have been created with\nPyStructSequence_NewType()\n.Return\nNULL\nwith an exception set on failure.\n-\nPyObject *PyStructSequence_GetItem(PyObject *p, Py_ssize_t pos)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nReturn the object at position pos in the struct sequence pointed to by p.\nBounds checking is performed as an assertion if Python is built in debug mode or\nwith assertions\n.\n-\nPyObject *PyStructSequence_GET_ITEM(PyObject *p, Py_ssize_t pos)\u00b6\n- Return value: Borrowed reference.\nAlias to\nPyStructSequence_GetItem()\n.Changed in version 3.13: Now implemented as an alias to\nPyStructSequence_GetItem()\n.\n-\nvoid PyStructSequence_SetItem(PyObject *p, Py_ssize_t pos, PyObject *o)\u00b6\n- Part of the Stable ABI.\nSets the field at index pos of the struct sequence p to value o. 
Like\nPyTuple_SET_ITEM()\n, this should only be used to fill in brand new instances.Bounds checking is performed as an assertion if Python is built in debug mode or\nwith assertions\n.Note\nThis function \u201csteals\u201d a reference to o.\n-\nvoid PyStructSequence_SET_ITEM(PyObject *p, Py_ssize_t *pos, PyObject *o)\u00b6\nAlias to\nPyStructSequence_SetItem()\n.Changed in version 3.13: Now implemented as an alias to\nPyStructSequence_SetItem()\n.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1906} +{"url": "https://docs.python.org/3/howto/argparse.html", "title": "Argparse Tutorial", "content": "Argparse Tutorial\u00b6\n- author:\nTshepang Mbambo\nThis tutorial is intended to be a gentle introduction to argparse\n, the\nrecommended command-line parsing module in the Python standard library.\nNote\nThe standard library includes two other libraries directly related\nto command-line parameter processing: the lower level optparse\nmodule (which may require more code to configure for a given application,\nbut also allows an application to request behaviors that argparse\ndoesn\u2019t support), and the very low level getopt\n(which specifically\nserves as an equivalent to the getopt()\nfamily of functions\navailable to C programmers).\nWhile neither of those modules is covered directly in this guide, many of\nthe core concepts in argparse\nfirst originated in optparse\n, so\nsome aspects of this tutorial will also be relevant to optparse\nusers.\nConcepts\u00b6\nLet\u2019s show the sort of functionality that we are going to explore in this introductory tutorial by making use of the ls command:\n$ ls\ncpython devguide prog.py pypy rm-unused-function.patch\n$ ls pypy\nctypes_configure demo dotviewer include lib_pypy lib-python ...\n$ ls -l\ntotal 20\ndrwxr-xr-x 19 wena wena 4096 Feb 18 18:51 cpython\ndrwxr-xr-x 4 wena wena 4096 Feb 8 12:04 devguide\n-rwxr-xr-x 1 wena wena 535 Feb 19 00:05 prog.py\ndrwxr-xr-x 14 wena wena 4096 Feb 7 
00:59 pypy\n-rw-r--r-- 1 wena wena 741 Feb 18 01:01 rm-unused-function.patch\n$ ls --help\nUsage: ls [OPTION]... [FILE]...\nList information about the FILEs (the current directory by default).\nSort entries alphabetically if none of -cftuvSUX nor --sort is specified.\n...\nA few concepts we can learn from the four commands:\nThe ls command is useful when run without any options at all. It defaults to displaying the contents of the current directory.\nIf we want beyond what it provides by default, we tell it a bit more. In this case, we want it to display a different directory,\npypy\n. What we did is specify what is known as a positional argument. It\u2019s named so because the program should know what to do with the value, solely based on where it appears on the command line. This concept is more relevant to a command like cp, whose most basic usage iscp SRC DEST\n. The first position is what you want copied, and the second position is where you want it copied to.Now, say we want to change behaviour of the program. In our example, we display more info for each file instead of just showing the file names. The\n-l\nin that case is known as an optional argument.That\u2019s a snippet of the help text. It\u2019s very useful in that you can come across a program you have never used before, and can figure out how it works simply by reading its help text.\nThe basics\u00b6\nLet us start with a very simple example which does (almost) nothing:\nimport argparse\nparser = argparse.ArgumentParser()\nparser.parse_args()\nFollowing is a result of running the code:\n$ python prog.py\n$ python prog.py --help\nusage: prog.py [-h]\noptions:\n-h, --help show this help message and exit\n$ python prog.py --verbose\nusage: prog.py [-h]\nprog.py: error: unrecognized arguments: --verbose\n$ python prog.py foo\nusage: prog.py [-h]\nprog.py: error: unrecognized arguments: foo\nHere is what is happening:\nRunning the script without any options results in nothing displayed to stdout. 
Not so useful.\nThe second one starts to display the usefulness of the\nargparse\nmodule. We have done almost nothing, but already we get a nice help message.The\n--help\noption, which can also be shortened to-h\n, is the only option we get for free (i.e. no need to specify it). Specifying anything else results in an error. But even then, we do get a useful usage message, also for free.\nIntroducing Positional arguments\u00b6\nAn example:\nimport argparse\nparser = argparse.ArgumentParser()\nparser.add_argument(\"echo\")\nargs = parser.parse_args()\nprint(args.echo)\nAnd running the code:\n$ python prog.py\nusage: prog.py [-h] echo\nprog.py: error: the following arguments are required: echo\n$ python prog.py --help\nusage: prog.py [-h] echo\npositional arguments:\necho\noptions:\n-h, --help show this help message and exit\n$ python prog.py foo\nfoo\nHere is what\u2019s happening:\nWe\u2019ve added the\nadd_argument()\nmethod, which is what we use to specify which command-line options the program is willing to accept. In this case, I\u2019ve named itecho\nso that it\u2019s in line with its function.Calling our program now requires us to specify an option.\nThe\nparse_args()\nmethod actually returns some data from the options specified, in this case,echo\n.The variable is some form of \u2018magic\u2019 that\nargparse\nperforms for free (i.e. no need to specify which variable that value is stored in). You will also notice that its name matches the string argument given to the method,echo\n.\nNote however that, although the help display looks nice and all, it currently\nis not as helpful as it can be. For example we see that we got echo\nas a\npositional argument, but we don\u2019t know what it does, other than by guessing or\nby reading the source code. 
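Incidentally, parse_args() also accepts an explicit list of strings instead of reading sys.argv; this is handy for trying a parser out from the REPL or from a test suite. A small sketch using the same echo parser as above:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("echo")

# Passing a list of strings bypasses sys.argv entirely.
args = parser.parse_args(["hello"])
print(args.echo)  # prints: hello
```

All the command-line transcripts in this tutorial can be reproduced this way without actually running a prog.py script.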
So, let\u2019s make it a bit more useful:\nimport argparse\nparser = argparse.ArgumentParser()\nparser.add_argument(\"echo\", help=\"echo the string you use here\")\nargs = parser.parse_args()\nprint(args.echo)\nAnd we get:\n$ python prog.py -h\nusage: prog.py [-h] echo\npositional arguments:\necho echo the string you use here\noptions:\n-h, --help show this help message and exit\nNow, how about doing something even more useful:\nimport argparse\nparser = argparse.ArgumentParser()\nparser.add_argument(\"square\", help=\"display a square of a given number\")\nargs = parser.parse_args()\nprint(args.square**2)\nFollowing is a result of running the code:\n$ python prog.py 4\nTraceback (most recent call last):\nFile \"prog.py\", line 5, in <module>\nprint(args.square**2)\nTypeError: unsupported operand type(s) for ** or pow(): 'str' and 'int'\nThat didn\u2019t go so well. That\u2019s because argparse\ntreats the options we\ngive it as strings, unless we tell it otherwise. So, let\u2019s tell\nargparse\nto treat that input as an integer:\nimport argparse\nparser = argparse.ArgumentParser()\nparser.add_argument(\"square\", help=\"display a square of a given number\",\ntype=int)\nargs = parser.parse_args()\nprint(args.square**2)\nFollowing is a result of running the code:\n$ python prog.py 4\n16\n$ python prog.py four\nusage: prog.py [-h] square\nprog.py: error: argument square: invalid int value: 'four'\nThat went well. The program now even helpfully quits on bad input before proceeding.\nIntroducing Optional arguments\u00b6\nSo far we have been playing with positional arguments. 
Let us have a look on how to add optional ones:\nimport argparse\nparser = argparse.ArgumentParser()\nparser.add_argument(\"--verbosity\", help=\"increase output verbosity\")\nargs = parser.parse_args()\nif args.verbosity:\nprint(\"verbosity turned on\")\nAnd the output:\n$ python prog.py --verbosity 1\nverbosity turned on\n$ python prog.py\n$ python prog.py --help\nusage: prog.py [-h] [--verbosity VERBOSITY]\noptions:\n-h, --help show this help message and exit\n--verbosity VERBOSITY\nincrease output verbosity\n$ python prog.py --verbosity\nusage: prog.py [-h] [--verbosity VERBOSITY]\nprog.py: error: argument --verbosity: expected one argument\nHere is what is happening:\nThe program is written so as to display something when\n--verbosity\nis specified and display nothing when not.To show that the option is actually optional, there is no error when running the program without it. Note that by default, if an optional argument isn\u2019t used, the relevant variable, in this case\nargs.verbosity\n, is givenNone\nas a value, which is the reason it fails the truth test of theif\nstatement.The help message is a bit different.\nWhen using the\n--verbosity\noption, one must also specify some value, any value.\nThe above example accepts arbitrary integer values for --verbosity\n, but for\nour simple program, only two values are actually useful, True\nor False\n.\nLet\u2019s modify the code accordingly:\nimport argparse\nparser = argparse.ArgumentParser()\nparser.add_argument(\"--verbose\", help=\"increase output verbosity\",\naction=\"store_true\")\nargs = parser.parse_args()\nif args.verbose:\nprint(\"verbosity turned on\")\nAnd the output:\n$ python prog.py --verbose\nverbosity turned on\n$ python prog.py --verbose 1\nusage: prog.py [-h] [--verbose]\nprog.py: error: unrecognized arguments: 1\n$ python prog.py --help\nusage: prog.py [-h] [--verbose]\noptions:\n-h, --help show this help message and exit\n--verbose increase output verbosity\nHere is what is happening:\nThe 
option is now more of a flag than something that requires a value. We even changed the name of the option to match that idea. Note that we now specify a new keyword,\naction\n, and give it the value\"store_true\"\n. This means that, if the option is specified, assign the valueTrue\ntoargs.verbose\n. Not specifying it impliesFalse\n.It complains when you specify a value, in true spirit of what flags actually are.\nNotice the different help text.\nShort options\u00b6\nIf you are familiar with command line usage, you will notice that I haven\u2019t yet touched on the topic of short versions of the options. It\u2019s quite simple:\nimport argparse\nparser = argparse.ArgumentParser()\nparser.add_argument(\"-v\", \"--verbose\", help=\"increase output verbosity\",\naction=\"store_true\")\nargs = parser.parse_args()\nif args.verbose:\nprint(\"verbosity turned on\")\nAnd here goes:\n$ python prog.py -v\nverbosity turned on\n$ python prog.py --help\nusage: prog.py [-h] [-v]\noptions:\n-h, --help show this help message and exit\n-v, --verbose increase output verbosity\nNote that the new ability is also reflected in the help text.\nCombining Positional and Optional arguments\u00b6\nOur program keeps growing in complexity:\nimport argparse\nparser = argparse.ArgumentParser()\nparser.add_argument(\"square\", type=int,\nhelp=\"display a square of a given number\")\nparser.add_argument(\"-v\", \"--verbose\", action=\"store_true\",\nhelp=\"increase output verbosity\")\nargs = parser.parse_args()\nanswer = args.square**2\nif args.verbose:\nprint(f\"the square of {args.square} equals {answer}\")\nelse:\nprint(answer)\nAnd now the output:\n$ python prog.py\nusage: prog.py [-h] [-v] square\nprog.py: error: the following arguments are required: square\n$ python prog.py 4\n16\n$ python prog.py 4 --verbose\nthe square of 4 equals 16\n$ python prog.py --verbose 4\nthe square of 4 equals 16\nWe\u2019ve brought back a positional argument, hence the complaint.\nNote that the order does not 
matter.\nHow about we give this program of ours back the ability to have multiple verbosity values, and actually get to use them:\nimport argparse\nparser = argparse.ArgumentParser()\nparser.add_argument(\"square\", type=int,\nhelp=\"display a square of a given number\")\nparser.add_argument(\"-v\", \"--verbosity\", type=int,\nhelp=\"increase output verbosity\")\nargs = parser.parse_args()\nanswer = args.square**2\nif args.verbosity == 2:\nprint(f\"the square of {args.square} equals {answer}\")\nelif args.verbosity == 1:\nprint(f\"{args.square}^2 == {answer}\")\nelse:\nprint(answer)\nAnd the output:\n$ python prog.py 4\n16\n$ python prog.py 4 -v\nusage: prog.py [-h] [-v VERBOSITY] square\nprog.py: error: argument -v/--verbosity: expected one argument\n$ python prog.py 4 -v 1\n4^2 == 16\n$ python prog.py 4 -v 2\nthe square of 4 equals 16\n$ python prog.py 4 -v 3\n16\nThese all look good except the last one, which exposes a bug in our program.\nLet\u2019s fix it by restricting the values the --verbosity\noption can accept:\nimport argparse\nparser = argparse.ArgumentParser()\nparser.add_argument(\"square\", type=int,\nhelp=\"display a square of a given number\")\nparser.add_argument(\"-v\", \"--verbosity\", type=int, choices=[0, 1, 2],\nhelp=\"increase output verbosity\")\nargs = parser.parse_args()\nanswer = args.square**2\nif args.verbosity == 2:\nprint(f\"the square of {args.square} equals {answer}\")\nelif args.verbosity == 1:\nprint(f\"{args.square}^2 == {answer}\")\nelse:\nprint(answer)\nAnd the output:\n$ python prog.py 4 -v 3\nusage: prog.py [-h] [-v {0,1,2}] square\nprog.py: error: argument -v/--verbosity: invalid choice: 3 (choose from 0, 1, 2)\n$ python prog.py 4 -h\nusage: prog.py [-h] [-v {0,1,2}] square\npositional arguments:\nsquare display a square of a given number\noptions:\n-h, --help show this help message and exit\n-v, --verbosity {0,1,2}\nincrease output verbosity\nNote that the change also reflects both in the error message as well as the help 
string.\nNow, let\u2019s use a different approach of playing with verbosity, which is pretty\ncommon. It also matches the way the CPython executable handles its own\nverbosity argument (check the output of python --help\n):\nimport argparse\nparser = argparse.ArgumentParser()\nparser.add_argument(\"square\", type=int,\nhelp=\"display the square of a given number\")\nparser.add_argument(\"-v\", \"--verbosity\", action=\"count\",\nhelp=\"increase output verbosity\")\nargs = parser.parse_args()\nanswer = args.square**2\nif args.verbosity == 2:\nprint(f\"the square of {args.square} equals {answer}\")\nelif args.verbosity == 1:\nprint(f\"{args.square}^2 == {answer}\")\nelse:\nprint(answer)\nWe have introduced another action, \u201ccount\u201d, to count the number of occurrences of specific options.\n$ python prog.py 4\n16\n$ python prog.py 4 -v\n4^2 == 16\n$ python prog.py 4 -vv\nthe square of 4 equals 16\n$ python prog.py 4 --verbosity --verbosity\nthe square of 4 equals 16\n$ python prog.py 4 -v 1\nusage: prog.py [-h] [-v] square\nprog.py: error: unrecognized arguments: 1\n$ python prog.py 4 -h\nusage: prog.py [-h] [-v] square\npositional arguments:\nsquare display a square of a given number\noptions:\n-h, --help show this help message and exit\n-v, --verbosity increase output verbosity\n$ python prog.py 4 -vvv\n16\nYes, it\u2019s now more of a flag (similar to\naction=\"store_true\"\n) in the previous version of our script. That should explain the complaint.It also behaves similar to \u201cstore_true\u201d action.\nNow here\u2019s a demonstration of what the \u201ccount\u201d action gives. 
You\u2019ve probably seen this sort of usage before.\nAnd if you don\u2019t specify the\n-v\nflag, that flag is considered to haveNone\nvalue.As should be expected, specifying the long form of the flag, we should get the same output.\nSadly, our help output isn\u2019t very informative on the new ability our script has acquired, but that can always be fixed by improving the documentation for our script (e.g. via the\nhelp\nkeyword argument).That last output exposes a bug in our program.\nLet\u2019s fix:\nimport argparse\nparser = argparse.ArgumentParser()\nparser.add_argument(\"square\", type=int,\nhelp=\"display a square of a given number\")\nparser.add_argument(\"-v\", \"--verbosity\", action=\"count\",\nhelp=\"increase output verbosity\")\nargs = parser.parse_args()\nanswer = args.square**2\n# bugfix: replace == with >=\nif args.verbosity >= 2:\nprint(f\"the square of {args.square} equals {answer}\")\nelif args.verbosity >= 1:\nprint(f\"{args.square}^2 == {answer}\")\nelse:\nprint(answer)\nAnd this is what it gives:\n$ python prog.py 4 -vvv\nthe square of 4 equals 16\n$ python prog.py 4 -vvvv\nthe square of 4 equals 16\n$ python prog.py 4\nTraceback (most recent call last):\nFile \"prog.py\", line 11, in <module>\nif args.verbosity >= 2:\nTypeError: '>=' not supported between instances of 'NoneType' and 'int'\nFirst output went well, and fixes the bug we had before. 
That is, we want any value >= 2 to be as verbose as possible.\nThird output not so good.\nLet\u2019s fix that bug:\nimport argparse\nparser = argparse.ArgumentParser()\nparser.add_argument(\"square\", type=int,\nhelp=\"display a square of a given number\")\nparser.add_argument(\"-v\", \"--verbosity\", action=\"count\", default=0,\nhelp=\"increase output verbosity\")\nargs = parser.parse_args()\nanswer = args.square**2\nif args.verbosity >= 2:\nprint(f\"the square of {args.square} equals {answer}\")\nelif args.verbosity >= 1:\nprint(f\"{args.square}^2 == {answer}\")\nelse:\nprint(answer)\nWe\u2019ve just introduced yet another keyword, default\n.\nWe\u2019ve set it to 0\nin order to make it comparable to the other int values.\nRemember that by default,\nif an optional argument isn\u2019t specified,\nit gets the None\nvalue, and that cannot be compared to an int value\n(hence the TypeError\nexception).\nAnd:\n$ python prog.py 4\n16\nYou can go quite far just with what we\u2019ve learned so far,\nand we have only scratched the surface.\nThe argparse\nmodule is very powerful,\nand we\u2019ll explore a bit more of it before we end this tutorial.\nGetting a little more advanced\u00b6\nWhat if we wanted to expand our tiny program to perform other powers, not just squares:\nimport argparse\nparser = argparse.ArgumentParser()\nparser.add_argument(\"x\", type=int, help=\"the base\")\nparser.add_argument(\"y\", type=int, help=\"the exponent\")\nparser.add_argument(\"-v\", \"--verbosity\", action=\"count\", default=0)\nargs = parser.parse_args()\nanswer = args.x**args.y\nif args.verbosity >= 2:\nprint(f\"{args.x} to the power {args.y} equals {answer}\")\nelif args.verbosity >= 1:\nprint(f\"{args.x}^{args.y} == {answer}\")\nelse:\nprint(answer)\nOutput:\n$ python prog.py\nusage: prog.py [-h] [-v] x y\nprog.py: error: the following arguments are required: x, y\n$ python prog.py -h\nusage: prog.py [-h] [-v] x y\npositional arguments:\nx the base\ny the exponent\noptions:\n-h, 
--help show this help message and exit\n-v, --verbosity\n$ python prog.py 4 2 -v\n4^2 == 16\nNotice that so far we\u2019ve been using verbosity level to change the text that gets displayed. The following example instead uses verbosity level to display more text instead:\nimport argparse\nparser = argparse.ArgumentParser()\nparser.add_argument(\"x\", type=int, help=\"the base\")\nparser.add_argument(\"y\", type=int, help=\"the exponent\")\nparser.add_argument(\"-v\", \"--verbosity\", action=\"count\", default=0)\nargs = parser.parse_args()\nanswer = args.x**args.y\nif args.verbosity >= 2:\nprint(f\"Running '{__file__}'\")\nif args.verbosity >= 1:\nprint(f\"{args.x}^{args.y} == \", end=\"\")\nprint(answer)\nOutput:\n$ python prog.py 4 2\n16\n$ python prog.py 4 2 -v\n4^2 == 16\n$ python prog.py 4 2 -vv\nRunning 'prog.py'\n4^2 == 16\nSpecifying ambiguous arguments\u00b6\nWhen there is ambiguity in deciding whether an argument is positional or for an\nargument, --\ncan be used to tell parse_args()\nthat\neverything after that is a positional argument:\n>>> parser = argparse.ArgumentParser(prog='PROG')\n>>> parser.add_argument('-n', nargs='+')\n>>> parser.add_argument('args', nargs='*')\n>>> # ambiguous, so parse_args assumes it's an option\n>>> parser.parse_args(['-f'])\nusage: PROG [-h] [-n N [N ...]] [args ...]\nPROG: error: unrecognized arguments: -f\n>>> parser.parse_args(['--', '-f'])\nNamespace(args=['-f'], n=None)\n>>> # ambiguous, so the -n option greedily accepts arguments\n>>> parser.parse_args(['-n', '1', '2', '3'])\nNamespace(args=[], n=['1', '2', '3'])\n>>> parser.parse_args(['-n', '1', '--', '2', '3'])\nNamespace(args=['2', '3'], n=['1'])\nConflicting options\u00b6\nSo far, we have been working with two methods of an\nargparse.ArgumentParser\ninstance. Let\u2019s introduce a third one,\nadd_mutually_exclusive_group()\n. It allows for us to specify options that\nconflict with each other. 
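A minimal sketch of what a mutually exclusive group does, kept separate from our program for clarity (we will fold the idea into the real script next):

```python
import argparse

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument("-v", "--verbose", action="store_true")
group.add_argument("-q", "--quiet", action="store_true")

# Each flag is fine on its own.
print(parser.parse_args(["-v"]))
print(parser.parse_args(["-q"]))

# Combining them is a parse error; argparse prints a message and exits.
try:
    parser.parse_args(["-v", "-q"])
except SystemExit:
    print("rejected -v together with -q")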
Let\u2019s also change the rest of the program so that\nthe new functionality makes more sense:\nwe\u2019ll introduce the --quiet\noption,\nwhich will be the opposite of the --verbose\none:\nimport argparse\nparser = argparse.ArgumentParser()\ngroup = parser.add_mutually_exclusive_group()\ngroup.add_argument(\"-v\", \"--verbose\", action=\"store_true\")\ngroup.add_argument(\"-q\", \"--quiet\", action=\"store_true\")\nparser.add_argument(\"x\", type=int, help=\"the base\")\nparser.add_argument(\"y\", type=int, help=\"the exponent\")\nargs = parser.parse_args()\nanswer = args.x**args.y\nif args.quiet:\nprint(answer)\nelif args.verbose:\nprint(f\"{args.x} to the power {args.y} equals {answer}\")\nelse:\nprint(f\"{args.x}^{args.y} == {answer}\")\nOur program is now simpler, and we\u2019ve lost some functionality for the sake of demonstration. Anyways, here\u2019s the output:\n$ python prog.py 4 2\n4^2 == 16\n$ python prog.py 4 2 -q\n16\n$ python prog.py 4 2 -v\n4 to the power 2 equals 16\n$ python prog.py 4 2 -vq\nusage: prog.py [-h] [-v | -q] x y\nprog.py: error: argument -q/--quiet: not allowed with argument -v/--verbose\n$ python prog.py 4 2 -v --quiet\nusage: prog.py [-h] [-v | -q] x y\nprog.py: error: argument -q/--quiet: not allowed with argument -v/--verbose\nThat should be easy to follow. I\u2019ve added that last output so you can see the sort of flexibility you get, i.e. 
mixing long form options with short form ones.\nBefore we conclude, you probably want to tell your users the main purpose of your program, just in case they don\u2019t know:\nimport argparse\nparser = argparse.ArgumentParser(description=\"calculate X to the power of Y\")\ngroup = parser.add_mutually_exclusive_group()\ngroup.add_argument(\"-v\", \"--verbose\", action=\"store_true\")\ngroup.add_argument(\"-q\", \"--quiet\", action=\"store_true\")\nparser.add_argument(\"x\", type=int, help=\"the base\")\nparser.add_argument(\"y\", type=int, help=\"the exponent\")\nargs = parser.parse_args()\nanswer = args.x**args.y\nif args.quiet:\nprint(answer)\nelif args.verbose:\nprint(f\"{args.x} to the power {args.y} equals {answer}\")\nelse:\nprint(f\"{args.x}^{args.y} == {answer}\")\nNote that slight difference in the usage text. Note the [-v | -q]\n,\nwhich tells us that we can either use -v\nor -q\n,\nbut not both at the same time:\n$ python prog.py --help\nusage: prog.py [-h] [-v | -q] x y\ncalculate X to the power of Y\npositional arguments:\nx the base\ny the exponent\noptions:\n-h, --help show this help message and exit\n-v, --verbose\n-q, --quiet\nHow to translate the argparse output\u00b6\nThe output of the argparse\nmodule such as its help text and error\nmessages are all made translatable using the gettext\nmodule. This\nallows applications to easily localize messages produced by\nargparse\n. See also Internationalizing your programs and modules.\nFor instance, in this argparse\noutput:\n$ python prog.py --help\nusage: prog.py [-h] [-v | -q] x y\ncalculate X to the power of Y\npositional arguments:\nx the base\ny the exponent\noptions:\n-h, --help show this help message and exit\n-v, --verbose\n-q, --quiet\nThe strings usage:\n, positional arguments:\n, options:\nand\nshow this help message and exit\nare all translatable.\nIn order to translate these strings, they must first be extracted\ninto a .po\nfile. 
For example, using Babel,\nrun this command:\n$ pybabel extract -o messages.po /usr/lib/python3.12/argparse.py\nThis command will extract all translatable strings from the argparse\nmodule and output them into a file named messages.po\n. This command assumes\nthat your Python installation is in /usr/lib\n.\nYou can find out the location of the argparse\nmodule on your system\nusing this script:\nimport argparse\nprint(argparse.__file__)\nOnce the messages in the .po\nfile are translated and the translations are\ninstalled using gettext\n, argparse\nwill be able to display the\ntranslated messages.\nTo translate your own strings in the argparse\noutput, use gettext\n.\nCustom type converters\u00b6\nThe argparse\nmodule allows you to specify custom type converters for\nyour command-line arguments. This allows you to modify user input before it\u2019s\nstored in the argparse.Namespace\n. This can be useful when you need to\npre-process the input before it is used in your program.\nWhen using a custom type converter, you can use any callable that takes a single string argument (the argument value) and returns the converted value. 
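For instance, the converter can be an ordinary function; the int_list name below is just illustrative, not part of argparse. A ValueError raised inside it is reported by argparse as an invalid-value error:

```python
import argparse

def int_list(text):
    # Turn a string like "1,2,3" into [1, 2, 3].
    return [int(item) for item in text.split(",")]

parser = argparse.ArgumentParser()
parser.add_argument("--nums", type=int_list)

args = parser.parse_args(["--nums", "1,2,3"])
print(args.nums)  # prints: [1, 2, 3]
```

Running the parser on a malformed value such as --nums 1,x would exit with an error message built from the converter's name.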
However, if you need to handle more complex scenarios, you can use a custom action class with the action parameter instead.\nFor example, let\u2019s say you want to handle arguments with different prefixes and process them accordingly:\nimport argparse\nparser = argparse.ArgumentParser(prefix_chars='-+')\nparser.add_argument('-a', metavar='', action='append',\ntype=lambda x: ('-', x))\nparser.add_argument('+a', metavar='', action='append',\ntype=lambda x: ('+', x))\nargs = parser.parse_args()\nprint(args)\nOutput:\n$ python prog.py -a value1 +a value2\nNamespace(a=[('-', 'value1'), ('+', 'value2')])\nIn this example, we:\nCreated a parser with custom prefix characters using the\nprefix_chars\nparameter.Defined two arguments,\n-a\nand+a\n, which used thetype\nparameter to create custom type converters to store the value in a tuple with the prefix.\nWithout the custom type converters, the arguments would have treated the -a\nand +a\nas the same argument, which would have been undesirable. 
By using custom\ntype converters, we were able to differentiate between the two arguments.\nConclusion\u00b6\nThe argparse\nmodule offers a lot more than shown here.\nIts docs are quite detailed and thorough, and full of examples.\nHaving gone through this tutorial, you should easily digest them\nwithout feeling overwhelmed.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 6070}
+{"url": "https://docs.python.org/3/c-api/function.html", "title": "Function Objects", "content": "Function Objects\u00b6\nThere are a few functions specific to Python functions.\n-\ntype PyFunctionObject\u00b6\nThe C structure used for functions.\n-\nPyTypeObject PyFunction_Type\u00b6\nThis is an instance of\nPyTypeObject\nand represents the Python function type. It is exposed to Python programmers astypes.FunctionType\n.\n-\nint PyFunction_Check(PyObject *o)\u00b6\nReturn true if o is a function object (has type\nPyFunction_Type\n). The parameter must not beNULL\n. This function always succeeds.\n-\nPyObject *PyFunction_New(PyObject *code, PyObject *globals)\u00b6\n- Return value: New reference.\nReturn a new function object associated with the code object code. globals must be a dictionary with the global variables accessible to the function.\nThe function\u2019s docstring and name are retrieved from the code object.\n__module__\nis retrieved from globals. 
The argument defaults, annotations and closure are set toNULL\n.__qualname__\nis set to the same value as the code object\u2019sco_qualname\nfield.\n-\nPyObject *PyFunction_NewWithQualName(PyObject *code, PyObject *globals, PyObject *qualname)\u00b6\n- Return value: New reference.\nAs\nPyFunction_New()\n, but also allows setting the function object\u2019s__qualname__\nattribute. qualname should be a unicode object orNULL\n; ifNULL\n, the__qualname__\nattribute is set to the same value as the code object\u2019sco_qualname\nfield.Added in version 3.3.\n-\nPyObject *PyFunction_GetCode(PyObject *op)\u00b6\n- Return value: Borrowed reference.\nReturn the code object associated with the function object op.\n-\nPyObject *PyFunction_GetGlobals(PyObject *op)\u00b6\n- Return value: Borrowed reference.\nReturn the globals dictionary associated with the function object op.\n-\nPyObject *PyFunction_GetModule(PyObject *op)\u00b6\n- Return value: Borrowed reference.\nReturn a borrowed reference to the\n__module__\nattribute of the function object op. It can be NULL.This is normally a\nstring\ncontaining the module name, but can be set to any other object by Python code.\n-\nPyObject *PyFunction_GetDefaults(PyObject *op)\u00b6\n- Return value: Borrowed reference.\nReturn the argument default values of the function object op. This can be a tuple of arguments or\nNULL\n.\n-\nint PyFunction_SetDefaults(PyObject *op, PyObject *defaults)\u00b6\nSet the argument default values for the function object op. 
defaults must be\nPy_None\nor a tuple.Raises\nSystemError\nand returns-1\non failure.\n-\nvoid PyFunction_SetVectorcall(PyFunctionObject *func, vectorcallfunc vectorcall)\u00b6\nSet the vectorcall field of a given function object func.\nWarning: extensions using this API must preserve the behavior of the unaltered (default) vectorcall function!\nAdded in version 3.12.\n-\nPyObject *PyFunction_GetKwDefaults(PyObject *op)\u00b6\n- Return value: Borrowed reference.\nReturn the keyword-only argument default values of the function object op. This can be a dictionary of arguments or\nNULL\n.\n-\nint PyFunction_SetKwDefaults(PyObject *op, PyObject *defaults)\u00b6\nSet the keyword-only argument default values of the function object op. defaults must be a dictionary of keyword-only arguments or\nPy_None\n.This function returns\n0\non success, and returns-1\nwith an exception set on failure.\n-\nPyObject *PyFunction_GetClosure(PyObject *op)\u00b6\n- Return value: Borrowed reference.\nReturn the closure associated with the function object op. This can be\nNULL\nor a tuple of cell objects.\n-\nint PyFunction_SetClosure(PyObject *op, PyObject *closure)\u00b6\nSet the closure associated with the function object op. closure must be\nPy_None\nor a tuple of cell objects.Raises\nSystemError\nand returns-1\non failure.\n-\nPyObject *PyFunction_GetAnnotations(PyObject *op)\u00b6\n- Return value: Borrowed reference.\nReturn the annotations of the function object op. This can be a mutable dictionary or\nNULL\n.\n-\nint PyFunction_SetAnnotations(PyObject *op, PyObject *annotations)\u00b6\nSet the annotations for the function object op. 
annotations must be a dictionary or\nPy_None\n. Raises\nSystemError\nand returns\n-1\non failure.\n-\nPyObject *PyFunction_GET_CODE(PyObject *op)\u00b6\n-\nPyObject *PyFunction_GET_GLOBALS(PyObject *op)\u00b6\n-\nPyObject *PyFunction_GET_MODULE(PyObject *op)\u00b6\n-\nPyObject *PyFunction_GET_DEFAULTS(PyObject *op)\u00b6\n-\nPyObject *PyFunction_GET_KW_DEFAULTS(PyObject *op)\u00b6\n-\nPyObject *PyFunction_GET_CLOSURE(PyObject *op)\u00b6\n-\nPyObject *PyFunction_GET_ANNOTATIONS(PyObject *op)\u00b6\n- Return value: Borrowed reference.\nThese functions are similar to their\nPyFunction_Get*\ncounterparts, but do not do type checking. Passing anything other than an instance of\nPyFunction_Type\nis undefined behavior.\n-\nint PyFunction_AddWatcher(PyFunction_WatchCallback callback)\u00b6\nRegister callback as a function watcher for the current interpreter. Return an ID which may be passed to\nPyFunction_ClearWatcher()\n. In case of error (e.g. no more watcher IDs available), return\n-1\nand set an exception. Added in version 3.12.\n-\nint PyFunction_ClearWatcher(int watcher_id)\u00b6\nClear watcher identified by watcher_id previously returned from\nPyFunction_AddWatcher()\nfor the current interpreter. Return\n0\non success, or\n-1\nand set an exception on error (e.g. if the given watcher_id was never registered.) Added in version 3.12.\n-\ntype PyFunction_WatchEvent\u00b6\nEnumeration of possible function watcher events:\nPyFunction_EVENT_CREATE\nPyFunction_EVENT_DESTROY\nPyFunction_EVENT_MODIFY_CODE\nPyFunction_EVENT_MODIFY_DEFAULTS\nPyFunction_EVENT_MODIFY_KWDEFAULTS\nAdded in version 3.12.\n-\ntypedef int (*PyFunction_WatchCallback)(PyFunction_WatchEvent event, PyFunctionObject *func, PyObject *new_value)\u00b6\nType of a function watcher callback function.\nIf event is\nPyFunction_EVENT_CREATE\nor\nPyFunction_EVENT_DESTROY\nthen new_value will be\nNULL\n. 
Otherwise, new_value will hold a borrowed reference to the new value that is about to be stored in func for the attribute that is being modified. The callback may inspect but must not modify func; doing so could have unpredictable effects, including infinite recursion.\nIf event is\nPyFunction_EVENT_CREATE\n, then the callback is invoked after func has been fully initialized. Otherwise, the callback is invoked before the modification to func takes place, so the prior state of func can be inspected. The runtime is permitted to optimize away the creation of function objects when possible. In such cases no event will be emitted. Although this creates the possibility of an observable difference of runtime behavior depending on optimization decisions, it does not change the semantics of the Python code being executed. If event is\nPyFunction_EVENT_DESTROY\n, taking a reference in the callback to the about-to-be-destroyed function will resurrect it, preventing it from being freed at this time. When the resurrected object is destroyed later, any watcher callbacks active at that time will be called again. If the callback sets an exception, it must return\n-1\n; this exception will be printed as an unraisable exception using\nPyErr_WriteUnraisable()\n. Otherwise it should return\n0\n. There may already be a pending exception set on entry to the callback. In this case, the callback should return\n0\nwith the same exception still set. 
This means the callback may not call any other API that can set an exception unless it saves and clears the exception state first, and restores it before returning. Added in version 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1795}
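The C getters and setters in the record above (PyFunction_GetDefaults, PyFunction_GetKwDefaults, PyFunction_GetClosure, PyFunction_GetAnnotations) read the same per-function slots that Python code sees as attributes. A minimal Python-level sketch of those slots (illustrative only, not the C API itself; the function names are invented for the example):

```python
# Python-level view of the slots that PyFunction_GetDefaults,
# PyFunction_GetKwDefaults, PyFunction_GetClosure and
# PyFunction_GetAnnotations expose from C.
def make_scaler(factor):
    def scale(x, bias=1, *, clamp=None) -> int:
        return x * factor + bias
    return scale

f = make_scaler(3)
print(f.__defaults__)                  # positional defaults: (1,)
print(f.__kwdefaults__)                # keyword-only defaults: {'clamp': None}
print(f.__closure__[0].cell_contents)  # the cell capturing `factor`: 3
print(f.__annotations__['return'])     # the annotations dict entry: int
```

Note the C functions return borrowed references to exactly these objects; the `Set*` counterparts replace them wholesale.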
The returned value is a string that can safely be used as one token in a shell command line, for cases where you cannot use a list.\nWarning\nThe\nshlex\nmodule is only designed for Unix shells.The\nquote()\nfunction is not guaranteed to be correct on non-POSIX compliant shells or shells from other operating systems such as Windows. Executing commands quoted by this module on such shells can open up the possibility of a command injection vulnerability.Consider using functions that pass command arguments with lists such as\nsubprocess.run()\nwithshell=False\n.This idiom would be unsafe:\n>>> filename = 'somefile; rm -rf ~' >>> command = 'ls -l {}'.format(filename) >>> print(command) # executed by a shell: boom! ls -l somefile; rm -rf ~\nquote()\nlets you plug the security hole:>>> from shlex import quote >>> command = 'ls -l {}'.format(quote(filename)) >>> print(command) ls -l 'somefile; rm -rf ~' >>> remote_command = 'ssh home {}'.format(quote(command)) >>> print(remote_command) ssh home 'ls -l '\"'\"'somefile; rm -rf ~'\"'\"''\nThe quoting is compatible with UNIX shells and with\nsplit()\n:>>> from shlex import split >>> remote_command = split(remote_command) >>> remote_command ['ssh', 'home', \"ls -l 'somefile; rm -rf ~'\"] >>> command = split(remote_command[-1]) >>> command ['ls', '-l', 'somefile; rm -rf ~']\nAdded in version 3.3.\nThe shlex\nmodule defines the following class:\n- class shlex.shlex(instream=None, infile=None, posix=False, punctuation_chars=False)\u00b6\nA\nshlex\ninstance or subclass instance is a lexical analyzer object. The initialization argument, if present, specifies where to read characters from. It must be a file-/stream-like object withread()\nandreadline()\nmethods, or a string. If no argument is given, input will be taken fromsys.stdin\n. The second optional argument is a filename string, which sets the initial value of theinfile\nattribute. 
If the instream argument is omitted or equal tosys.stdin\n, this second argument defaults to \u201cstdin\u201d. The posix argument defines the operational mode: when posix is not true (default), theshlex\ninstance will operate in compatibility mode. When operating in POSIX mode,shlex\nwill try to be as close as possible to the POSIX shell parsing rules. The punctuation_chars argument provides a way to make the behaviour even closer to how real shells parse. This can take a number of values: the default value,False\n, preserves the behaviour seen under Python 3.5 and earlier. If set toTrue\n, then parsing of the characters();<>|&\nis changed: any run of these characters (considered punctuation characters) is returned as a single token. If set to a non-empty string of characters, those characters will be used as the punctuation characters. Any characters in thewordchars\nattribute that appear in punctuation_chars will be removed fromwordchars\n. See Improved Compatibility with Shells for more information. punctuation_chars can be set only uponshlex\ninstance creation and can\u2019t be modified later.Changed in version 3.6: The punctuation_chars parameter was added.\nSee also\n- Module\nconfigparser\nParser for configuration files similar to the Windows\n.ini\nfiles.\nshlex Objects\u00b6\nA shlex\ninstance has the following methods:\n- shlex.get_token()\u00b6\nReturn a token. If tokens have been stacked using\npush_token()\n, pop a token off the stack. Otherwise, read one from the input stream. If reading encounters an immediate end-of-file,eof\nis returned (the empty string (''\n) in non-POSIX mode, andNone\nin POSIX mode).\n- shlex.push_token(str)\u00b6\nPush the argument onto the token stack.\n- shlex.read_token()\u00b6\nRead a raw token. Ignore the pushback stack, and do not interpret source requests. 
(This is not ordinarily a useful entry point, and is documented here only for the sake of completeness.)\n- shlex.sourcehook(filename)\u00b6\nWhen\nshlex\ndetects a source request (seesource\nbelow) this method is given the following token as argument, and expected to return a tuple consisting of a filename and an open file-like object.Normally, this method first strips any quotes off the argument. If the result is an absolute pathname, or there was no previous source request in effect, or the previous source was a stream (such as\nsys.stdin\n), the result is left alone. Otherwise, if the result is a relative pathname, the directory part of the name of the file immediately before it on the source inclusion stack is prepended (this behavior is like the way the C preprocessor handles#include \"file.h\"\n).The result of the manipulations is treated as a filename, and returned as the first component of the tuple, with\nopen()\ncalled on it to yield the second component. (Note: this is the reverse of the order of arguments in instance initialization!)This hook is exposed so that you can use it to implement directory search paths, addition of file extensions, and other namespace hacks. There is no corresponding \u2018close\u2019 hook, but a shlex instance will call the\nclose()\nmethod of the sourced input stream when it returns EOF.For more explicit control of source stacking, use the\npush_source()\nandpop_source()\nmethods.\n- shlex.push_source(newstream, newfile=None)\u00b6\nPush an input source stream onto the input stack. If the filename argument is specified it will later be available for use in error messages. This is the same method used internally by the\nsourcehook()\nmethod.\n- shlex.pop_source()\u00b6\nPop the last-pushed input source from the input stack. 
This is the same method used internally when the lexer reaches EOF on a stacked input stream.\n- shlex.error_leader(infile=None, lineno=None)\u00b6\nThis method generates an error message leader in the format of a Unix C compiler error label; the format is\n'\"%s\", line %d: '\n, where the%s\nis replaced with the name of the current source file and the%d\nwith the current input line number (the optional arguments can be used to override these).This convenience is provided to encourage\nshlex\nusers to generate error messages in the standard, parseable format understood by Emacs and other Unix tools.\nInstances of shlex\nsubclasses have some public instance\nvariables which either control lexical analysis or can be used for debugging:\n- shlex.commenters\u00b6\nThe string of characters that are recognized as comment beginners. All characters from the comment beginner to end of line are ignored. Includes just\n'#'\nby default.\n- shlex.wordchars\u00b6\nThe string of characters that will accumulate into multi-character tokens. By default, includes all ASCII alphanumerics and underscore. In POSIX mode, the accented characters in the Latin-1 set are also included. If\npunctuation_chars\nis not empty, the characters~-./*?=\n, which can appear in filename specifications and command line parameters, will also be included in this attribute, and any characters which appear inpunctuation_chars\nwill be removed fromwordchars\nif they are present there. Ifwhitespace_split\nis set toTrue\n, this will have no effect.\n- shlex.whitespace\u00b6\nCharacters that will be considered whitespace and skipped. Whitespace bounds tokens. By default, includes space, tab, linefeed and carriage-return.\n- shlex.escape\u00b6\nCharacters that will be considered as escape. This will be only used in POSIX mode, and includes just\n'\\'\nby default.\n- shlex.quotes\u00b6\nCharacters that will be considered string quotes. 
The token accumulates until the same quote is encountered again (thus, different quote types protect each other as in the shell.) By default, includes ASCII single and double quotes.\n- shlex.escapedquotes\u00b6\nCharacters in\nquotes\nthat will interpret escape characters defined inescape\n. This is only used in POSIX mode, and includes just'\"'\nby default.\n- shlex.whitespace_split\u00b6\nIf\nTrue\n, tokens will only be split in whitespaces. This is useful, for example, for parsing command lines withshlex\n, getting tokens in a similar way to shell arguments. When used in combination withpunctuation_chars\n, tokens will be split on whitespace in addition to those characters.Changed in version 3.8: The\npunctuation_chars\nattribute was made compatible with thewhitespace_split\nattribute.\n- shlex.infile\u00b6\nThe name of the current input file, as initially set at class instantiation time or stacked by later source requests. It may be useful to examine this when constructing error messages.\n- shlex.source\u00b6\nThis attribute is\nNone\nby default. If you assign a string to it, that string will be recognized as a lexical-level inclusion request similar to thesource\nkeyword in various shells. That is, the immediately following token will be opened as a filename and input will be taken from that stream until EOF, at which point theclose()\nmethod of that stream will be called and the input source will again become the original input stream. Source requests may be stacked any number of levels deep.\n- shlex.debug\u00b6\nIf this attribute is numeric and\n1\nor more, ashlex\ninstance will print verbose progress output on its behavior. If you need to use this, you can read the module source code to learn the details.\n- shlex.lineno\u00b6\nSource line number (count of newlines seen so far plus one).\n- shlex.token\u00b6\nThe token buffer. It may be useful to examine this when catching exceptions.\n- shlex.eof\u00b6\nToken used to determine end of file. 
This will be set to the empty string (\n''\n), in non-POSIX mode, and toNone\nin POSIX mode.\n- shlex.punctuation_chars\u00b6\nA read-only property. Characters that will be considered punctuation. Runs of punctuation characters will be returned as a single token. However, note that no semantic validity checking will be performed: for example, \u2018>>>\u2019 could be returned as a token, even though it may not be recognised as such by shells.\nAdded in version 3.6.\nParsing Rules\u00b6\nWhen operating in non-POSIX mode, shlex\nwill try to obey the\nfollowing rules.\nQuote characters are not recognized within words (\nDo\"Not\"Separate\nis parsed as the single wordDo\"Not\"Separate\n);Escape characters are not recognized;\nEnclosing characters in quotes preserve the literal value of all characters within the quotes;\nClosing quotes separate words (\n\"Do\"Separate\nis parsed as\"Do\"\nandSeparate\n);If\nwhitespace_split\nisFalse\n, any character not declared to be a word character, whitespace, or a quote will be returned as a single-character token. If it isTrue\n,shlex\nwill only split words in whitespaces;EOF is signaled with an empty string (\n''\n);It\u2019s not possible to parse empty strings, even if quoted.\nWhen operating in POSIX mode, shlex\nwill try to obey the\nfollowing parsing rules.\nQuotes are stripped out, and do not separate words (\n\"Do\"Not\"Separate\"\nis parsed as the single wordDoNotSeparate\n);Non-quoted escape characters (e.g.\n'\\'\n) preserve the literal value of the next character that follows;Enclosing characters in quotes which are not part of\nescapedquotes\n(e.g.\"'\"\n) preserve the literal value of all characters within the quotes;Enclosing characters in quotes which are part of\nescapedquotes\n(e.g.'\"'\n) preserves the literal value of all characters within the quotes, with the exception of the characters mentioned inescape\n. 
The escape characters retain their special meaning only when followed by the quote in use, or the escape character itself. Otherwise the escape character will be considered a normal character.EOF is signaled with a\nNone\nvalue;Quoted empty strings (\n''\n) are allowed.\nImproved Compatibility with Shells\u00b6\nAdded in version 3.6.\nThe shlex\nclass provides compatibility with the parsing performed by\ncommon Unix shells like bash\n, dash\n, and sh\n. To take advantage of\nthis compatibility, specify the punctuation_chars\nargument in the\nconstructor. This defaults to False\n, which preserves pre-3.6 behaviour.\nHowever, if it is set to True\n, then parsing of the characters ();<>|&\nis changed: any run of these characters is returned as a single token. While\nthis is short of a full parser for shells (which would be out of scope for the\nstandard library, given the multiplicity of shells out there), it does allow\nyou to perform processing of command lines more easily than you could\notherwise. To illustrate, you can see the difference in the following snippet:\n>>> import shlex\n>>> text = \"a && b; c && d || e; f >'abc'; (def \\\"ghi\\\")\"\n>>> s = shlex.shlex(text, posix=True)\n>>> s.whitespace_split = True\n>>> list(s)\n['a', '&&', 'b;', 'c', '&&', 'd', '||', 'e;', 'f', '>abc;', '(def', 'ghi)']\n>>> s = shlex.shlex(text, posix=True, punctuation_chars=True)\n>>> s.whitespace_split = True\n>>> list(s)\n['a', '&&', 'b', ';', 'c', '&&', 'd', '||', 'e', ';', 'f', '>', 'abc', ';',\n'(', 'def', 'ghi', ')']\nOf course, tokens will be returned which are not valid for shells, and you\u2019ll need to implement your own error checks on the returned tokens.\nInstead of passing True\nas the value for the punctuation_chars parameter,\nyou can pass a string with specific characters, which will be used to determine\nwhich characters constitute punctuation. 
For example:\n>>> import shlex\n>>> s = shlex.shlex(\"a && b || c\", punctuation_chars=\"|\")\n>>> list(s)\n['a', '&', '&', 'b', '||', 'c']\nNote\nWhen punctuation_chars\nis specified, the wordchars\nattribute is augmented with the characters ~-./*?=\n. That is because these\ncharacters can appear in file names (including wildcards) and command-line\narguments (e.g. --color=auto\n). Hence:\n>>> import shlex\n>>> s = shlex.shlex('~/a && b-c --color=auto || d *.py?',\n... punctuation_chars=True)\n>>> list(s)\n['~/a', '&&', 'b-c', '--color=auto', '||', 'd', '*.py?']\nHowever, to match the shell as closely as possible, it is recommended to\nalways use posix\nand whitespace_split\nwhen using\npunctuation_chars\n, which will negate\nwordchars\nentirely.\nFor best effect, punctuation_chars\nshould be set in conjunction with\nposix=True\n. (Note that posix=False\nis the default for\nshlex\n.)", "code_snippets": ["\n", " ", " ", " ", "\n", "\n", "\n", "\n", " ", " ", "\n", " ", "\n", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 3728} +{"url": "https://docs.python.org/3/library/tokenize.html", "title": " \u2014 Tokenizer for Python source", "content": "tokenize\n\u2014 Tokenizer for Python source\u00b6\nSource code: Lib/tokenize.py\nThe tokenize\nmodule provides a lexical scanner for Python source code,\nimplemented in Python. The scanner in this module returns comments as tokens\nas well, making it useful for implementing \u201cpretty-printers\u201d, including\ncolorizers for on-screen displays.\nTo simplify token stream handling, all operator and\ndelimiter tokens and Ellipsis\nare returned using\nthe generic OP\ntoken type. The exact\ntype can be determined by checking the exact_type\nproperty on the\nnamed tuple returned from tokenize.tokenize()\n.\nWarning\nNote that the functions in this module are only designed to parse\nsyntactically valid Python code (code that does not raise when parsed\nusing ast.parse()\n). 
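The shlex record above describes quote(), split() and join() separately; a short sketch tying them together (reusing the docs' own hostile-filename example):

```python
import shlex

# quote() escapes a hostile filename, split() recovers the original
# tokens, and join() is documented as the inverse of split().
filename = 'somefile; rm -rf ~'
cmd = 'ls -l {}'.format(shlex.quote(filename))
print(cmd)                        # ls -l 'somefile; rm -rf ~'
tokens = shlex.split(cmd)
print(tokens)                     # ['ls', '-l', 'somefile; rm -rf ~']
print(shlex.join(tokens) == cmd)  # True: join() round-trips split()
```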
The behavior of the functions in this module is\nundefined when providing invalid Python code and it can change at any\npoint.\nTokenizing Input\u00b6\nThe primary entry point is a generator:\n- tokenize.tokenize(readline)\u00b6\nThe\ntokenize()\ngenerator requires one argument, readline, which must be a callable object which provides the same interface as theio.IOBase.readline()\nmethod of file objects. Each call to the function should return one line of input as bytes.The generator produces 5-tuples with these members: the token type; the token string; a 2-tuple\n(srow, scol)\nof ints specifying the row and column where the token begins in the source; a 2-tuple(erow, ecol)\nof ints specifying the row and column where the token ends in the source; and the line on which the token was found. The line passed (the last tuple item) is the physical line. The 5 tuple is returned as a named tuple with the field names:type string start end line\n.The returned named tuple has an additional property named\nexact_type\nthat contains the exact operator type forOP\ntokens. For all other token typesexact_type\nequals the named tupletype\nfield.Changed in version 3.1: Added support for named tuples.\nChanged in version 3.3: Added support for\nexact_type\n.tokenize()\ndetermines the source encoding of the file by looking for a UTF-8 BOM or encoding cookie, according to PEP 263.\n- tokenize.generate_tokens(readline)\u00b6\nTokenize a source reading unicode strings instead of bytes.\nLike\ntokenize()\n, the readline argument is a callable returning a single line of input. However,generate_tokens()\nexpects readline to return a str object rather than bytes.The result is an iterator yielding named tuples, exactly like\ntokenize()\n. It does not yield anENCODING\ntoken.\nAll constants from the token\nmodule are also exported from\ntokenize\n.\nAnother function is provided to reverse the tokenization process. 
This is useful for creating tools that tokenize a script, modify the token stream, and write back the modified script.\n- tokenize.untokenize(iterable)\u00b6\nConverts tokens back into Python source code. The iterable must return sequences with at least two elements, the token type and the token string. Any additional sequence elements are ignored.\nThe result is guaranteed to tokenize back to match the input so that the conversion is lossless and round-trips are assured. The guarantee applies only to the token type and token string as the spacing between tokens (column positions) may change.\nIt returns bytes, encoded using the\nENCODING\ntoken, which is the first token sequence output bytokenize()\n. If there is no encoding token in the input, it returns a str instead.\ntokenize()\nneeds to detect the encoding of source files it tokenizes. The\nfunction it uses to do this is available:\n- tokenize.detect_encoding(readline)\u00b6\nThe\ndetect_encoding()\nfunction is used to detect the encoding that should be used to decode a Python source file. It requires one argument, readline, in the same way as thetokenize()\ngenerator.It will call readline a maximum of twice, and return the encoding used (as a string) and a list of any lines (not decoded from bytes) it has read in.\nIt detects the encoding from the presence of a UTF-8 BOM or an encoding cookie as specified in PEP 263. If both a BOM and a cookie are present, but disagree, a\nSyntaxError\nwill be raised. 
Note that if the BOM is found,'utf-8-sig'\nwill be returned as an encoding.If no encoding is specified, then the default of\n'utf-8'\nwill be returned.Use\nopen()\nto open Python source files: it usesdetect_encoding()\nto detect the file encoding.\n- tokenize.open(filename)\u00b6\nOpen a file in read only mode using the encoding detected by\ndetect_encoding()\n.Added in version 3.2.\n- exception tokenize.TokenError\u00b6\nRaised when either a docstring or expression that may be split over several lines is not completed anywhere in the file, for example:\n\"\"\"Beginning of docstring\nor:\n[1, 2, 3\nCommand-Line Usage\u00b6\nAdded in version 3.3.\nThe tokenize\nmodule can be executed as a script from the command line.\nIt is as simple as:\npython -m tokenize [-e] [filename.py]\nThe following options are accepted:\n- -h, --help\u00b6\nshow this help message and exit\n- -e, --exact\u00b6\ndisplay token names using the exact type\nIf filename.py\nis specified its contents are tokenized to stdout.\nOtherwise, tokenization is performed on stdin.\nExamples\u00b6\nExample of a script rewriter that transforms float literals into Decimal objects:\nfrom tokenize import tokenize, untokenize, NUMBER, STRING, NAME, OP\nfrom io import BytesIO\ndef decistmt(s):\n\"\"\"Substitute Decimals for floats in a string of statements.\n>>> from decimal import Decimal\n>>> s = 'print(+21.3e-5*-.1234/81.7)'\n>>> decistmt(s)\n\"print (+Decimal ('21.3e-5')*-Decimal ('.1234')/Decimal ('81.7'))\"\nThe format of the exponent is inherited from the platform C library.\nKnown cases are \"e-007\" (Windows) and \"e-07\" (not Windows). 
Since\nwe're only showing 12 digits, and the 13th isn't close to 5, the\nrest of the output should be platform-independent.\n>>> exec(s) #doctest: +ELLIPSIS\n-3.21716034272e-0...7\nOutput from calculations with Decimal should be identical across all\nplatforms.\n>>> exec(decistmt(s))\n-3.217160342717258261933904529E-7\n\"\"\"\nresult = []\ng = tokenize(BytesIO(s.encode('utf-8')).readline) # tokenize the string\nfor toknum, tokval, _, _, _ in g:\nif toknum == NUMBER and '.' in tokval: # replace NUMBER tokens\nresult.extend([\n(NAME, 'Decimal'),\n(OP, '('),\n(STRING, repr(tokval)),\n(OP, ')')\n])\nelse:\nresult.append((toknum, tokval))\nreturn untokenize(result).decode('utf-8')\nExample of tokenizing from the command line. The script:\ndef say_hello():\nprint(\"Hello, World!\")\nsay_hello()\nwill be tokenized to the following output where the first column is the range of the line/column coordinates where the token is found, the second column is the name of the token, and the final column is the value of the token (if any)\n$ python -m tokenize hello.py\n0,0-0,0: ENCODING 'utf-8'\n1,0-1,3: NAME 'def'\n1,4-1,13: NAME 'say_hello'\n1,13-1,14: OP '('\n1,14-1,15: OP ')'\n1,15-1,16: OP ':'\n1,16-1,17: NEWLINE '\\n'\n2,0-2,4: INDENT ' '\n2,4-2,9: NAME 'print'\n2,9-2,10: OP '('\n2,10-2,25: STRING '\"Hello, World!\"'\n2,25-2,26: OP ')'\n2,26-2,27: NEWLINE '\\n'\n3,0-3,1: NL '\\n'\n4,0-4,0: DEDENT ''\n4,0-4,9: NAME 'say_hello'\n4,9-4,10: OP '('\n4,10-4,11: OP ')'\n4,11-4,12: NEWLINE '\\n'\n5,0-5,0: ENDMARKER ''\nThe exact token type names can be displayed using the -e\noption:\n$ python -m tokenize -e hello.py\n0,0-0,0: ENCODING 'utf-8'\n1,0-1,3: NAME 'def'\n1,4-1,13: NAME 'say_hello'\n1,13-1,14: LPAR '('\n1,14-1,15: RPAR ')'\n1,15-1,16: COLON ':'\n1,16-1,17: NEWLINE '\\n'\n2,0-2,4: INDENT ' '\n2,4-2,9: NAME 'print'\n2,9-2,10: LPAR '('\n2,10-2,25: STRING '\"Hello, World!\"'\n2,25-2,26: RPAR ')'\n2,26-2,27: NEWLINE '\\n'\n3,0-3,1: NL '\\n'\n4,0-4,0: DEDENT ''\n4,0-4,9: NAME 
'say_hello'\n4,9-4,10: LPAR '('\n4,10-4,11: RPAR ')'\n4,11-4,12: NEWLINE '\\n'\n5,0-5,0: ENDMARKER ''\nExample of tokenizing a file programmatically, reading unicode\nstrings instead of bytes with generate_tokens()\n:\nimport tokenize\nwith tokenize.open('hello.py') as f:\ntokens = tokenize.generate_tokens(f.readline)\nfor token in tokens:\nprint(token)\nOr reading bytes directly with tokenize()\n:\nimport tokenize\nwith open('hello.py', 'rb') as f:\ntokens = tokenize.tokenize(f.readline)\nfor token in tokens:\nprint(token)", "code_snippets": ["\n", "\n", "\n ", "\n ", "\n", " ", " ", " ", " ", " ", " ", "\n", " ", "\n\n", "\n", "\n\n", "\n", "\n", "\n", "\n\n", "\n", "\n", "\n", "\n\n", "\n", "\n\n", "\n", "\n\n", "\n", "\n", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", " ", " ", " ", "\n ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", "\n ", "\n ", " ", "\n ", " ", "\n", "\n ", "\n\n", "\n", "\n\n", " ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", "\n", "\n\n", " ", " ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", "\n"], "language": "Python", "source": "python.org", "token_count": 2007} +{"url": "https://docs.python.org/3/library/keyword.html", "title": " \u2014 Testing for Python keywords", "content": "keyword\n\u2014 Testing for Python keywords\u00b6\nSource code: Lib/keyword.py\nThis module allows a Python program to determine if a string is a keyword or soft keyword.\n- keyword.kwlist\u00b6\nSequence containing all the keywords defined for the interpreter. If any keywords are defined to only be active when particular\n__future__\nstatements are in effect, these will be included as well.\n- keyword.issoftkeyword(s)\u00b6\nReturn\nTrue\nif s is a Python soft keyword.Added in version 3.9.\n- keyword.softkwlist\u00b6\nSequence containing all the soft keywords defined for the interpreter. 
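The tokenize record above shows only file-based examples; a small sketch (relying only on the documented generate_tokens() and exact_type behavior) of tokenizing source held in a string, using io.StringIO to supply the readline callable and mapping generic OP tokens to their exact types:

```python
import io
import token
import tokenize

# generate_tokens() wants a readline callable returning str lines,
# so a StringIO wrapper lets us tokenize an in-memory string.
source = "x = (1 + 2) * 3\n"
for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    if tok.type == token.OP:
        # exact_type distinguishes the operators folded into OP
        print(tok.string, token.tok_name[tok.exact_type])
```

This prints the operator characters alongside EQUAL, LPAR, PLUS, RPAR and STAR, mirroring what `python -m tokenize -e` shows for a file.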
If any soft keywords are defined to only be active when particular\n__future__\nstatements are in effect, these will be included as well. Added in version 3.9.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 178}
While write()\nonly\nneeds read-only access to the internal contents of the object passed to it,\nother methods such as readinto()\nneed write access\nto the contents of their argument. The buffer interface allows objects to\nselectively allow or reject exporting of read-write and read-only buffers.\nThere are two ways for a consumer of the buffer interface to acquire a buffer over a target object:\ncall\nPyObject_GetBuffer()\nwith the right parameters;call\nPyArg_ParseTuple()\n(or one of its siblings) with one of they*\n,w*\nors*\nformat codes.\nIn both cases, PyBuffer_Release()\nmust be called when the buffer\nisn\u2019t needed anymore. Failure to do so could lead to various issues such as\nresource leaks.\nAdded in version 3.12: The buffer protocol is now accessible in Python, see\nEmulating buffer types and memoryview\n.\nBuffer structure\u00b6\nBuffer structures (or simply \u201cbuffers\u201d) are useful as a way to expose the binary data from another object to the Python programmer. They can also be used as a zero-copy slicing mechanism. Using their ability to reference a block of memory, it is possible to expose any data to the Python programmer quite easily. The memory could be a large, constant array in a C extension, it could be a raw block of memory for manipulation before passing to an operating system library, or it could be used to pass around structured data in its native, in-memory format.\nContrary to most data types exposed by the Python interpreter, buffers\nare not PyObject\npointers but rather simple C structures. This\nallows them to be created and copied very simply. When a generic wrapper\naround a buffer is needed, a memoryview object\ncan be created.\nFor short instructions how to write an exporting object, see\nBuffer Object Structures. 
For obtaining\na buffer, see PyObject_GetBuffer()\n.\n-\ntype Py_buffer\u00b6\n- Part of the Stable ABI (including all members) since version 3.11.\n-\nvoid *buf\u00b6\nA pointer to the start of the logical structure described by the buffer fields. This can be any location within the underlying physical memory block of the exporter. For example, with negative\nstrides\nthe value may point to the end of the memory block.For contiguous arrays, the value points to the beginning of the memory block.\n-\nPyObject *obj\u00b6\nA new reference to the exporting object. The reference is owned by the consumer and automatically released (i.e. reference count decremented) and set to\nNULL\nbyPyBuffer_Release()\n. The field is the equivalent of the return value of any standard C-API function.As a special case, for temporary buffers that are wrapped by\nPyMemoryView_FromBuffer()\norPyBuffer_FillInfo()\nthis field isNULL\n. In general, exporting objects MUST NOT use this scheme.\n-\nPy_ssize_t len\u00b6\nproduct(shape) * itemsize\n. For contiguous arrays, this is the length of the underlying memory block. For non-contiguous arrays, it is the length that the logical structure would have if it were copied to a contiguous representation.Accessing\n((char *)buf)[0] up to ((char *)buf)[len-1]\nis only valid if the buffer has been obtained by a request that guarantees contiguity. In most cases such a request will bePyBUF_SIMPLE\norPyBUF_WRITABLE\n.\n-\nint readonly\u00b6\nAn indicator of whether the buffer is read-only. This field is controlled by the\nPyBUF_WRITABLE\nflag.\n-\nPy_ssize_t itemsize\u00b6\nItem size in bytes of a single element. 
Same as the value of\nstruct.calcsize()\ncalled on non-NULL\nformat\nvalues.\nImportant exception: If a consumer requests a buffer without the\nPyBUF_FORMAT\nflag,\nformat\nwill be set to\nNULL\n, but\nitemsize\nstill has the value for the original format.\nIf\nshape\nis present, the equality\nproduct(shape) * itemsize == len\nstill holds and the consumer can use\nitemsize\nto navigate the buffer.\nIf\nshape\nis\nNULL\nas a result of a\nPyBUF_SIMPLE\nor a\nPyBUF_WRITABLE\nrequest, the consumer must disregard\nitemsize\nand assume\nitemsize == 1\n.\n-\nchar *format\u00b6\nA NULL-terminated string in\nstruct\nmodule style syntax describing the contents of a single item. If this is\nNULL\n,\n\"B\"\n(unsigned bytes) is assumed.\nThis field is controlled by the\nPyBUF_FORMAT\nflag.\n-\nint ndim\u00b6\nThe number of dimensions the memory represents as an n-dimensional array. If it is\n0\n,\nbuf\npoints to a single item representing a scalar. In this case,\nshape\n,\nstrides\nand\nsuboffsets\nMUST be\nNULL\n. The maximum number of dimensions is given by\nPyBUF_MAX_NDIM\n.\n-\nPy_ssize_t *shape\u00b6\nAn array of\nPy_ssize_t\nof length\nndim\nindicating the shape of the memory as an n-dimensional array. Note that\nshape[0] * ... * shape[ndim-1] * itemsize\nMUST be equal to\nlen\n.\nShape values are restricted to\nshape[n] >= 0\n. The case\nshape[n] == 0\nrequires special attention. See complex arrays for further information.\nThe shape array is read-only for the consumer.\n-\nPy_ssize_t *strides\u00b6\nAn array of\nPy_ssize_t\nof length\nndim\ngiving the number of bytes to skip to get to a new element in each dimension.\nStride values can be any integer. For regular arrays, strides are usually positive, but a consumer MUST be able to handle the case\nstrides[n] <= 0\n. See complex arrays for further information.\nThe strides array is read-only for the consumer.\n-\nPy_ssize_t *suboffsets\u00b6\nAn array of\nPy_ssize_t\nof length\nndim\n.
If\nsuboffsets[n] >= 0\n, the values stored along the nth dimension are pointers and the suboffset value dictates how many bytes to add to each pointer after de-referencing. A suboffset value that is negative indicates that no de-referencing should occur (striding in a contiguous memory block).\nIf all suboffsets are negative (i.e. no de-referencing is needed), then this field must be\nNULL\n(the default value).\nThis type of array representation is used by the Python Imaging Library (PIL). See complex arrays for further information on how to access elements of such an array.\nThe suboffsets array is read-only for the consumer.\n-\nvoid *internal\u00b6\nThis is for use internally by the exporting object. For example, this might be re-cast as an integer by the exporter and used to store flags about whether or not the shape, strides, and suboffsets arrays must be freed when the buffer is released. The consumer MUST NOT alter this value.\nConstants:\n-\nPyBUF_MAX_NDIM\u00b6\n- Part of the Stable ABI since version 3.11.\nThe maximum number of dimensions the memory represents. Exporters MUST respect this limit; consumers of multi-dimensional buffers SHOULD be able to handle up to\nPyBUF_MAX_NDIM\ndimensions. Currently set to 64.\nBuffer request types\u00b6\nBuffers are usually obtained by sending a buffer request to an exporting\nobject via PyObject_GetBuffer()\n. Since the complexity of the logical\nstructure of the memory can vary drastically, the consumer uses the flags\nargument to specify the exact buffer type it can handle.\nAll Py_buffer\nfields are unambiguously defined by the request\ntype.\nrequest-independent fields\u00b6\nThe following fields are not influenced by flags and must always be filled in\nwith the correct values:\nobj\n,\nbuf\n,\nlen\n,\nitemsize\n,\nndim\n.\nreadonly, format\u00b6\n- PyBUF_WRITABLE\u00b6\n- Part of the Stable ABI since version 3.11.\nControls the\nreadonly\nfield.
If set, the exporter MUST provide a writable buffer or else report failure. Otherwise, the exporter MAY provide either a read-only or writable buffer, but the choice MUST be consistent for all consumers. For example, PyBUF_SIMPLE | PyBUF_WRITABLE can be used to request a simple writable buffer.\n- PyBUF_WRITEABLE\u00b6\nThis is a soft-deprecated alias of\nPyBUF_WRITABLE\n.\n- PyBUF_FORMAT\u00b6\n- Part of the Stable ABI since version 3.11.\nControls the\nformat\nfield. If set, this field MUST be filled in correctly. Otherwise, this field MUST be\nNULL\n.\nPyBUF_WRITABLE\ncan be |\u2019d to any of the flags in the next section.\nSince PyBUF_SIMPLE\nis defined as 0, PyBUF_WRITABLE\ncan be used as a stand-alone flag to request a simple writable buffer.\nPyBUF_FORMAT\nmust be |\u2019d to any of the flags except PyBUF_SIMPLE\n, because\nthe latter already implies format B\n(unsigned bytes). PyBUF_FORMAT\ncannot be\nused on its own.\nshape, strides, suboffsets\u00b6\nThe flags that control the logical structure of the memory are listed in decreasing order of complexity. Note that each flag contains all bits of the flags below it.\nRequest |\nshape |\nstrides |\nsuboffsets |\n|---|---|---|---|\n| PyBUF_INDIRECT |\nyes |\nyes |\nif needed |\n| PyBUF_STRIDES |\nyes |\nyes |\nNULL |\n| PyBUF_ND |\nyes |\nNULL |\nNULL |\n| PyBUF_SIMPLE |\nNULL |\nNULL |\nNULL |\ncontiguity requests\u00b6\nC or Fortran contiguity can be explicitly requested, with and without stride information. Without stride information, the buffer must be C-contiguous.\nRequest |\nshape |\nstrides |\nsuboffsets |\ncontig |\n|---|---|---|---|---|\n| PyBUF_C_CONTIGUOUS |\nyes |\nyes |\nNULL |\nC |\n| PyBUF_F_CONTIGUOUS |\nyes |\nyes |\nNULL |\nF |\n| PyBUF_ANY_CONTIGUOUS |\nyes |\nyes |\nNULL |\nC or F |\n| PyBUF_ND |\nyes |\nNULL |\nNULL |\nC |\ncompound requests\u00b6\nAll possible requests are fully defined by some combination of the flags in the previous section. For convenience, the buffer protocol provides frequently used combinations as single flags.\nIn the following table U stands for undefined contiguity.
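The effect of the readonly field is observable from Python: a bytes object only ever grants read-only exports, while a bytearray can satisfy writable requests. A small sketch:

```python
# readonly reflects whether the exporter granted a writable buffer.
ro = memoryview(b"abc")              # bytes: always a read-only export
rw = memoryview(bytearray(b"abc"))   # bytearray: writable export

assert ro.readonly is True
assert rw.readonly is False

try:
    ro[0] = 0                        # writing through a read-only view fails
except TypeError:
    pass

rw[0] = ord("A")                     # writable view: modifies the bytearray
assert rw.obj == bytearray(b"Abc")
```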
The consumer would\nhave to call PyBuffer_IsContiguous()\nto determine contiguity.\nRequest |\nshape |\nstrides |\nsuboffsets |\ncontig |\nreadonly |\nformat |\n|---|---|---|---|---|---|---|\n| PyBUF_FULL |\nyes |\nyes |\nif needed |\nU |\n0 |\nyes |\n| PyBUF_FULL_RO |\nyes |\nyes |\nif needed |\nU |\n1 or 0 |\nyes |\n| PyBUF_RECORDS |\nyes |\nyes |\nNULL |\nU |\n0 |\nyes |\n| PyBUF_RECORDS_RO |\nyes |\nyes |\nNULL |\nU |\n1 or 0 |\nyes |\n| PyBUF_STRIDED |\nyes |\nyes |\nNULL |\nU |\n0 |\nNULL |\n| PyBUF_STRIDED_RO |\nyes |\nyes |\nNULL |\nU |\n1 or 0 |\nNULL |\n| PyBUF_CONTIG |\nyes |\nNULL |\nNULL |\nC |\n0 |\nNULL |\n| PyBUF_CONTIG_RO |\nyes |\nNULL |\nNULL |\nC |\n1 or 0 |\nNULL |\nComplex arrays\u00b6\nNumPy-style: shape and strides\u00b6\nThe logical structure of NumPy-style arrays is defined by itemsize\n,\nndim\n, shape\nand strides\n.\nIf ndim == 0\n, the memory location pointed to by buf\nis\ninterpreted as a scalar of size itemsize\n. In that case,\nboth shape\nand strides\nare NULL\n.\nIf strides\nis NULL\n, the array is interpreted as\na standard n-dimensional C-array. Otherwise, the consumer must access an\nn-dimensional array as follows:\nptr = (char *)buf + indices[0] * strides[0] + ... + indices[n-1] * strides[n-1];\nitem = *((typeof(item) *)ptr);\nAs noted above, buf\ncan point to any location within\nthe actual memory block.
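The itemsize/ndim/shape/strides quartet is exposed on memoryview, so the strided-address formula above can be reproduced in Python. A sketch with a C-contiguous 2x3 array of native ints:

```python
import struct

# A 2x3 C-contiguous array of native ints, viewed through the buffer protocol.
buf = bytearray(struct.pack("6i", 0, 1, 2, 3, 4, 5))
m = memoryview(buf).cast("i", shape=[2, 3])

assert m.ndim == 2
assert m.shape == (2, 3)
assert m.itemsize == struct.calcsize("i")
assert m.strides == (3 * m.itemsize, m.itemsize)   # row stride, column stride
assert m.c_contiguous

# The access formula from the text, computed by hand:
# ptr = buf + indices[0]*strides[0] + indices[1]*strides[1]
i, j = 1, 2
offset = i * m.strides[0] + j * m.strides[1]
item = struct.unpack_from("i", buf, offset)[0]
assert item == m[i, j] == 5
```

Note that product(shape) * itemsize equals len here, as the field descriptions above require.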
An exporter can check the validity of a buffer with\nthis function:\ndef verify_structure(memlen, itemsize, ndim, shape, strides, offset):\n\"\"\"Verify that the parameters represent a valid array within\nthe bounds of the allocated memory:\nchar *mem: start of the physical memory block\nmemlen: length of the physical memory block\noffset: (char *)buf - mem\n\"\"\"\nif offset % itemsize:\nreturn False\nif offset < 0 or offset+itemsize > memlen:\nreturn False\nif any(v % itemsize for v in strides):\nreturn False\nif ndim <= 0:\nreturn ndim == 0 and not shape and not strides\nif 0 in shape:\nreturn True\nimin = sum(strides[j]*(shape[j]-1) for j in range(ndim)\nif strides[j] <= 0)\nimax = sum(strides[j]*(shape[j]-1) for j in range(ndim)\nif strides[j] > 0)\nreturn 0 <= offset+imin and offset+imax+itemsize <= memlen\nPIL-style: shape, strides and suboffsets\u00b6\nIn addition to the regular items, PIL-style arrays can contain pointers\nthat must be followed in order to get to the next element in a dimension.\nFor example, the regular three-dimensional C-array char v[2][2][3]\ncan\nalso be viewed as an array of 2 pointers to 2 two-dimensional arrays:\nchar (*v[2])[2][3]\n. 
In suboffsets representation, those two pointers\ncan be embedded at the start of buf\n, pointing\nto two char x[2][3]\narrays that can be located anywhere in memory.\nHere is a function that returns a pointer to the element in an N-D array\npointed to by an N-dimensional index when there are both non-NULL\nstrides\nand suboffsets:\nvoid *get_item_pointer(int ndim, void *buf, Py_ssize_t *strides,\nPy_ssize_t *suboffsets, Py_ssize_t *indices) {\nchar *pointer = (char*)buf;\nint i;\nfor (i = 0; i < ndim; i++) {\npointer += strides[i] * indices[i];\nif (suboffsets[i] >=0 ) {\npointer = *((char**)pointer) + suboffsets[i];\n}\n}\nreturn (void*)pointer;\n}", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3319} +{"url": "https://docs.python.org/3/library/email.message.html", "title": ": Representing an email message", "content": "email.message\n: Representing an email message\u00b6\nSource code: Lib/email/message.py\nAdded in version 3.6: [1]\nThe central class in the email\npackage is the EmailMessage\nclass, imported from the email.message\nmodule. It is the base class for\nthe email\nobject model. EmailMessage\nprovides the core\nfunctionality for setting and querying header fields, for accessing message\nbodies, and for creating or modifying structured messages.\nAn email message consists of headers and a payload (which is also referred to as the content). Headers are RFC 5322 or RFC 6532 style field names and values, where the field name and value are separated by a colon. The colon is not part of either the field name or the field value. The payload may be a simple text message, or a binary object, or a structured sequence of sub-messages each with their own set of headers and their own payload. 
The latter type of payload is indicated by the message having a MIME type such as multipart/* or message/rfc822.\nThe conceptual model provided by an EmailMessage\nobject is that of an\nordered dictionary of headers coupled with a payload that represents the\nRFC 5322 body of the message, which might be a list of sub-EmailMessage\nobjects. In addition to the normal dictionary methods for accessing the header\nnames and values, there are methods for accessing specialized information from\nthe headers (for example the MIME content type), for operating on the payload,\nfor generating a serialized version of the message, and for recursively walking\nover the object tree.\nThe EmailMessage\ndictionary-like interface is indexed by the header\nnames, which must be ASCII values. The values of the dictionary are strings\nwith some extra methods. Headers are stored and returned in case-preserving\nform, but field names are matched case-insensitively. The keys are ordered,\nbut unlike a real dict, there can be duplicates. Additional methods are\nprovided for working with headers that have duplicate keys.\nThe payload is either a string or bytes object, in the case of simple message\nobjects, or a list of EmailMessage\nobjects, for MIME container\ndocuments such as multipart/* and message/rfc822\nmessage objects.\n- class email.message.EmailMessage(policy=default)\u00b6\nIf policy is specified use the rules it specifies to update and serialize the representation of the message. If policy is not set, use the\ndefault\npolicy, which follows the rules of the email RFCs except for line endings (instead of the RFC mandated\\r\\n\n, it uses the Python standard\\n\nline endings). For more information see thepolicy\ndocumentation. [2]- as_string(unixfrom=False, maxheaderlen=None, policy=None)\u00b6\nReturn the entire message flattened as a string. When optional unixfrom is true, the envelope header is included in the returned string. unixfrom defaults to\nFalse\n. 
For backward compatibility with the baseMessage\nclass maxheaderlen is accepted, but defaults toNone\n, which means that by default the line length is controlled by themax_line_length\nof the policy. The policy argument may be used to override the default policy obtained from the message instance. This can be used to control some of the formatting produced by the method, since the specified policy will be passed to theGenerator\n.Flattening the message may trigger changes to the\nEmailMessage\nif defaults need to be filled in to complete the transformation to a string (for example, MIME boundaries may be generated or modified).Note that this method is provided as a convenience and may not be the most useful way to serialize messages in your application, especially if you are dealing with multiple messages. See\nemail.generator.Generator\nfor a more flexible API for serializing messages. Note also that this method is restricted to producing messages serialized as \u201c7 bit clean\u201d whenutf8\nisFalse\n, which is the default.Changed in version 3.6: the default behavior when maxheaderlen is not specified was changed from defaulting to 0 to defaulting to the value of max_line_length from the policy.\n- __str__()\u00b6\nEquivalent to\nas_string(policy=self.policy.clone(utf8=True))\n. Allowsstr(msg)\nto produce a string containing the serialized message in a readable format.Changed in version 3.4: the method was changed to use\nutf8=True\n, thus producing an RFC 6531-like message representation, instead of being a direct alias foras_string()\n.\n- as_bytes(unixfrom=False, policy=None)\u00b6\nReturn the entire message flattened as a bytes object. When optional unixfrom is true, the envelope header is included in the returned string. unixfrom defaults to\nFalse\n. The policy argument may be used to override the default policy obtained from the message instance. 
This can be used to control some of the formatting produced by the method, since the specified policy will be passed to theBytesGenerator\n.Flattening the message may trigger changes to the\nEmailMessage\nif defaults need to be filled in to complete the transformation to a string (for example, MIME boundaries may be generated or modified).Note that this method is provided as a convenience and may not be the most useful way to serialize messages in your application, especially if you are dealing with multiple messages. See\nemail.generator.BytesGenerator\nfor a more flexible API for serializing messages.\n- __bytes__()\u00b6\nEquivalent to\nas_bytes()\n. Allowsbytes(msg)\nto produce a bytes object containing the serialized message.\n- is_multipart()\u00b6\nReturn\nTrue\nif the message\u2019s payload is a list of sub-EmailMessage\nobjects, otherwise returnFalse\n. Whenis_multipart()\nreturnsFalse\n, the payload should be a string object (which might be a CTE encoded binary payload). Note thatis_multipart()\nreturningTrue\ndoes not necessarily mean that \u201cmsg.get_content_maintype() == \u2018multipart\u2019\u201d will return theTrue\n. For example,is_multipart\nwill returnTrue\nwhen theEmailMessage\nis of typemessage/rfc822\n.\n- set_unixfrom(unixfrom)\u00b6\nSet the message\u2019s envelope header to unixfrom, which should be a string. (See\nmboxMessage\nfor a brief description of this header.)\n- get_unixfrom()\u00b6\nReturn the message\u2019s envelope header. Defaults to\nNone\nif the envelope header was never set.\nThe following methods implement the mapping-like interface for accessing the message\u2019s headers. Note that there are some semantic differences between these methods and a normal mapping (i.e. dictionary) interface. For example, in a dictionary there are no duplicate keys, but here there may be duplicate message headers. 
Also, in dictionaries there is no guaranteed order to the keys returned by\nkeys()\n, but in anEmailMessage\nobject, headers are always returned in the order they appeared in the original message, or in which they were added to the message later. Any header deleted and then re-added is always appended to the end of the header list.These semantic differences are intentional and are biased toward convenience in the most common use cases.\nNote that in all cases, any envelope header present in the message is not included in the mapping interface.\n- __len__()\u00b6\nReturn the total number of headers, including duplicates.\n- __contains__(name)\u00b6\nReturn\nTrue\nif the message object has a field named name. Matching is done without regard to case and name does not include the trailing colon. Used for thein\noperator. For example:if 'message-id' in myMessage: print('Message-ID:', myMessage['message-id'])\n- __getitem__(name)\u00b6\nReturn the value of the named header field. name does not include the colon field separator. If the header is missing,\nNone\nis returned; aKeyError\nis never raised.Note that if the named field appears more than once in the message\u2019s headers, exactly which of those field values will be returned is undefined. Use the\nget_all()\nmethod to get the values of all the extant headers named name.Using the standard (non-\ncompat32\n) policies, the returned value is an instance of a subclass ofemail.headerregistry.BaseHeader\n.\n- __setitem__(name, val)\u00b6\nAdd a header to the message with field name name and value val. The field is appended to the end of the message\u2019s existing headers.\nNote that this does not overwrite or delete any existing header with the same name. 
If you want to ensure that the new header is the only one present in the message with field name name, delete the field first, e.g.:\ndel msg['subject'] msg['subject'] = 'Python roolz!'\nIf the\npolicy\ndefines certain headers to be unique (as the standard policies do), this method may raise aValueError\nwhen an attempt is made to assign a value to such a header when one already exists. This behavior is intentional for consistency\u2019s sake, but do not depend on it as we may choose to make such assignments do an automatic deletion of the existing header in the future.\n- __delitem__(name)\u00b6\nDelete all occurrences of the field with name name from the message\u2019s headers. No exception is raised if the named field isn\u2019t present in the headers.\n- keys()\u00b6\nReturn a list of all the message\u2019s header field names.\n- values()\u00b6\nReturn a list of all the message\u2019s field values.\n- items()\u00b6\nReturn a list of 2-tuples containing all the message\u2019s field headers and values.\n- get(name, failobj=None)\u00b6\nReturn the value of the named header field. This is identical to\n__getitem__()\nexcept that optional failobj is returned if the named header is missing (failobj defaults toNone\n).\nHere are some additional useful header related methods:\n- get_all(name, failobj=None)\u00b6\nReturn a list of all the values for the field named name. If there are no such named headers in the message, failobj is returned (defaults to\nNone\n).\n- add_header(_name, _value, **_params)\u00b6\nExtended header setting. This method is similar to\n__setitem__()\nexcept that additional header parameters can be provided as keyword arguments. _name is the header field to add and _value is the primary value for the header.For each item in the keyword argument dictionary _params, the key is taken as the parameter name, with underscores converted to dashes (since dashes are illegal in Python identifiers). 
Normally, the parameter will be added as\nkey=\"value\"\nunless the value isNone\n, in which case only the key will be added.If the value contains non-ASCII characters, the charset and language may be explicitly controlled by specifying the value as a three tuple in the format\n(CHARSET, LANGUAGE, VALUE)\n, whereCHARSET\nis a string naming the charset to be used to encode the value,LANGUAGE\ncan usually be set toNone\nor the empty string (see RFC 2231 for other possibilities), andVALUE\nis the string value containing non-ASCII code points. If a three tuple is not passed and the value contains non-ASCII characters, it is automatically encoded in RFC 2231 format using aCHARSET\nofutf-8\nand aLANGUAGE\nofNone\n.Here is an example:\nmsg.add_header('Content-Disposition', 'attachment', filename='bud.gif')\nThis will add a header that looks like\nContent-Disposition: attachment; filename=\"bud.gif\"\nAn example of the extended interface with non-ASCII characters:\nmsg.add_header('Content-Disposition', 'attachment', filename=('iso-8859-1', '', 'Fu\u00dfballer.ppt'))\n- replace_header(_name, _value)\u00b6\nReplace a header. Replace the first header found in the message that matches _name, retaining header order and field name case of the original header. If no matching header is found, raise a\nKeyError\n.\n- get_content_type()\u00b6\nReturn the message\u2019s content type, coerced to lower case of the form maintype/subtype. If there is no Content-Type header in the message return the value returned by\nget_default_type()\n. If the Content-Type header is invalid, returntext/plain\n.(According to RFC 2045, messages always have a default type,\nget_content_type()\nwill always return a value. RFC 2045 defines a message\u2019s default type to be text/plain unless it appears inside a multipart/digest container, in which case it would be message/rfc822. 
If the Content-Type header has an invalid type specification, RFC 2045 mandates that the default type be text/plain.)\n- get_content_maintype()\u00b6\nReturn the message\u2019s main content type. This is the maintype part of the string returned by\nget_content_type()\n.\n- get_content_subtype()\u00b6\nReturn the message\u2019s sub-content type. This is the subtype part of the string returned by\nget_content_type()\n.\n- get_default_type()\u00b6\nReturn the default content type. Most messages have a default content type of text/plain, except for messages that are subparts of multipart/digest containers. Such subparts have a default content type of message/rfc822.\n- set_default_type(ctype)\u00b6\nSet the default content type. ctype should either be text/plain or message/rfc822, although this is not enforced. The default content type is not stored in the Content-Type header, so it only affects the return value of the\nget_content_type\nmethods when no Content-Type header is present in the message.\n- set_param(param, value, header='Content-Type', requote=True, charset=None, language='', replace=False)\u00b6\nSet a parameter in the Content-Type header. If the parameter already exists in the header, replace its value with value. When header is\nContent-Type\n(the default) and the header does not yet exist in the message, add it, set its value to text/plain, and append the new parameter value. Optional header specifies an alternative header to Content-Type.If the value contains non-ASCII characters, the charset and language may be explicitly specified using the optional charset and language parameters. Optional language specifies the RFC 2231 language, defaulting to the empty string. Both charset and language should be strings. The default is to use the\nutf8\ncharset andNone\nfor the language.If replace is\nFalse\n(the default) the header is moved to the end of the list of headers. 
If replace isTrue\n, the header will be updated in place.Use of the requote parameter with\nEmailMessage\nobjects is deprecated.Note that existing parameter values of headers may be accessed through the\nparams\nattribute of the header value (for example,msg['Content-Type'].params['charset']\n).Changed in version 3.4:\nreplace\nkeyword was added.\n- del_param(param, header='content-type', requote=True)\u00b6\nRemove the given parameter completely from the Content-Type header. The header will be re-written in place without the parameter or its value. Optional header specifies an alternative to Content-Type.\nUse of the requote parameter with\nEmailMessage\nobjects is deprecated.\n- get_filename(failobj=None)\u00b6\nReturn the value of the\nfilename\nparameter of the Content-Disposition header of the message. If the header does not have afilename\nparameter, this method falls back to looking for thename\nparameter on the Content-Type header. If neither is found, or the header is missing, then failobj is returned. The returned string will always be unquoted as peremail.utils.unquote()\n.\n- get_boundary(failobj=None)\u00b6\nReturn the value of the\nboundary\nparameter of the Content-Type header of the message, or failobj if either the header is missing, or has noboundary\nparameter. The returned string will always be unquoted as peremail.utils.unquote()\n.\n- set_boundary(boundary)\u00b6\nSet the\nboundary\nparameter of the Content-Type header to boundary.set_boundary()\nwill always quote boundary if necessary. AHeaderParseError\nis raised if the message object has no Content-Type header.Note that using this method is subtly different from deleting the old Content-Type header and adding a new one with the new boundary via\nadd_header()\n, becauseset_boundary()\npreserves the order of the Content-Type header in the list of headers.\n- get_content_charset(failobj=None)\u00b6\nReturn the\ncharset\nparameter of the Content-Type header, coerced to lower case. 
If there is no Content-Type header, or if that header has nocharset\nparameter, failobj is returned.\n- get_charsets(failobj=None)\u00b6\nReturn a list containing the character set names in the message. If the message is a multipart, then the list will contain one element for each subpart in the payload, otherwise, it will be a list of length 1.\nEach item in the list will be a string which is the value of the\ncharset\nparameter in the Content-Type header for the represented subpart. If the subpart has no Content-Type header, nocharset\nparameter, or is not of the text main MIME type, then that item in the returned list will be failobj.\n- is_attachment()\u00b6\nReturn\nTrue\nif there is a Content-Disposition header and its (case insensitive) value isattachment\n,False\notherwise.Changed in version 3.4.2: is_attachment is now a method instead of a property, for consistency with\nis_multipart()\n.\n- get_content_disposition()\u00b6\nReturn the lowercased value (without parameters) of the message\u2019s Content-Disposition header if it has one, or\nNone\n. The possible values for this method are inline, attachment orNone\nif the message follows RFC 2183.Added in version 3.5.\nThe following methods relate to interrogating and manipulating the content (payload) of the message.\n- walk()\u00b6\nThe\nwalk()\nmethod is an all-purpose generator which can be used to iterate over all the parts and subparts of a message object tree, in depth-first traversal order. You will typically usewalk()\nas the iterator in afor\nloop; each iteration returns the next subpart.Here\u2019s an example that prints the MIME type of every part of a multipart message structure:\n>>> for part in msg.walk(): ... print(part.get_content_type()) multipart/report text/plain message/delivery-status text/plain text/plain message/rfc822 text/plain\nwalk\niterates over the subparts of any part whereis_multipart()\nreturnsTrue\n, even thoughmsg.get_content_maintype() == 'multipart'\nmay returnFalse\n. 
We can see this in our example by making use of the_structure\ndebug helper function:>>> from email.iterators import _structure >>> for part in msg.walk(): ... print(part.get_content_maintype() == 'multipart', ... part.is_multipart()) True True False False False True False False False False False True False False >>> _structure(msg) multipart/report text/plain message/delivery-status text/plain text/plain message/rfc822 text/plain\nHere the\nmessage\nparts are notmultiparts\n, but they do contain subparts.is_multipart()\nreturnsTrue\nandwalk\ndescends into the subparts.\n- get_body(preferencelist=('related', 'html', 'plain'))\u00b6\nReturn the MIME part that is the best candidate to be the \u201cbody\u201d of the message.\npreferencelist must be a sequence of strings from the set\nrelated\n,html\n, andplain\n, and indicates the order of preference for the content type of the part returned.Start looking for candidate matches with the object on which the\nget_body\nmethod is called.If\nrelated\nis not included in preferencelist, consider the root part (or subpart of the root part) of any related encountered as a candidate if the (sub-)part matches a preference.When encountering a\nmultipart/related\n, check thestart\nparameter and if a part with a matching Content-ID is found, consider only it when looking for candidate matches. Otherwise consider only the first (default root) part of themultipart/related\n.If a part has a Content-Disposition header, only consider the part a candidate match if the value of the header is\ninline\n.If none of the candidates matches any of the preferences in preferencelist, return\nNone\n.Notes: (1) For most applications the only preferencelist combinations that really make sense are\n('plain',)\n,('html', 'plain')\n, and the default('related', 'html', 'plain')\n. 
(2) Because matching starts with the object on whichget_body\nis called, callingget_body\non amultipart/related\nwill return the object itself unless preferencelist has a non-default value. (3) Messages (or message parts) that do not specify a Content-Type or whose Content-Type header is invalid will be treated as if they are of typetext/plain\n, which may occasionally causeget_body\nto return unexpected results.\n- iter_attachments()\u00b6\nReturn an iterator over all of the immediate sub-parts of the message that are not candidate \u201cbody\u201d parts. That is, skip the first occurrence of each of\ntext/plain\n,text/html\n,multipart/related\n, ormultipart/alternative\n(unless they are explicitly marked as attachments via Content-Disposition: attachment), and return all remaining parts. When applied directly to amultipart/related\n, return an iterator over the all the related parts except the root part (ie: the part pointed to by thestart\nparameter, or the first part if there is nostart\nparameter or thestart\nparameter doesn\u2019t match the Content-ID of any of the parts). When applied directly to amultipart/alternative\nor a non-multipart\n, return an empty iterator.\n- iter_parts()\u00b6\nReturn an iterator over all of the immediate sub-parts of the message, which will be empty for a non-\nmultipart\n. (See alsowalk()\n.)\n- get_content(*args, content_manager=None, **kw)\u00b6\nCall the\nget_content()\nmethod of the content_manager, passing self as the message object, and passing along any other arguments or keywords as additional arguments. If content_manager is not specified, use thecontent_manager\nspecified by the currentpolicy\n.\n- set_content(*args, content_manager=None, **kw)\u00b6\nCall the\nset_content()\nmethod of the content_manager, passing self as the message object, and passing along any other arguments or keywords as additional arguments. 
If content_manager is not specified, use thecontent_manager\nspecified by the currentpolicy\n.\nConvert a non-\nmultipart\nmessage into amultipart/related\nmessage, moving any existing Content- headers and payload into a (new) first part of themultipart\n. If boundary is specified, use it as the boundary string in the multipart, otherwise leave the boundary to be automatically created when it is needed (for example, when the message is serialized).\n- make_alternative(boundary=None)\u00b6\nConvert a non-\nmultipart\nor amultipart/related\ninto amultipart/alternative\n, moving any existing Content- headers and payload into a (new) first part of themultipart\n. If boundary is specified, use it as the boundary string in the multipart, otherwise leave the boundary to be automatically created when it is needed (for example, when the message is serialized).\n- make_mixed(boundary=None)\u00b6\nConvert a non-\nmultipart\n, amultipart/related\n, or amultipart-alternative\ninto amultipart/mixed\n, moving any existing Content- headers and payload into a (new) first part of themultipart\n. If boundary is specified, use it as the boundary string in the multipart, otherwise leave the boundary to be automatically created when it is needed (for example, when the message is serialized).\nIf the message is a\nmultipart/related\n, create a new message object, pass all of the arguments to itsset_content()\nmethod, andattach()\nit to themultipart\n. If the message is a non-multipart\n, callmake_related()\nand then proceed as above. If the message is any other type ofmultipart\n, raise aTypeError\n. If content_manager is not specified, use thecontent_manager\nspecified by the currentpolicy\n. 
If the added part has no Content-Disposition header, add one with the valueinline\n.\n- add_alternative(*args, content_manager=None, **kw)\u00b6\nIf the message is a\nmultipart/alternative\n, create a new message object, pass all of the arguments to itsset_content()\nmethod, andattach()\nit to themultipart\n. If the message is a non-multipart\normultipart/related\n, callmake_alternative()\nand then proceed as above. If the message is any other type ofmultipart\n, raise aTypeError\n. If content_manager is not specified, use thecontent_manager\nspecified by the currentpolicy\n.\n- add_attachment(*args, content_manager=None, **kw)\u00b6\nIf the message is a\nmultipart/mixed\n, create a new message object, pass all of the arguments to itsset_content()\nmethod, andattach()\nit to themultipart\n. If the message is a non-multipart\n,multipart/related\n, ormultipart/alternative\n, callmake_mixed()\nand then proceed as above. If content_manager is not specified, use thecontent_manager\nspecified by the currentpolicy\n. If the added part has no Content-Disposition header, add one with the valueattachment\n. This method can be used both for explicit attachments (Content-Disposition: attachment) andinline\nattachments (Content-Disposition: inline), by passing appropriate options to thecontent_manager\n.\n- clear()\u00b6\nRemove the payload and all of the headers.\n- clear_content()\u00b6\nRemove the payload and all of the !Content- headers, leaving all other headers intact and in their original order.\nEmailMessage\nobjects have the following instance attributes:- preamble\u00b6\nThe format of a MIME document allows for some text between the blank line following the headers, and the first multipart boundary string. Normally, this text is never visible in a MIME-aware mail reader because it falls outside the standard MIME armor. 
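The difference between `clear()` and `clear_content()` — only the latter preserves non-Content- headers — can be shown with a minimal sketch (not from the original page; the header values are invented):

```python
# Sketch: clear_content() removes the payload and Content-* headers only.
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "hello"
msg.set_content("some text")          # adds Content-Type, MIME-Version, payload

msg.clear_content()

print(msg["Subject"])                 # hello        (non-Content- header kept)
print(msg["MIME-Version"])            # 1.0          (also kept)
print(msg["Content-Type"])            # None         (Content-* header removed)
```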
However, when viewing the raw text of the message, or when viewing the message in a non-MIME aware reader, this text can become visible.\nThe preamble attribute contains this leading extra-armor text for MIME documents. When the\nParser\ndiscovers some text after the headers but before the first boundary string, it assigns this text to the message\u2019s preamble attribute. When theGenerator\nis writing out the plain text representation of a MIME message, and it finds the message has a preamble attribute, it will write this text in the area between the headers and the first boundary. Seeemail.parser\nandemail.generator\nfor details.Note that if the message object has no preamble, the preamble attribute will be\nNone\n.\n- epilogue\u00b6\nThe epilogue attribute acts the same way as the preamble attribute, except that it contains text that appears between the last boundary and the end of the message. As with the\npreamble\n, if there is no epilog text this attribute will beNone\n.\n- defects\u00b6\nThe defects attribute contains a list of all the problems found when parsing this message. See\nemail.errors\nfor a detailed description of the possible parsing defects.\n- class email.message.MIMEPart(policy=default)\u00b6\nThis class represents a subpart of a MIME message. 
It is identical to\nEmailMessage\n, except that no MIME-Version headers are added whenset_content()\nis called, since sub-parts do not need their own MIME-Version headers.\nFootnotes", "code_snippets": [" ", " ", " ", "\n ", " ", "\n", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", "\n ", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 6487} +{"url": "https://docs.python.org/3/c-api/bytearray.html", "title": "Byte Array Objects", "content": "Byte Array Objects\u00b6\n-\nPyTypeObject PyByteArray_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python bytearray type; it is the same object asbytearray\nin the Python layer.\nType check macros\u00b6\nDirect API functions\u00b6\n-\nPyObject *PyByteArray_FromObject(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new bytearray object from any object, o, that implements the buffer protocol.\nOn failure, return\nNULL\nwith an exception set.\n-\nPyObject *PyByteArray_FromStringAndSize(const char *string, Py_ssize_t len)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a new bytearray object from string and its length, len.\nOn failure, return\nNULL\nwith an exception set.\n-\nPyObject *PyByteArray_Concat(PyObject *a, PyObject *b)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nConcat bytearrays a and b and return a new bytearray with the result.\nOn failure, return\nNULL\nwith an exception set.\n-\nPy_ssize_t PyByteArray_Size(PyObject *bytearray)\u00b6\n- Part of the Stable ABI.\nReturn the size of bytearray after checking for a\nNULL\npointer.\n-\nchar *PyByteArray_AsString(PyObject *bytearray)\u00b6\n- Part of the Stable ABI.\nReturn the contents of bytearray as a char array after checking for a\nNULL\npointer. 
The returned array always has an extra null byte appended.\n-\nint PyByteArray_Resize(PyObject *bytearray, Py_ssize_t len)\u00b6\n- Part of the Stable ABI.\nResize the internal buffer of bytearray to len. Failure is a\n-1\nreturn with an exception set.Changed in version 3.14: A negative len will now result in an exception being set and -1 returned.\nMacros\u00b6\nThese macros trade safety for speed and they don\u2019t check pointers.\n-\nchar *PyByteArray_AS_STRING(PyObject *bytearray)\u00b6\nSimilar to\nPyByteArray_AsString()\n, but without error checking.\n-\nPy_ssize_t PyByteArray_GET_SIZE(PyObject *bytearray)\u00b6\nSimilar to\nPyByteArray_Size()\n, but without error checking.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 476} +{"url": "https://docs.python.org/3/library/email.mime.html", "title": ": Creating email and MIME objects from scratch", "content": "email.mime\n: Creating email and MIME objects from scratch\u00b6\nSource code: Lib/email/mime/\nThis module is part of the legacy (Compat32\n) email API. Its functionality\nis partially replaced by the contentmanager\nin the new API, but\nin certain applications these classes may still be useful, even in non-legacy\ncode.\nOrdinarily, you get a message object structure by passing a file or some text to\na parser, which parses the text and returns the root message object. However\nyou can also build a complete message structure from scratch, or even individual\nMessage\nobjects by hand. In fact, you can also take an\nexisting structure and add new Message\nobjects, move them\naround, etc. This makes a very convenient interface for slicing-and-dicing MIME\nmessages.\nYou can create a new object structure by creating Message\ninstances, adding attachments and all the appropriate headers manually. 
For MIME\nmessages though, the email\npackage provides some convenient subclasses to\nmake things easier.\nHere are the classes:\n- class email.mime.base.MIMEBase(_maintype, _subtype, *, policy=compat32, **_params)\u00b6\nModule:\nemail.mime.base\nThis is the base class for all the MIME-specific subclasses of\nMessage\n. Ordinarily you won\u2019t create instances specifically ofMIMEBase\n, although you could.MIMEBase\nis provided primarily as a convenient base class for more specific MIME-aware subclasses._maintype is the Content-Type major type (e.g. text or image), and _subtype is the Content-Type minor type (e.g. plain or gif). _params is a parameter key/value dictionary and is passed directly to\nMessage.add_header\n.If policy is specified, (defaults to the\ncompat32\npolicy) it will be passed toMessage\n.The\nMIMEBase\nclass always adds a Content-Type header (based on _maintype, _subtype, and _params), and a MIME-Version header (always set to1.0\n).Changed in version 3.6: Added policy keyword-only parameter.\n- class email.mime.nonmultipart.MIMENonMultipart\u00b6\nModule:\nemail.mime.nonmultipart\nA subclass of\nMIMEBase\n, this is an intermediate base class for MIME messages that are not multipart. The primary purpose of this class is to prevent the use of theattach()\nmethod, which only makes sense for multipart messages. Ifattach()\nis called, aMultipartConversionError\nexception is raised.\n- class email.mime.multipart.MIMEMultipart(_subtype='mixed', boundary=None, _subparts=None, *, policy=compat32, **_params)\u00b6\nModule:\nemail.mime.multipart\nA subclass of\nMIMEBase\n, this is an intermediate base class for MIME messages that are multipart. Optional _subtype defaults to mixed, but can be used to specify the subtype of the message. A Content-Type header of multipart/_subtype will be added to the message object. A MIME-Version header will also be added.Optional boundary is the multipart boundary string. 
When\nNone\n(the default), the boundary is calculated when needed (for example, when the message is serialized)._subparts is a sequence of initial subparts for the payload. It must be possible to convert this sequence to a list. You can always attach new subparts to the message by using the\nMessage.attach\nmethod.Optional policy argument defaults to\ncompat32\n.Additional parameters for the Content-Type header are taken from the keyword arguments, or passed into the _params argument, which is a keyword dictionary.\nChanged in version 3.6: Added policy keyword-only parameter.\n- class email.mime.application.MIMEApplication(_data, _subtype='octet-stream', _encoder=email.encoders.encode_base64, *, policy=compat32, **_params)\u00b6\nModule:\nemail.mime.application\nA subclass of\nMIMENonMultipart\n, theMIMEApplication\nclass is used to represent MIME message objects of major type application. _data contains the bytes for the raw application data. Optional _subtype specifies the MIME subtype and defaults to octet-stream.Optional _encoder is a callable (i.e. function) which will perform the actual encoding of the data for transport. This callable takes one argument, which is the\nMIMEApplication\ninstance. It should useget_payload()\nandset_payload()\nto change the payload to encoded form. It should also add any Content-Transfer-Encoding or other headers to the message object as necessary. The default encoding is base64. See theemail.encoders\nmodule for a list of the built-in encoders.Optional policy argument defaults to\ncompat32\n._params are passed straight through to the base class constructor.\nChanged in version 3.6: Added policy keyword-only parameter.\n- class email.mime.audio.MIMEAudio(_audiodata, _subtype=None, _encoder=email.encoders.encode_base64, *, policy=compat32, **_params)\u00b6\nModule:\nemail.mime.audio\nA subclass of\nMIMENonMultipart\n, theMIMEAudio\nclass is used to create MIME message objects of major type audio. 
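A short sketch of the `MIMEMultipart` constructor described above (the part contents are invented for illustration): the `_subparts` sequence seeds the payload, and with `boundary=None` the boundary string only materializes when the message is serialized:

```python
# Sketch: build a multipart/alternative from initial subparts.
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

parts = [MIMEText("plain body"), MIMEText("<p>html body</p>", "html")]
outer = MIMEMultipart("alternative", _subparts=parts)

print(outer.get_content_type())       # multipart/alternative
print(outer["MIME-Version"])          # 1.0 (always added by MIMEBase)
print(len(outer.get_payload()))       # 2

# The boundary is calculated lazily, at serialization time.
print("boundary=" in outer.as_string())   # True
```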
_audiodata contains the bytes for the raw audio data. If this data can be decoded as au, wav, aiff, or aifc, then the subtype will be automatically included in the Content-Type header. Otherwise you can explicitly specify the audio subtype via the _subtype argument. If the minor type could not be guessed and _subtype was not given, thenTypeError\nis raised.Optional _encoder is a callable (i.e. function) which will perform the actual encoding of the audio data for transport. This callable takes one argument, which is the\nMIMEAudio\ninstance. It should useget_payload()\nandset_payload()\nto change the payload to encoded form. It should also add any Content-Transfer-Encoding or other headers to the message object as necessary. The default encoding is base64. See theemail.encoders\nmodule for a list of the built-in encoders.Optional policy argument defaults to\ncompat32\n._params are passed straight through to the base class constructor.\nChanged in version 3.6: Added policy keyword-only parameter.\n- class email.mime.image.MIMEImage(_imagedata, _subtype=None, _encoder=email.encoders.encode_base64, *, policy=compat32, **_params)\u00b6\nModule:\nemail.mime.image\nA subclass of\nMIMENonMultipart\n, theMIMEImage\nclass is used to create MIME message objects of major type image. _imagedata contains the bytes for the raw image data. If this data type can be detected (jpeg, png, gif, tiff, rgb, pbm, pgm, ppm, rast, xbm, bmp, webp, and exr attempted), then the subtype will be automatically included in the Content-Type header. Otherwise you can explicitly specify the image subtype via the _subtype argument. If the minor type could not be guessed and _subtype was not given, thenTypeError\nis raised.Optional _encoder is a callable (i.e. function) which will perform the actual encoding of the image data for transport. This callable takes one argument, which is the\nMIMEImage\ninstance. It should useget_payload()\nandset_payload()\nto change the payload to encoded form. 
It should also add any Content-Transfer-Encoding or other headers to the message object as necessary. The default encoding is base64. See theemail.encoders\nmodule for a list of the built-in encoders.Optional policy argument defaults to\ncompat32\n._params are passed straight through to the\nMIMEBase\nconstructor.Changed in version 3.6: Added policy keyword-only parameter.\n- class email.mime.message.MIMEMessage(_msg, _subtype='rfc822', *, policy=compat32)\u00b6\nModule:\nemail.mime.message\nA subclass of\nMIMENonMultipart\n, theMIMEMessage\nclass is used to create MIME objects of main type message. _msg is used as the payload, and must be an instance of classMessage\n(or a subclass thereof), otherwise aTypeError\nis raised.Optional _subtype sets the subtype of the message; it defaults to rfc822.\nOptional policy argument defaults to\ncompat32\n.Changed in version 3.6: Added policy keyword-only parameter.\n- class email.mime.text.MIMEText(_text, _subtype='plain', _charset=None, *, policy=compat32)\u00b6\nModule:\nemail.mime.text\nA subclass of\nMIMENonMultipart\n, theMIMEText\nclass is used to create MIME objects of major type text. _text is the string for the payload. _subtype is the minor type and defaults to plain. _charset is the character set of the text and is passed as an argument to theMIMENonMultipart\nconstructor; it defaults tous-ascii\nif the string contains onlyascii\ncode points, andutf-8\notherwise. The _charset parameter accepts either a string or aCharset\ninstance.Unless the _charset argument is explicitly set to\nNone\n, the MIMEText object created will have both a Content-Type header with acharset\nparameter, and a Content-Transfer-Encoding header. This means that a subsequentset_payload\ncall will not result in an encoded payload, even if a charset is passed in theset_payload\ncommand. 
You can \u201creset\u201d this behavior by deleting theContent-Transfer-Encoding\nheader, after which aset_payload\ncall will automatically encode the new payload (and add a new Content-Transfer-Encoding header).Optional policy argument defaults to\ncompat32\n.Changed in version 3.5: _charset also accepts\nCharset\ninstances.Changed in version 3.6: Added policy keyword-only parameter.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2208} +{"url": "https://docs.python.org/3/library/syslog.html", "title": " \u2014 Unix syslog library routines", "content": "syslog\n\u2014 Unix syslog library routines\u00b6\nThis module provides an interface to the Unix syslog\nlibrary routines.\nRefer to the Unix manual pages for a detailed description of the syslog\nfacility.\nAvailability: Unix, not WASI, not iOS.\nThis module wraps the system syslog\nfamily of routines. A pure Python\nlibrary that can speak to a syslog server is available in the\nlogging.handlers\nmodule as SysLogHandler\n.\nThe module defines the following functions:\n- syslog.syslog(message)\u00b6\n- syslog.syslog(priority, message)\nSend the string message to the system logger. A trailing newline is added if necessary. Each message is tagged with a priority composed of a facility and a level. The optional priority argument, which defaults to\nLOG_INFO\n, determines the message priority. If the facility is not encoded in priority using logical-or (LOG_INFO | LOG_USER\n), the value given in theopenlog()\ncall is used.If\nopenlog()\nhas not been called prior to the call tosyslog()\n,openlog()\nwill be called with no arguments.Raises an auditing event\nsyslog.syslog\nwith argumentspriority\n,message\n.Changed in version 3.2: In previous versions,\nopenlog()\nwould not be called automatically if it wasn\u2019t called prior to the call tosyslog()\n, deferring to the syslog implementation to callopenlog()\n.Changed in version 3.12: This function is restricted in subinterpreters. 
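The charset-selection behavior of `MIMEText` described above can be sketched in a few lines (the sample strings are invented): an all-ASCII payload defaults to `us-ascii`, while any non-ASCII code point switches the part to `utf-8` with a base64 Content-Transfer-Encoding:

```python
# Sketch: MIMEText picks us-ascii or utf-8 based on the payload.
from email.mime.text import MIMEText

ascii_part = MIMEText("plain ascii")      # only ASCII code points
utf8_part = MIMEText("caf\u00e9")         # contains a non-ASCII character

print(ascii_part["Content-Type"])                 # text/plain; charset="us-ascii"
print(utf8_part["Content-Type"])                  # text/plain; charset="utf-8"
print(utf8_part["Content-Transfer-Encoding"])     # base64
```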
(Only code that runs in multiple interpreters is affected and the restriction is not relevant for most users.)\nopenlog()\nmust be called in the main interpreter beforesyslog()\nmay be used in a subinterpreter. Otherwise it will raiseRuntimeError\n.\n- syslog.openlog([ident[, logoption[, facility]]])\u00b6\nLogging options of subsequent\nsyslog()\ncalls can be set by callingopenlog()\n.syslog()\nwill callopenlog()\nwith no arguments if the log is not currently open.The optional ident keyword argument is a string which is prepended to every message, and defaults to\nsys.argv[0]\nwith leading path components stripped. The optional logoption keyword argument (default is 0) is a bit field \u2013 see below for possible values to combine. The optional facility keyword argument (default isLOG_USER\n) sets the default facility for messages which do not have a facility explicitly encoded.Raises an auditing event\nsyslog.openlog\nwith argumentsident\n,logoption\n,facility\n.Changed in version 3.2: In previous versions, keyword arguments were not allowed, and ident was required.\nChanged in version 3.12: This function is restricted in subinterpreters. (Only code that runs in multiple interpreters is affected and the restriction is not relevant for most users.) This may only be called in the main interpreter. It will raise\nRuntimeError\nif called in a subinterpreter.\n- syslog.closelog()\u00b6\nReset the syslog module values and call the system library\ncloselog()\n.This causes the module to behave as it does when initially imported. For example,\nopenlog()\nwill be called on the firstsyslog()\ncall (ifopenlog()\nhasn\u2019t already been called), and ident and otheropenlog()\nparameters are reset to defaults.Raises an auditing event\nsyslog.closelog\nwith no arguments.Changed in version 3.12: This function is restricted in subinterpreters. (Only code that runs in multiple interpreters is affected and the restriction is not relevant for most users.) 
This may only be called in the main interpreter. It will raise\nRuntimeError\nif called in a subinterpreter.\n- syslog.setlogmask(maskpri)\u00b6\nSet the priority mask to maskpri and return the previous mask value. Calls to\nsyslog()\nwith a priority level not set in maskpri are ignored. The default is to log all priorities. The functionLOG_MASK(pri)\ncalculates the mask for the individual priority pri. The functionLOG_UPTO(pri)\ncalculates the mask for all priorities up to and including pri.Raises an auditing event\nsyslog.setlogmask\nwith argumentmaskpri\n.\nThe module defines the following constants:\n- syslog.LOG_EMERG\u00b6\n- syslog.LOG_ALERT\u00b6\n- syslog.LOG_CRIT\u00b6\n- syslog.LOG_ERR\u00b6\n- syslog.LOG_WARNING\u00b6\n- syslog.LOG_NOTICE\u00b6\n- syslog.LOG_INFO\u00b6\n- syslog.LOG_DEBUG\u00b6\nPriority levels (high to low).\n- syslog.LOG_AUTH\u00b6\n- syslog.LOG_AUTHPRIV\u00b6\n- syslog.LOG_CRON\u00b6\n- syslog.LOG_DAEMON\u00b6\n- syslog.LOG_FTP\u00b6\n- syslog.LOG_INSTALL\u00b6\n- syslog.LOG_KERN\u00b6\n- syslog.LOG_LAUNCHD\u00b6\n- syslog.LOG_LPR\u00b6\n- syslog.LOG_MAIL\u00b6\n- syslog.LOG_NETINFO\u00b6\n- syslog.LOG_NEWS\u00b6\n- syslog.LOG_RAS\u00b6\n- syslog.LOG_REMOTEAUTH\u00b6\n- syslog.LOG_SYSLOG\u00b6\n- syslog.LOG_USER\u00b6\n- syslog.LOG_UUCP\u00b6\n- syslog.LOG_LOCAL0\u00b6\n- syslog.LOG_LOCAL1\u00b6\n- syslog.LOG_LOCAL2\u00b6\n- syslog.LOG_LOCAL3\u00b6\n- syslog.LOG_LOCAL4\u00b6\n- syslog.LOG_LOCAL5\u00b6\n- syslog.LOG_LOCAL6\u00b6\n- syslog.LOG_LOCAL7\u00b6\nFacilities, depending on availability in\n\nforLOG_AUTHPRIV\n,LOG_FTP\n,LOG_NETINFO\n,LOG_REMOTEAUTH\n,LOG_INSTALL\nandLOG_RAS\n.Changed in version 3.13: Added\nLOG_FTP\n,LOG_NETINFO\n,LOG_REMOTEAUTH\n,LOG_INSTALL\n,LOG_RAS\n, andLOG_LAUNCHD\n.\n- syslog.LOG_PID\u00b6\n- syslog.LOG_CONS\u00b6\n- syslog.LOG_NDELAY\u00b6\n- syslog.LOG_ODELAY\u00b6\n- syslog.LOG_NOWAIT\u00b6\n- syslog.LOG_PERROR\u00b6\nLog options, depending on availability 
in\n\nforLOG_ODELAY\n,LOG_NOWAIT\nandLOG_PERROR\n.\nExamples\u00b6\nSimple example\u00b6\nA simple set of examples:\nimport syslog\nsyslog.syslog('Processing started')\nif error:\nsyslog.syslog(syslog.LOG_ERR, 'Processing started')\nAn example of setting some log options, these would include the process ID in logged messages, and write the messages to the destination facility used for mail logging:\nsyslog.openlog(logoption=syslog.LOG_PID, facility=syslog.LOG_MAIL)\nsyslog.syslog('E-mail processing initiated...')", "code_snippets": ["\n\n", "\n", " ", "\n ", " ", "\n", " ", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 1366} +{"url": "https://docs.python.org/3/library/email.charset.html", "title": ": Representing character sets", "content": "email.charset\n: Representing character sets\u00b6\nSource code: Lib/email/charset.py\nThis module is part of the legacy (Compat32\n) email API. In the new\nAPI only the aliases table is used.\nThe remaining text in this section is the original documentation of the module.\nThis module provides a class Charset\nfor representing character sets\nand character set conversions in email messages, as well as a character set\nregistry and several convenience methods for manipulating this registry.\nInstances of Charset\nare used in several other modules within the\nemail\npackage.\nImport this class from the email.charset\nmodule.\n- class email.charset.Charset(input_charset=DEFAULT_CHARSET)\u00b6\nMap character sets to their email properties.\nThis class provides information about the requirements imposed on email for a specific character set. It also provides convenience routines for converting between character sets, given the availability of the applicable codecs. 
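The `setlogmask()` mechanics described above can be sketched as follows (Unix-only, like the rest of the module; the log messages are invented). `LOG_MASK(pri)` is the single-bit mask `1 << pri`, and `LOG_UPTO(pri)` sets every bit up to and including that priority:

```python
# Sketch: suppress messages below LOG_WARNING, then restore the old mask.
import syslog

prev = syslog.setlogmask(syslog.LOG_UPTO(syslog.LOG_WARNING))
syslog.syslog(syslog.LOG_INFO, "suppressed: LOG_INFO is below the mask")
syslog.syslog(syslog.LOG_ERR, "delivered: LOG_ERR is within the mask")
syslog.setlogmask(prev)     # restore the previous mask

# The mask helpers are simple bit arithmetic:
print(syslog.LOG_MASK(syslog.LOG_ERR) == 1 << syslog.LOG_ERR)               # True
print(syslog.LOG_UPTO(syslog.LOG_WARNING)
      == (1 << (syslog.LOG_WARNING + 1)) - 1)                               # True
```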
Given a character set, it will do its best to provide information on how to use that character set in an email message in an RFC-compliant way.\nCertain character sets must be encoded with quoted-printable or base64 when used in email headers or bodies. Certain character sets must be converted outright, and are not allowed in email.\nOptional input_charset is as described below; it is always coerced to lower case. After being alias normalized it is also used as a lookup into the registry of character sets to find out the header encoding, body encoding, and output conversion codec to be used for the character set. For example, if input_charset is\niso-8859-1\n, then headers and bodies will be encoded using quoted-printable and no output conversion codec is necessary. If input_charset iseuc-jp\n, then headers will be encoded with base64, bodies will not be encoded, but output text will be converted from theeuc-jp\ncharacter set to theiso-2022-jp\ncharacter set.Charset\ninstances have the following data attributes:- input_charset\u00b6\nThe initial character set specified. Common aliases are converted to their official email names (e.g.\nlatin_1\nis converted toiso-8859-1\n). Defaults to 7-bitus-ascii\n.\n- header_encoding\u00b6\nIf the character set must be encoded before it can be used in an email header, this attribute will be set to\ncharset.QP\n(for quoted-printable),charset.BASE64\n(for base64 encoding), orcharset.SHORTEST\nfor the shortest of QP or BASE64 encoding. Otherwise, it will beNone\n.\n- body_encoding\u00b6\nSame as header_encoding, but describes the encoding for the mail message\u2019s body, which indeed may be different than the header encoding.\ncharset.SHORTEST\nis not allowed for body_encoding.\n- output_charset\u00b6\nSome character sets must be converted before they can be used in email headers or bodies. If the input_charset is one of them, this attribute will contain the name of the character set output will be converted to. 
Otherwise, it will be\nNone\n.\n- input_codec\u00b6\nThe name of the Python codec used to convert the input_charset to Unicode. If no conversion codec is necessary, this attribute will be\nNone\n.\n- output_codec\u00b6\nThe name of the Python codec used to convert Unicode to the output_charset. If no conversion codec is necessary, this attribute will have the same value as the input_codec.\nCharset\ninstances also have the following methods:- get_body_encoding()\u00b6\nReturn the content transfer encoding used for body encoding.\nThis is either the string\nquoted-printable\norbase64\ndepending on the encoding used, or it is a function, in which case you should call the function with a single argument, the Message object being encoded. The function should then set the Content-Transfer-Encoding header itself to whatever is appropriate.Returns the string\nquoted-printable\nif body_encoding isQP\n, returns the stringbase64\nif body_encoding isBASE64\n, and returns the string7bit\notherwise.\n- get_output_charset()\u00b6\nReturn the output character set.\nThis is the output_charset attribute if that is not\nNone\n, otherwise it is input_charset.\n- header_encode(string)\u00b6\nHeader-encode the string string.\nThe type of encoding (base64 or quoted-printable) will be based on the header_encoding attribute.\n- header_encode_lines(string, maxlengths)\u00b6\nHeader-encode a string by converting it first to bytes.\nThis is similar to\nheader_encode()\nexcept that the string is fit into maximum line lengths as given by the argument maxlengths, which must be an iterator: each element returned from this iterator will provide the next maximum line length.\n- body_encode(string)\u00b6\nBody-encode the string string.\nThe type of encoding (base64 or quoted-printable) will be based on the body_encoding attribute.\nThe\nCharset\nclass also provides a number of methods to support standard operations and built-in functions.- __str__()\u00b6\nReturns input_charset as a string coerced 
to lower case.\n__repr__()\nis an alias for__str__()\n.\nThe email.charset\nmodule also provides the following functions for adding\nnew entries to the global character set, alias, and codec registries:\n- email.charset.add_charset(charset, header_enc=None, body_enc=None, output_charset=None)\u00b6\nAdd character properties to the global registry.\ncharset is the input character set, and must be the canonical name of a character set.\nOptional header_enc and body_enc is either\ncharset.QP\nfor quoted-printable,charset.BASE64\nfor base64 encoding,charset.SHORTEST\nfor the shortest of quoted-printable or base64 encoding, orNone\nfor no encoding.SHORTEST\nis only valid for header_enc. The default isNone\nfor no encoding.Optional output_charset is the character set that the output should be in. Conversions will proceed from input charset, to Unicode, to the output charset when the method\nCharset.convert()\nis called. The default is to output in the same character set as the input.Both input_charset and output_charset must have Unicode codec entries in the module\u2019s character set-to-codec mapping; use\nadd_codec()\nto add codecs the module does not know about. See thecodecs\nmodule\u2019s documentation for more information.The global character set registry is kept in the module global dictionary\nCHARSETS\n.\n- email.charset.add_alias(alias, canonical)\u00b6\nAdd a character set alias. alias is the alias name, e.g.\nlatin-1\n. canonical is the character set\u2019s canonical name, e.g.iso-8859-1\n.The global charset alias registry is kept in the module global dictionary\nALIASES\n.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1586} +{"url": "https://docs.python.org/3/extending/embedding.html", "title": "Embedding Python in Another Application", "content": "1. 
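A brief sketch of the `Charset` attributes and alias normalization described above (the chosen charsets are illustrative): `latin_1` is normalized to its canonical email name `iso-8859-1`, which uses quoted-printable for headers and bodies, while `utf-8` uses `SHORTEST` for headers and base64 for bodies:

```python
# Sketch: inspect the registry-driven properties of two Charset instances.
from email.charset import Charset, QP, SHORTEST

latin = Charset("latin_1")                  # alias -> canonical name
print(str(latin))                           # iso-8859-1
print(latin.header_encoding == QP)          # True
print(latin.get_body_encoding())            # quoted-printable
print(latin.get_output_charset())           # iso-8859-1 (no output conversion)

utf8 = Charset("utf-8")
print(utf8.header_encoding == SHORTEST)     # True
print(utf8.get_body_encoding())             # base64
```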
Embedding Python in Another Application\u00b6\nThe previous chapters discussed how to extend Python, that is, how to extend the functionality of Python by attaching a library of C functions to it. It is also possible to do it the other way around: enrich your C/C++ application by embedding Python in it. Embedding provides your application with the ability to implement some of the functionality of your application in Python rather than C or C++. This can be used for many purposes; one example would be to allow users to tailor the application to their needs by writing some scripts in Python. You can also use it yourself if some of the functionality can be written in Python more easily.\nEmbedding Python is similar to extending it, but not quite. The difference is that when you extend Python, the main program of the application is still the Python interpreter, while if you embed Python, the main program may have nothing to do with Python \u2014 instead, some parts of the application occasionally call the Python interpreter to run some Python code.\nSo if you are embedding Python, you are providing your own main program. One of\nthe things this main program has to do is initialize the Python interpreter. At\nthe very least, you have to call the function Py_Initialize()\n. There are\noptional calls to pass command line arguments to Python. Then later you can\ncall the interpreter from any part of the application.\nThere are several different ways to call the interpreter: you can pass a string\ncontaining Python statements to PyRun_SimpleString()\n, or you can pass a\nstdio file pointer and a file name (for identification in error messages only)\nto PyRun_SimpleFile()\n. You can also call the lower-level operations\ndescribed in the previous chapters to construct and use Python objects.\nSee also\n- Python/C API Reference Manual\nThe details of Python\u2019s C interface are given in this manual. A great deal of necessary information can be found here.\n1.1. 
Very High Level Embedding\u00b6\nThe simplest form of embedding Python is the use of the very high level interface. This interface is intended to execute a Python script without needing to interact with the application directly. This can for example be used to perform some operation on a file.\n#define PY_SSIZE_T_CLEAN\n#include \nint\nmain(int argc, char *argv[])\n{\nPyStatus status;\nPyConfig config;\nPyConfig_InitPythonConfig(&config);\n/* optional but recommended */\nstatus = PyConfig_SetBytesString(&config, &config.program_name, argv[0]);\nif (PyStatus_Exception(status)) {\ngoto exception;\n}\nstatus = Py_InitializeFromConfig(&config);\nif (PyStatus_Exception(status)) {\ngoto exception;\n}\nPyConfig_Clear(&config);\nPyRun_SimpleString(\"from time import time,ctime\\n\"\n\"print('Today is', ctime(time()))\\n\");\nif (Py_FinalizeEx() < 0) {\nexit(120);\n}\nreturn 0;\nexception:\nPyConfig_Clear(&config);\nPy_ExitStatusException(status);\n}\nNote\n#define PY_SSIZE_T_CLEAN\nwas used to indicate that Py_ssize_t\nshould be\nused in some APIs instead of int\n.\nIt is not necessary since Python 3.13, but we keep it here for backward compatibility.\nSee Strings and buffers for a description of this macro.\nSetting PyConfig.program_name\nshould be called before\nPy_InitializeFromConfig()\nto inform the interpreter about paths to Python run-time\nlibraries. Next, the Python interpreter is initialized with\nPy_Initialize()\n, followed by the execution of a hard-coded Python script\nthat prints the date and time. Afterwards, the Py_FinalizeEx()\ncall shuts\nthe interpreter down, followed by the end of the program. In a real program,\nyou may want to get the Python script from another source, perhaps a text-editor\nroutine, a file, or a database. Getting the Python code from a file can better\nbe done by using the PyRun_SimpleFile()\nfunction, which saves you the\ntrouble of allocating memory space and loading the file contents.\n1.2. 
Beyond Very High Level Embedding: An overview\u00b6\nThe high level interface gives you the ability to execute arbitrary pieces of Python code from your application, but exchanging data values is quite cumbersome to say the least. If you want that, you should use lower level calls. At the cost of having to write more C code, you can achieve almost anything.\nIt should be noted that extending Python and embedding Python is quite the same activity, despite the different intent. Most topics discussed in the previous chapters are still valid. To show this, consider what the extension code from Python to C really does:\nConvert data values from Python to C,\nPerform a function call to a C routine using the converted values, and\nConvert the data values from the call from C to Python.\nWhen embedding Python, the interface code does:\nConvert data values from C to Python,\nPerform a function call to a Python interface routine using the converted values, and\nConvert the data values from the call from Python to C.\nAs you can see, the data conversion steps are simply swapped to accommodate the different direction of the cross-language transfer. The only difference is the routine that you call between both data conversions. When extending, you call a C routine, when embedding, you call a Python routine.\nThis chapter will not discuss how to convert data from Python to C and vice versa. Also, proper use of references and dealing with errors is assumed to be understood. Since these aspects do not differ from extending the interpreter, you can refer to earlier chapters for the required information.\n1.3. Pure Embedding\u00b6\nThe first program aims to execute a function in a Python script. 
Like in the section about the very high level interface, the Python interpreter does not directly interact with the application (but that will change in the next section).\nThe code to run a function defined in a Python script is:\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\nint\nmain(int argc, char *argv[])\n{\nPyObject *pName, *pModule, *pFunc;\nPyObject *pArgs, *pValue;\nint i;\nif (argc < 3) {\nfprintf(stderr,\"Usage: call pythonfile funcname [args]\\n\");\nreturn 1;\n}\nPy_Initialize();\npName = PyUnicode_DecodeFSDefault(argv[1]);\n/* Error checking of pName left out */\npModule = PyImport_Import(pName);\nPy_DECREF(pName);\nif (pModule != NULL) {\npFunc = PyObject_GetAttrString(pModule, argv[2]);\n/* pFunc is a new reference */\nif (pFunc && PyCallable_Check(pFunc)) {\npArgs = PyTuple_New(argc - 3);\nfor (i = 0; i < argc - 3; ++i) {\npValue = PyLong_FromLong(atoi(argv[i + 3]));\nif (!pValue) {\nPy_DECREF(pArgs);\nPy_DECREF(pModule);\nfprintf(stderr, \"Cannot convert argument\\n\");\nreturn 1;\n}\n/* pValue reference stolen here: */\nPyTuple_SetItem(pArgs, i, pValue);\n}\npValue = PyObject_CallObject(pFunc, pArgs);\nPy_DECREF(pArgs);\nif (pValue != NULL) {\nprintf(\"Result of call: %ld\\n\", PyLong_AsLong(pValue));\nPy_DECREF(pValue);\n}\nelse {\nPy_DECREF(pFunc);\nPy_DECREF(pModule);\nPyErr_Print();\nfprintf(stderr,\"Call failed\\n\");\nreturn 1;\n}\n}\nelse {\nif (PyErr_Occurred())\nPyErr_Print();\nfprintf(stderr, \"Cannot find function \\\"%s\\\"\\n\", argv[2]);\n}\nPy_XDECREF(pFunc);\nPy_DECREF(pModule);\n}\nelse {\nPyErr_Print();\nfprintf(stderr, \"Failed to load \\\"%s\\\"\\n\", argv[1]);\nreturn 1;\n}\nif (Py_FinalizeEx() < 0) {\nreturn 120;\n}\nreturn 0;\n}\nThis code loads a Python script using argv[1]\n, and calls the function named\nin argv[2]\n. Its integer arguments are the other values of the argv\narray. 
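In pure Python terms, the C driver above performs the equivalent of the following sketch. The helper name `call_function` is illustrative only (it is not part of any API); the comments name the C-API calls that perform each step in the real program:

```python
import importlib

def call_function(module_name, func_name, *string_args):
    """Mirror the C driver: import a module, look up a callable
    attribute, convert string arguments to int, and call it."""
    module = importlib.import_module(module_name)   # ~ PyImport_Import()
    func = getattr(module, func_name)               # ~ PyObject_GetAttrString()
    if not callable(func):                          # ~ PyCallable_Check()
        raise TypeError(f"{func_name!r} is not callable")
    args = tuple(int(a) for a in string_args)       # ~ the PyLong_FromLong() loop
    return func(*args)                              # ~ PyObject_CallObject()
```

For example, `call_function("operator", "mul", "3", "2")` returns `6`, matching the `Result of call` line the C program prints. One deliberate simplification: the C code uses `atoi()`, which silently yields `0` for malformed input, whereas `int()` raises `ValueError`.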
If you compile and link this program (let\u2019s call\nthe finished executable call), and use it to execute a Python\nscript, such as:\ndef multiply(a,b):\nprint(\"Will compute\", a, \"times\", b)\nc = 0\nfor i in range(0, a):\nc = c + b\nreturn c\nthen the result should be:\n$ call multiply multiply 3 2\nWill compute 3 times 2\nResult of call: 6\nAlthough the program is quite large for its functionality, most of the code is for data conversion between Python and C, and for error reporting. The interesting part with respect to embedding Python starts with\nPy_Initialize();\npName = PyUnicode_DecodeFSDefault(argv[1]);\n/* Error checking of pName left out */\npModule = PyImport_Import(pName);\nAfter initializing the interpreter, the script is loaded using\nPyImport_Import()\n. This routine needs a Python string as its argument,\nwhich is constructed using the PyUnicode_DecodeFSDefault()\ndata\nconversion routine.\npFunc = PyObject_GetAttrString(pModule, argv[2]);\n/* pFunc is a new reference */\nif (pFunc && PyCallable_Check(pFunc)) {\n...\n}\nPy_XDECREF(pFunc);\nOnce the script is loaded, the name we\u2019re looking for is retrieved using\nPyObject_GetAttrString()\n. If the name exists, and the object returned is\ncallable, you can safely assume that it is a function. The program then\nproceeds by constructing a tuple of arguments as normal. The call to the Python\nfunction is then made with:\npValue = PyObject_CallObject(pFunc, pArgs);\nUpon return of the function, pValue\nis either NULL\nor it contains a\nreference to the return value of the function. Be sure to release the reference\nafter examining the value.\n1.4. Extending Embedded Python\u00b6\nUntil now, the embedded Python interpreter had no access to functionality from the application itself. The Python API allows this by extending the embedded interpreter. That is, the embedded interpreter gets extended with routines provided by the application. While it sounds complex, it is not so bad. 
Simply forget for a while that the application starts the Python interpreter. Instead, consider the application to be a set of subroutines, and write some glue code that gives Python access to those routines, just like you would write a normal Python extension. For example:\nstatic int numargs=0;\n/* Return the number of arguments of the application command line */\nstatic PyObject*\nemb_numargs(PyObject *self, PyObject *args)\n{\nif(!PyArg_ParseTuple(args, \":numargs\"))\nreturn NULL;\nreturn PyLong_FromLong(numargs);\n}\nstatic PyMethodDef emb_module_methods[] = {\n{\"numargs\", emb_numargs, METH_VARARGS,\n\"Return the number of arguments received by the process.\"},\n{NULL, NULL, 0, NULL}\n};\nstatic struct PyModuleDef emb_module = {\n.m_base = PyModuleDef_HEAD_INIT,\n.m_name = \"emb\",\n.m_size = 0,\n.m_methods = emb_module_methods,\n};\nstatic PyObject*\nPyInit_emb(void)\n{\nreturn PyModuleDef_Init(&emb_module);\n}\nInsert the above code just above the main()\nfunction. Also, insert the\nfollowing two statements before the call to Py_Initialize()\n:\nnumargs = argc;\nPyImport_AppendInittab(\"emb\", &PyInit_emb);\nThese two lines initialize the numargs\nvariable, and make the\nemb.numargs()\nfunction accessible to the embedded Python interpreter.\nWith these extensions, the Python script can do things like\nimport emb\nprint(\"Number of arguments\", emb.numargs())\nIn a real application, the methods will expose an API of the application to Python.\n1.5. Embedding Python in C++\u00b6\nIt is also possible to embed Python in a C++ program; precisely how this is done will depend on the details of the C++ system used; in general you will need to write the main program in C++, and use the C++ compiler to compile and link your program. There is no need to recompile Python itself using C++.\n1.6. 
Compiling and Linking under Unix-like systems\u00b6\nIt is not necessarily trivial to find the right flags to pass to your\ncompiler (and linker) in order to embed the Python interpreter into your\napplication, particularly because Python needs to load library modules\nimplemented as C dynamic extensions (.so\nfiles) linked against\nit.\nTo find out the required compiler and linker flags, you can execute the\npythonX.Y-config\nscript which is generated as part of the\ninstallation process (a python3-config\nscript may also be\navailable). This script has several options, of which the following will\nbe directly useful to you:\npythonX.Y-config --cflags\nwill give you the recommended flags when compiling:$ /opt/bin/python3.11-config --cflags -I/opt/include/python3.11 -I/opt/include/python3.11 -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall\npythonX.Y-config --ldflags --embed\nwill give you the recommended flags when linking:$ /opt/bin/python3.11-config --ldflags --embed -L/opt/lib/python3.11/config-3.11-x86_64-linux-gnu -L/opt/lib -lpython3.11 -lpthread -ldl -lutil -lm\nNote\nTo avoid confusion between several Python installations (and especially\nbetween the system Python and your own compiled Python), it is recommended\nthat you use the absolute path to pythonX.Y-config\n, as in the above\nexample.\nIf this procedure doesn\u2019t work for you (it is not guaranteed to work for\nall Unix-like platforms; however, we welcome bug reports)\nyou will have to read your system\u2019s documentation about dynamic linking and/or\nexamine Python\u2019s Makefile\n(use sysconfig.get_makefile_filename()\nto find its location) and compilation\noptions. In this case, the sysconfig\nmodule is a useful tool to\nprogrammatically extract the configuration values that you will want to\ncombine together. 
For example:\n>>> import sysconfig\n>>> sysconfig.get_config_var('LIBS')\n'-lpthread -ldl -lutil'\n>>> sysconfig.get_config_var('LINKFORSHARED')\n'-Xlinker -export-dynamic'", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3228} +{"url": "https://docs.python.org/3/c-api/list.html", "title": "List Objects", "content": "List Objects\u00b6\n-\nPyTypeObject PyList_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python list type. This is the same object aslist\nin the Python layer.\n-\nint PyList_Check(PyObject *p)\u00b6\nReturn true if p is a list object or an instance of a subtype of the list type. This function always succeeds.\n-\nint PyList_CheckExact(PyObject *p)\u00b6\nReturn true if p is a list object, but not an instance of a subtype of the list type. This function always succeeds.\n-\nPyObject *PyList_New(Py_ssize_t len)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new list of length len on success, or\nNULL\non failure.Note\nIf len is greater than zero, the returned list object\u2019s items are set to\nNULL\n. Thus you cannot use abstract API functions such asPySequence_SetItem()\nor expose the object to Python code before setting all items to a real object withPyList_SetItem()\norPyList_SET_ITEM()\n. The following APIs are safe APIs before the list is fully initialized:PyList_SetItem()\nandPyList_SET_ITEM()\n.\n-\nPy_ssize_t PyList_Size(PyObject *list)\u00b6\n- Part of the Stable ABI.\nReturn the length of the list object in list; this is equivalent to\nlen(list)\non a list object.\n-\nPy_ssize_t PyList_GET_SIZE(PyObject *list)\u00b6\nSimilar to\nPyList_Size()\n, but without error checking.\n-\nPyObject *PyList_GetItemRef(PyObject *list, Py_ssize_t index)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.13.\nReturn the object at position index in the list pointed to by list. 
The position must be non-negative; indexing from the end of the list is not supported. If index is out of bounds (\n<0 or >=len(list)\n), returnNULL\nand set anIndexError\nexception.Added in version 3.13.\n-\nPyObject *PyList_GetItem(PyObject *list, Py_ssize_t index)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nLike\nPyList_GetItemRef()\n, but returns a borrowed reference instead of a strong reference.\n-\nPyObject *PyList_GET_ITEM(PyObject *list, Py_ssize_t i)\u00b6\n- Return value: Borrowed reference.\nSimilar to\nPyList_GetItem()\n, but without error checking.\n-\nint PyList_SetItem(PyObject *list, Py_ssize_t index, PyObject *item)\u00b6\n- Part of the Stable ABI.\nSet the item at index index in list to item. Return\n0\non success. If index is out of bounds, return-1\nand set anIndexError\nexception.Note\nThis function \u201csteals\u201d a reference to item and discards a reference to an item already in the list at the affected position.\n-\nvoid PyList_SET_ITEM(PyObject *list, Py_ssize_t i, PyObject *o)\u00b6\nMacro form of\nPyList_SetItem()\nwithout error checking. This is normally only used to fill in new lists where there is no previous content.Bounds checking is performed as an assertion if Python is built in debug mode or\nwith assertions\n.Note\nThis macro \u201csteals\u201d a reference to item, and, unlike\nPyList_SetItem()\n, does not discard a reference to any item that is being replaced; any reference in list at position i will be leaked.\n-\nint PyList_Insert(PyObject *list, Py_ssize_t index, PyObject *item)\u00b6\n- Part of the Stable ABI.\nInsert the item item into list list in front of index index. Return\n0\nif successful; return-1\nand set an exception if unsuccessful. Analogous tolist.insert(index, item)\n.\n-\nint PyList_Append(PyObject *list, PyObject *item)\u00b6\n- Part of the Stable ABI.\nAppend the object item at the end of list list. Return\n0\nif successful; return-1\nand set an exception if unsuccessful. 
Analogous tolist.append(item)\n.\n-\nPyObject *PyList_GetSlice(PyObject *list, Py_ssize_t low, Py_ssize_t high)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a list of the objects in list containing the objects between low and high. Return\nNULL\nand set an exception if unsuccessful. Analogous tolist[low:high]\n. Indexing from the end of the list is not supported.\n-\nint PyList_SetSlice(PyObject *list, Py_ssize_t low, Py_ssize_t high, PyObject *itemlist)\u00b6\n- Part of the Stable ABI.\nSet the slice of list between low and high to the contents of itemlist. Analogous to\nlist[low:high] = itemlist\n. The itemlist may beNULL\n, indicating the assignment of an empty list (slice deletion). Return0\non success,-1\non failure. Indexing from the end of the list is not supported.\n-\nint PyList_Extend(PyObject *list, PyObject *iterable)\u00b6\nExtend list with the contents of iterable. This is the same as\nPyList_SetSlice(list, PY_SSIZE_T_MAX, PY_SSIZE_T_MAX, iterable)\nand analogous tolist.extend(iterable)\norlist += iterable\n.Raise an exception and return\n-1\nif list is not alist\nobject. Return 0 on success.Added in version 3.13.\n-\nint PyList_Clear(PyObject *list)\u00b6\nRemove all items from list. This is the same as\nPyList_SetSlice(list, 0, PY_SSIZE_T_MAX, NULL)\nand analogous tolist.clear()\nordel list[:]\n.Raise an exception and return\n-1\nif list is not alist\nobject. Return 0 on success.Added in version 3.13.\n-\nint PyList_Sort(PyObject *list)\u00b6\n- Part of the Stable ABI.\nSort the items of list in place. Return\n0\non success,-1\non failure. This is equivalent tolist.sort()\n.\n-\nint PyList_Reverse(PyObject *list)\u00b6\n- Part of the Stable ABI.\nReverse the items of list in place. Return\n0\non success,-1\non failure. This is the equivalent oflist.reverse()\n.\n-\nPyObject *PyList_AsTuple(PyObject *list)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nReturn a new tuple object containing the contents of list; equivalent to\ntuple(list)\n.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1331} +{"url": "https://docs.python.org/3/using/editors.html", "title": "Editors and IDEs", "content": "8. Editors and IDEs\u00b6\nThere are a number of IDEs that support Python programming language. Many editors and IDEs provide syntax highlighting, debugging tools, and PEP 8 checks.\n8.1. IDLE \u2014 Python editor and shell\u00b6\nIDLE is Python\u2019s Integrated Development and Learning Environment and is generally bundled with Python installs. If you are on Linux and do not have IDLE installed see Installing IDLE on Linux. For more information see the IDLE docs.\n8.2. Other Editors and IDEs\u00b6\nPython\u2019s community wiki has information submitted by the community on Editors and IDEs. Please go to Python Editors and Integrated Development Environments for a comprehensive list.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 164} +{"url": "https://docs.python.org/3/library/pickletools.html", "title": " \u2014 Tools for pickle developers", "content": "pickletools\n\u2014 Tools for pickle developers\u00b6\nSource code: Lib/pickletools.py\nThis module contains various constants relating to the intimate details of the\npickle\nmodule, some lengthy comments about the implementation, and a\nfew useful functions for analyzing pickled data. The contents of this module\nare useful for Python core developers who are working on the pickle\n;\nordinary users of the pickle\nmodule probably won\u2019t find the\npickletools\nmodule relevant.\nCommand-line usage\u00b6\nAdded in version 3.2.\nWhen invoked from the command line, python -m pickletools\nwill\ndisassemble the contents of one or more pickle files. 
Note that if\nyou want to see the Python object stored in the pickle rather than the\ndetails of pickle format, you may want to use -m pickle\ninstead.\nHowever, when the pickle file that you want to examine comes from an\nuntrusted source, -m pickletools\nis a safer option because it does\nnot execute pickle bytecode.\nFor example, with a tuple (1, 2)\npickled in file x.pickle\n:\n$ python -m pickle x.pickle\n(1, 2)\n$ python -m pickletools x.pickle\n0: \\x80 PROTO 3\n2: K BININT1 1\n4: K BININT1 2\n6: \\x86 TUPLE2\n7: q BINPUT 0\n9: . STOP\nhighest protocol among opcodes = 2\nCommand-line options\u00b6\n- -a, --annotate\u00b6\nAnnotate each line with a short opcode description.\n- -o, --output=\u00b6\nName of a file where the output should be written.\n- -l, --indentlevel=\u00b6\nThe number of blanks by which to indent a new MARK level.\n- -m, --memo\u00b6\nWhen multiple objects are disassembled, preserve memo between disassemblies.\n- -p, --preamble=\u00b6\nWhen more than one pickle file is specified, print given preamble before each disassembly.\n- pickle_file\u00b6\nA pickle file to read, or\n-\nto indicate reading from standard input.\nProgrammatic interface\u00b6\n- pickletools.dis(pickle, out=None, memo=None, indentlevel=4, annotate=0)\u00b6\nOutputs a symbolic disassembly of the pickle to the file-like object out, defaulting to\nsys.stdout\n. pickle can be a string or a file-like object. memo can be a Python dictionary that will be used as the pickle\u2019s memo; it can be used to perform disassemblies across multiple pickles created by the same pickler. Successive levels, indicated byMARK\nopcodes in the stream, are indented by indentlevel spaces. If a nonzero value is given to annotate, each opcode in the output is annotated with a short description. 
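The same disassembly can be produced from Python by feeding pickled bytes to dis(). A minimal sketch (the exact opcode listing depends on the pickle protocol, and the annotate value of 30 is just an illustrative column hint):

```python
import io
import pickle
import pickletools

# Pickle the same tuple as the x.pickle example, then disassemble it
# in memory instead of going through a file.
data = pickle.dumps((1, 2), protocol=3)
buf = io.StringIO()
pickletools.dis(data, out=buf, annotate=30)  # annotate adds short opcode descriptions
listing = buf.getvalue()
print(listing)
```

The listing ends with the same `highest protocol among opcodes` summary line that the command-line tool prints.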
The value of annotate is used as a hint for the column where annotation should start.Changed in version 3.2: Added the annotate parameter.\n- pickletools.genops(pickle)\u00b6\nProvides an iterator over all of the opcodes in a pickle, returning a sequence of\n(opcode, arg, pos)\ntriples. opcode is an instance of anOpcodeInfo\nclass; arg is the decoded value, as a Python object, of the opcode\u2019s argument; pos is the position at which this opcode is located. pickle can be a string or a file-like object.\n- pickletools.optimize(picklestring)\u00b6\nReturns a new equivalent pickle string after eliminating unused\nPUT\nopcodes. The optimized pickle is shorter, takes less transmission time, requires less storage space, and unpickles more efficiently.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 768} +{"url": "https://docs.python.org/3/license.html", "title": "History and License", "content": "History and License\u00b6\nHistory of the software\u00b6\nPython was created in the early 1990s by Guido van Rossum at Stichting Mathematisch Centrum (CWI, see https://www.cwi.nl) in the Netherlands as a successor of a language called ABC. Guido remains Python\u2019s principal author, although it includes many contributions from others.\nIn 1995, Guido continued his work on Python at the Corporation for National Research Initiatives (CNRI, see https://www.cnri.reston.va.us) in Reston, Virginia where he released several versions of the software.\nIn May 2000, Guido and the Python core development team moved to BeOpen.com to form the BeOpen PythonLabs team. In October of the same year, the PythonLabs team moved to Digital Creations, which became Zope Corporation. In 2001, the Python Software Foundation (PSF, see https://www.python.org/psf/) was formed, a non-profit organization created specifically to own Python-related Intellectual Property. 
Zope Corporation was a sponsoring member of the PSF.\nAll Python releases are Open Source (see https://opensource.org for the Open Source Definition). Historically, most, but not all, Python releases have also been GPL-compatible; the table below summarizes the various releases.\nRelease |\nDerived from |\nYear |\nOwner |\nGPL-compatible? (1) |\n|---|---|---|---|---|\n0.9.0 thru 1.2 |\nn/a |\n1991-1995 |\nCWI |\nyes |\n1.3 thru 1.5.2 |\n1.2 |\n1995-1999 |\nCNRI |\nyes |\n1.6 |\n1.5.2 |\n2000 |\nCNRI |\nno |\n2.0 |\n1.6 |\n2000 |\nBeOpen.com |\nno |\n1.6.1 |\n1.6 |\n2001 |\nCNRI |\nyes (2) |\n2.1 |\n2.0+1.6.1 |\n2001 |\nPSF |\nno |\n2.0.1 |\n2.0+1.6.1 |\n2001 |\nPSF |\nyes |\n2.1.1 |\n2.1+2.0.1 |\n2001 |\nPSF |\nyes |\n2.1.2 |\n2.1.1 |\n2002 |\nPSF |\nyes |\n2.1.3 |\n2.1.2 |\n2002 |\nPSF |\nyes |\n2.2 and above |\n2.1.1 |\n2001-now |\nPSF |\nyes |\nNote\nGPL-compatible doesn\u2019t mean that we\u2019re distributing Python under the GPL. All Python licenses, unlike the GPL, let you distribute a modified version without making your changes open source. The GPL-compatible licenses make it possible to combine Python with other software that is released under the GPL; the others don\u2019t.\nAccording to Richard Stallman, 1.6.1 is not GPL-compatible, because its license has a choice of law clause. According to CNRI, however, Stallman\u2019s lawyer has told CNRI\u2019s lawyer that 1.6.1 is \u201cnot incompatible\u201d with the GPL.\nThanks to the many outside volunteers who have worked under Guido\u2019s direction to make these releases possible.\nTerms and conditions for accessing or otherwise using Python\u00b6\nPython software and documentation are licensed under the Python Software Foundation License Version 2.\nStarting with Python 3.8.6, examples, recipes, and other code in the documentation are dual licensed under the PSF License Version 2 and the Zero-Clause BSD license.\nSome software incorporated into Python is under different licenses. 
The licenses are listed with code falling under that license. See Licenses and Acknowledgements for Incorporated Software for an incomplete list of these licenses.\nPYTHON SOFTWARE FOUNDATION LICENSE VERSION 2\u00b6\n1. This LICENSE AGREEMENT is between the Python Software Foundation (\"PSF\"), and\nthe Individual or Organization (\"Licensee\") accessing and otherwise using this\nsoftware (\"Python\") in source or binary form and its associated documentation.\n2. Subject to the terms and conditions of this License Agreement, PSF hereby\ngrants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,\nanalyze, test, perform and/or display publicly, prepare derivative works,\ndistribute, and otherwise use Python alone or in any derivative\nversion, provided, however, that PSF's License Agreement and PSF's notice of\ncopyright, i.e., \"Copyright \u00a9 2001 Python Software Foundation; All Rights\nReserved\" are retained in Python alone or in any derivative version\nprepared by Licensee.\n3. In the event Licensee prepares a derivative work that is based on or\nincorporates Python or any part thereof, and wants to make the\nderivative work available to others as provided herein, then Licensee hereby\nagrees to include in any such work a brief summary of the changes made to Python.\n4. PSF is making Python available to Licensee on an \"AS IS\" basis.\nPSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF\nEXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR\nWARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE\nUSE OF PYTHON WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.\n5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON\nFOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF\nMODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, OR ANY DERIVATIVE\nTHEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.\n6. 
This License Agreement will automatically terminate upon a material breach of\nits terms and conditions.\n7. Nothing in this License Agreement shall be deemed to create any relationship\nof agency, partnership, or joint venture between PSF and Licensee. This License\nAgreement does not grant permission to use PSF trademarks or trade name in a\ntrademark sense to endorse or promote products or services of Licensee, or any\nthird party.\n8. By copying, installing or otherwise using Python, Licensee agrees\nto be bound by the terms and conditions of this License Agreement.\nBEOPEN.COM LICENSE AGREEMENT FOR PYTHON 2.0\u00b6\nBEOPEN PYTHON OPEN SOURCE LICENSE AGREEMENT VERSION 1\n1. This LICENSE AGREEMENT is between BeOpen.com (\"BeOpen\"), having an office at\n160 Saratoga Avenue, Santa Clara, CA 95051, and the Individual or Organization\n(\"Licensee\") accessing and otherwise using this software in source or binary\nform and its associated documentation (\"the Software\").\n2. Subject to the terms and conditions of this BeOpen Python License Agreement,\nBeOpen hereby grants Licensee a non-exclusive, royalty-free, world-wide license\nto reproduce, analyze, test, perform and/or display publicly, prepare derivative\nworks, distribute, and otherwise use the Software alone or in any derivative\nversion, provided, however, that the BeOpen Python License is retained in the\nSoftware, alone or in any derivative version prepared by Licensee.\n3. BeOpen is making the Software available to Licensee on an \"AS IS\" basis.\nBEOPEN MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF\nEXAMPLE, BUT NOT LIMITATION, BEOPEN MAKES NO AND DISCLAIMS ANY REPRESENTATION OR\nWARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE\nUSE OF THE SOFTWARE WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.\n4. 
BEOPEN SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE SOFTWARE FOR\nANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF USING,\nMODIFYING OR DISTRIBUTING THE SOFTWARE, OR ANY DERIVATIVE THEREOF, EVEN IF\nADVISED OF THE POSSIBILITY THEREOF.\n5. This License Agreement will automatically terminate upon a material breach of\nits terms and conditions.\n6. This License Agreement shall be governed by and interpreted in all respects\nby the law of the State of California, excluding conflict of law provisions.\nNothing in this License Agreement shall be deemed to create any relationship of\nagency, partnership, or joint venture between BeOpen and Licensee. This License\nAgreement does not grant permission to use BeOpen trademarks or trade names in a\ntrademark sense to endorse or promote products or services of Licensee, or any\nthird party. As an exception, the \"BeOpen Python\" logos available at\nhttp://www.pythonlabs.com/logos.html may be used according to the permissions\ngranted on that web page.\n7. By copying, installing or otherwise using the software, Licensee agrees to be\nbound by the terms and conditions of this License Agreement.\nCNRI LICENSE AGREEMENT FOR PYTHON 1.6.1\u00b6\n1. This LICENSE AGREEMENT is between the Corporation for National Research\nInitiatives, having an office at 1895 Preston White Drive, Reston, VA 20191\n(\"CNRI\"), and the Individual or Organization (\"Licensee\") accessing and\notherwise using Python 1.6.1 software in source or binary form and its\nassociated documentation.\n2. 
Subject to the terms and conditions of this License Agreement, CNRI hereby\ngrants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,\nanalyze, test, perform and/or display publicly, prepare derivative works,\ndistribute, and otherwise use Python 1.6.1 alone or in any derivative version,\nprovided, however, that CNRI's License Agreement and CNRI's notice of copyright,\ni.e., \"Copyright \u00a9 1995-2001 Corporation for National Research Initiatives; All\nRights Reserved\" are retained in Python 1.6.1 alone or in any derivative version\nprepared by Licensee. Alternately, in lieu of CNRI's License Agreement,\nLicensee may substitute the following text (omitting the quotes): \"Python 1.6.1\nis made available subject to the terms and conditions in CNRI's License\nAgreement. This Agreement together with Python 1.6.1 may be located on the\ninternet using the following unique, persistent identifier (known as a handle):\n1895.22/1013. This Agreement may also be obtained from a proxy server on the\ninternet using the following URL: http://hdl.handle.net/1895.22/1013\".\n3. In the event Licensee prepares a derivative work that is based on or\nincorporates Python 1.6.1 or any part thereof, and wants to make the derivative\nwork available to others as provided herein, then Licensee hereby agrees to\ninclude in any such work a brief summary of the changes made to Python 1.6.1.\n4. CNRI is making Python 1.6.1 available to Licensee on an \"AS IS\" basis. CNRI\nMAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE,\nBUT NOT LIMITATION, CNRI MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY\nOF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF\nPYTHON 1.6.1 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.\n5. 
CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON 1.6.1 FOR\nANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF\nMODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 1.6.1, OR ANY DERIVATIVE\nTHEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.\n6. This License Agreement will automatically terminate upon a material breach of\nits terms and conditions.\n7. This License Agreement shall be governed by the federal intellectual property\nlaw of the United States, including without limitation the federal copyright\nlaw, and, to the extent such U.S. federal law does not apply, by the law of the\nCommonwealth of Virginia, excluding Virginia's conflict of law provisions.\nNotwithstanding the foregoing, with regard to derivative works based on Python\n1.6.1 that incorporate non-separable material that was previously distributed\nunder the GNU General Public License (GPL), the law of the Commonwealth of\nVirginia shall govern this License Agreement only as to issues arising under or\nwith respect to Paragraphs 4, 5, and 7 of this License Agreement. Nothing in\nthis License Agreement shall be deemed to create any relationship of agency,\npartnership, or joint venture between CNRI and Licensee. This License Agreement\ndoes not grant permission to use CNRI trademarks or trade name in a trademark\nsense to endorse or promote products or services of Licensee, or any third\nparty.\n8. By clicking on the \"ACCEPT\" button where indicated, or by copying, installing\nor otherwise using Python 1.6.1, Licensee agrees to be bound by the terms and\nconditions of this License Agreement.\nCWI LICENSE AGREEMENT FOR PYTHON 0.9.0 THROUGH 1.2\u00b6\nCopyright \u00a9 1991 - 1995, Stichting Mathematisch Centrum Amsterdam, The\nNetherlands. 
All rights reserved.\nPermission to use, copy, modify, and distribute this software and its\ndocumentation for any purpose and without fee is hereby granted, provided that\nthe above copyright notice appear in all copies and that both that copyright\nnotice and this permission notice appear in supporting documentation, and that\nthe name of Stichting Mathematisch Centrum or CWI not be used in advertising or\npublicity pertaining to distribution of the software without specific, written\nprior permission.\nSTICHTING MATHEMATISCH CENTRUM DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS\nSOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO\nEVENT SHALL STICHTING MATHEMATISCH CENTRUM BE LIABLE FOR ANY SPECIAL, INDIRECT\nOR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE,\nDATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS\nACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS\nSOFTWARE.\nZERO-CLAUSE BSD LICENSE FOR CODE IN THE PYTHON DOCUMENTATION\u00b6\nPermission to use, copy, modify, and/or distribute this software for any\npurpose with or without fee is hereby granted.\nTHE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH\nREGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY\nAND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,\nINDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM\nLOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR\nOTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR\nPERFORMANCE OF THIS SOFTWARE.\nLicenses and Acknowledgements for Incorporated Software\u00b6\nThis section is an incomplete, but growing list of licenses and acknowledgements for third-party software incorporated in the Python distribution.\nMersenne Twister\u00b6\nThe _random\nC extension underlying the random\nmodule\nincludes code based on a download from\nhttp://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/MT2002/emt19937ar.html. The following are\nthe verbatim comments from the original code:\nA C-program for MT19937, with initialization improved 2002/1/26.\nCoded by Takuji Nishimura and Makoto Matsumoto.\nBefore using, initialize the state by using init_genrand(seed)\nor init_by_array(init_key, key_length).\nCopyright (C) 1997 - 2002, Makoto Matsumoto and Takuji Nishimura,\nAll rights reserved.\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions\nare met:\n1. Redistributions of source code must retain the above copyright\nnotice, this list of conditions and the following disclaimer.\n2. Redistributions in binary form must reproduce the above copyright\nnotice, this list of conditions and the following disclaimer in the\ndocumentation and/or other materials provided with the distribution.\n3. The names of its contributors may not be used to endorse or promote\nproducts derived from this software without specific prior written\npermission.\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n\"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\nLIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\nA PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR\nCONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\nEXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\nPROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\nPROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\nLIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\nNEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\nAny feedback is very welcome.\nhttp://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/emt.html\nemail: m-mat @ math.sci.hiroshima-u.ac.jp (remove space)\nSockets\u00b6\nThe socket\nmodule uses the functions getaddrinfo()\nand\ngetnameinfo()\n, which are coded in separate source files from the WIDE\nProject, https://www.wide.ad.jp/.\nCopyright (C) 1995, 1996, 1997, and 1998 WIDE Project.\nAll rights reserved.\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions\nare met:\n1. Redistributions of source code must retain the above copyright\nnotice, this list of conditions and the following disclaimer.\n2. Redistributions in binary form must reproduce the above copyright\nnotice, this list of conditions and the following disclaimer in the\ndocumentation and/or other materials provided with the distribution.\n3. Neither the name of the project nor the names of its contributors\nmay be used to endorse or promote products derived from this software\nwithout specific prior written permission.\nTHIS SOFTWARE IS PROVIDED BY THE PROJECT AND CONTRIBUTORS \"AS IS\" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\nARE DISCLAIMED. 
IN NO EVENT SHALL THE PROJECT OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\nOR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\nHOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\nLIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\nOUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\nSUCH DAMAGE.\nAsynchronous socket services\u00b6\nThe test.support.asynchat\nand test.support.asyncore\nmodules contain the following notice:\nCopyright 1996 by Sam Rushing\nAll Rights Reserved\nPermission to use, copy, modify, and distribute this software and\nits documentation for any purpose and without fee is hereby\ngranted, provided that the above copyright notice appear in all\ncopies and that both that copyright notice and this permission\nnotice appear in supporting documentation, and that the name of Sam\nRushing not be used in advertising or publicity pertaining to\ndistribution of the software without specific, written prior\npermission.\nSAM RUSHING DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,\nINCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN\nNO EVENT SHALL SAM RUSHING BE LIABLE FOR ANY SPECIAL, INDIRECT OR\nCONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS\nOF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,\nNEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN\nCONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\nExecution tracing\u00b6\nThe trace\nmodule contains the following notice:\nportions copyright 2001, Autonomous Zones Industries, Inc., all rights...\nerr... 
reserved and offered to the public under the terms of the\nPython 2.2 license.\nAuthor: Zooko O'Whielacronx\nhttp://zooko.com/\nmailto:zooko@zooko.com\nCopyright 2000, Mojam Media, Inc., all rights reserved.\nAuthor: Skip Montanaro\nCopyright 1999, Bioreason, Inc., all rights reserved.\nAuthor: Andrew Dalke\nCopyright 1995-1997, Automatrix, Inc., all rights reserved.\nAuthor: Skip Montanaro\nCopyright 1991-1995, Stichting Mathematisch Centrum, all rights reserved.\nPermission to use, copy, modify, and distribute this Python software and\nits associated documentation for any purpose without fee is hereby\ngranted, provided that the above copyright notice appears in all copies,\nand that both that copyright notice and this permission notice appear in\nsupporting documentation, and that the name of neither Automatrix,\nBioreason or Mojam Media be used in advertising or publicity pertaining to\ndistribution of the software without specific, written prior permission.\nUUencode and UUdecode functions\u00b6\nThe uu\ncodec contains the following notice:\nCopyright 1994 by Lance Ellinghouse\nCathedral City, California Republic, United States of America.\nAll Rights Reserved\nPermission to use, copy, modify, and distribute this software and its\ndocumentation for any purpose and without fee is hereby granted,\nprovided that the above copyright notice appear in all copies and that\nboth that copyright notice and this permission notice appear in\nsupporting documentation, and that the name of Lance Ellinghouse\nnot be used in advertising or publicity pertaining to distribution\nof the software without specific, written prior permission.\nLANCE ELLINGHOUSE DISCLAIMS ALL WARRANTIES WITH REGARD TO\nTHIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND\nFITNESS, IN NO EVENT SHALL LANCE ELLINGHOUSE CENTRUM BE LIABLE\nFOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES\nWHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN\nACTION OF 
CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT\nOF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\nModified by Jack Jansen, CWI, July 1995:\n- Use binascii module to do the actual line-by-line conversion\nbetween ascii and binary. This results in a 1000-fold speedup. The C\nversion is still 5 times faster, though.\n- Arguments more compliant with Python standard\nXML Remote Procedure Calls\u00b6\nThe xmlrpc.client\nmodule contains the following notice:\nThe XML-RPC client interface is\nCopyright (c) 1999-2002 by Secret Labs AB\nCopyright (c) 1999-2002 by Fredrik Lundh\nBy obtaining, using, and/or copying this software and/or its\nassociated documentation, you agree that you have read, understood,\nand will comply with the following terms and conditions:\nPermission to use, copy, modify, and distribute this software and\nits associated documentation for any purpose and without fee is\nhereby granted, provided that the above copyright notice appears in\nall copies, and that both that copyright notice and this permission\nnotice appear in supporting documentation, and that the name of\nSecret Labs AB or the author not be used in advertising or publicity\npertaining to distribution of the software without specific, written\nprior permission.\nSECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD\nTO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-\nABILITY AND FITNESS. 
IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR\nBE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY\nDAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,\nWHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS\nACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE\nOF THIS SOFTWARE.\ntest_epoll\u00b6\nThe test.test_epoll\nmodule contains the following notice:\nCopyright (c) 2001-2006 Twisted Matrix Laboratories.\nPermission is hereby granted, free of charge, to any person obtaining\na copy of this software and associated documentation files (the\n\"Software\"), to deal in the Software without restriction, including\nwithout limitation the rights to use, copy, modify, merge, publish,\ndistribute, sublicense, and/or sell copies of the Software, and to\npermit persons to whom the Software is furnished to do so, subject to\nthe following conditions:\nThe above copyright notice and this permission notice shall be\nincluded in all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\nEXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\nMERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND\nNONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE\nLIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\nOF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION\nWITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\nSelect kqueue\u00b6\nThe select\nmodule contains the following notice for the kqueue\ninterface:\nCopyright (c) 2000 Doug White, 2006 James Knight, 2007 Christian Heimes\nAll rights reserved.\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions\nare met:\n1. Redistributions of source code must retain the above copyright\nnotice, this list of conditions and the following disclaimer.\n2. 
Redistributions in binary form must reproduce the above copyright\nnotice, this list of conditions and the following disclaimer in the\ndocumentation and/or other materials provided with the distribution.\nTHIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS \"AS IS\" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\nARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\nOR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\nHOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\nLIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\nOUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\nSUCH DAMAGE.\nSipHash24\u00b6\nThe file Python/pyhash.c\ncontains Marek Majkowski\u2019s implementation of\nDan Bernstein\u2019s SipHash24 algorithm. 
It contains the following note:\n\nCopyright (c) 2013 Marek Majkowski \nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nOriginal location:\nhttps://github.com/majek/csiphash/\nSolution inspired by code from:\nSamuel Neves (supercop/crypto_auth/siphash24/little)\ndjb (supercop/crypto_auth/siphash24/little2)\nJean-Philippe Aumasson (https://131002.net/siphash/siphash24.c)\nstrtod and dtoa\u00b6\nThe file Python/dtoa.c\n, which supplies C functions dtoa and\nstrtod for conversion of C doubles to and from strings, is derived\nfrom the file of the same name by David M. Gay, currently available\nfrom https://web.archive.org/web/20220517033456/http://www.netlib.org/fp/dtoa.c.\nThe original file, as retrieved on March 16, 2009, contains the following\ncopyright and licensing notice:\n/****************************************************************\n*\n* The author of this software is David M. Gay.\n*\n* Copyright (c) 1991, 2000, 2001 by Lucent Technologies.\n*\n* Permission to use, copy, modify, and distribute this software for any\n* purpose without fee is hereby granted, provided that this entire notice\n* is included in all copies of any software which is or includes a copy\n* or modification of this software and in all copies of the supporting\n* documentation for such software.\n*\n* THIS SOFTWARE IS BEING PROVIDED \"AS IS\", WITHOUT ANY EXPRESS OR IMPLIED\n* WARRANTY. 
IN PARTICULAR, NEITHER THE AUTHOR NOR LUCENT MAKES ANY\n* REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE MERCHANTABILITY\n* OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR PURPOSE.\n*\n***************************************************************/\nOpenSSL\u00b6\nThe modules hashlib\n, posix\nand ssl\nuse\nthe OpenSSL library for added performance if made available by the\noperating system. Additionally, the Windows and macOS installers for\nPython may include a copy of the OpenSSL libraries, so we include a copy\nof the OpenSSL license here. For the OpenSSL 3.0 release,\nand later releases derived from that, the Apache License v2 applies:\nApache License\nVersion 2.0, January 2004\nhttps://www.apache.org/licenses/\nTERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n1. Definitions.\n\"License\" shall mean the terms and conditions for use, reproduction,\nand distribution as defined by Sections 1 through 9 of this document.\n\"Licensor\" shall mean the copyright owner or entity authorized by\nthe copyright owner that is granting the License.\n\"Legal Entity\" shall mean the union of the acting entity and all\nother entities that control, are controlled by, or are under common\ncontrol with that entity. 
For the purposes of this definition,\n\"control\" means (i) the power, direct or indirect, to cause the\ndirection or management of such entity, whether by contract or\notherwise, or (ii) ownership of fifty percent (50%) or more of the\noutstanding shares, or (iii) beneficial ownership of such entity.\n\"You\" (or \"Your\") shall mean an individual or Legal Entity\nexercising permissions granted by this License.\n\"Source\" form shall mean the preferred form for making modifications,\nincluding but not limited to software source code, documentation\nsource, and configuration files.\n\"Object\" form shall mean any form resulting from mechanical\ntransformation or translation of a Source form, including but\nnot limited to compiled object code, generated documentation,\nand conversions to other media types.\n\"Work\" shall mean the work of authorship, whether in Source or\nObject form, made available under the License, as indicated by a\ncopyright notice that is included in or attached to the work\n(an example is provided in the Appendix below).\n\"Derivative Works\" shall mean any work, whether in Source or Object\nform, that is based on (or derived from) the Work and for which the\neditorial revisions, annotations, elaborations, or other modifications\nrepresent, as a whole, an original work of authorship. For the purposes\nof this License, Derivative Works shall not include works that remain\nseparable from, or merely link (or bind by name) to the interfaces of,\nthe Work and Derivative Works thereof.\n\"Contribution\" shall mean any work of authorship, including\nthe original version of the Work and any modifications or additions\nto that Work or Derivative Works thereof, that is intentionally\nsubmitted to Licensor for inclusion in the Work by the copyright owner\nor by an individual or Legal Entity authorized to submit on behalf of\nthe copyright owner. 
For the purposes of this definition, \"submitted\"\nmeans any form of electronic, verbal, or written communication sent\nto the Licensor or its representatives, including but not limited to\ncommunication on electronic mailing lists, source code control systems,\nand issue tracking systems that are managed by, or on behalf of, the\nLicensor for the purpose of discussing and improving the Work, but\nexcluding communication that is conspicuously marked or otherwise\ndesignated in writing by the copyright owner as \"Not a Contribution.\"\n\"Contributor\" shall mean Licensor and any individual or Legal Entity\non behalf of whom a Contribution has been received by Licensor and\nsubsequently incorporated within the Work.\n2. Grant of Copyright License. Subject to the terms and conditions of\nthis License, each Contributor hereby grants to You a perpetual,\nworldwide, non-exclusive, no-charge, royalty-free, irrevocable\ncopyright license to reproduce, prepare Derivative Works of,\npublicly display, publicly perform, sublicense, and distribute the\nWork and such Derivative Works in Source or Object form.\n3. Grant of Patent License. Subject to the terms and conditions of\nthis License, each Contributor hereby grants to You a perpetual,\nworldwide, non-exclusive, no-charge, royalty-free, irrevocable\n(except as stated in this section) patent license to make, have made,\nuse, offer to sell, sell, import, and otherwise transfer the Work,\nwhere such license applies only to those patent claims licensable\nby such Contributor that are necessarily infringed by their\nContribution(s) alone or by combination of their Contribution(s)\nwith the Work to which such Contribution(s) was submitted. 
If You\ninstitute patent litigation against any entity (including a\ncross-claim or counterclaim in a lawsuit) alleging that the Work\nor a Contribution incorporated within the Work constitutes direct\nor contributory patent infringement, then any patent licenses\ngranted to You under this License for that Work shall terminate\nas of the date such litigation is filed.\n4. Redistribution. You may reproduce and distribute copies of the\nWork or Derivative Works thereof in any medium, with or without\nmodifications, and in Source or Object form, provided that You\nmeet the following conditions:\n(a) You must give any other recipients of the Work or\nDerivative Works a copy of this License; and\n(b) You must cause any modified files to carry prominent notices\nstating that You changed the files; and\n(c) You must retain, in the Source form of any Derivative Works\nthat You distribute, all copyright, patent, trademark, and\nattribution notices from the Source form of the Work,\nexcluding those notices that do not pertain to any part of\nthe Derivative Works; and\n(d) If the Work includes a \"NOTICE\" text file as part of its\ndistribution, then any Derivative Works that You distribute must\ninclude a readable copy of the attribution notices contained\nwithin such NOTICE file, excluding those notices that do not\npertain to any part of the Derivative Works, in at least one\nof the following places: within a NOTICE text file distributed\nas part of the Derivative Works; within the Source form or\ndocumentation, if provided along with the Derivative Works; or,\nwithin a display generated by the Derivative Works, if and\nwherever such third-party notices normally appear. The contents\nof the NOTICE file are for informational purposes only and\ndo not modify the License. 
You may add Your own attribution\nnotices within Derivative Works that You distribute, alongside\nor as an addendum to the NOTICE text from the Work, provided\nthat such additional attribution notices cannot be construed\nas modifying the License.\nYou may add Your own copyright statement to Your modifications and\nmay provide additional or different license terms and conditions\nfor use, reproduction, or distribution of Your modifications, or\nfor any such Derivative Works as a whole, provided Your use,\nreproduction, and distribution of the Work otherwise complies with\nthe conditions stated in this License.\n5. Submission of Contributions. Unless You explicitly state otherwise,\nany Contribution intentionally submitted for inclusion in the Work\nby You to the Licensor shall be under the terms and conditions of\nthis License, without any additional terms or conditions.\nNotwithstanding the above, nothing herein shall supersede or modify\nthe terms of any separate license agreement you may have executed\nwith Licensor regarding such Contributions.\n6. Trademarks. This License does not grant permission to use the trade\nnames, trademarks, service marks, or product names of the Licensor,\nexcept as required for reasonable and customary use in describing the\norigin of the Work and reproducing the content of the NOTICE file.\n7. Disclaimer of Warranty. Unless required by applicable law or\nagreed to in writing, Licensor provides the Work (and each\nContributor provides its Contributions) on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied, including, without limitation, any warranties or conditions\nof TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\nPARTICULAR PURPOSE. You are solely responsible for determining the\nappropriateness of using or redistributing the Work and assume any\nrisks associated with Your exercise of permissions under this License.\n8. Limitation of Liability. 
In no event and under no legal theory,\nwhether in tort (including negligence), contract, or otherwise,\nunless required by applicable law (such as deliberate and grossly\nnegligent acts) or agreed to in writing, shall any Contributor be\nliable to You for damages, including any direct, indirect, special,\nincidental, or consequential damages of any character arising as a\nresult of this License or out of the use or inability to use the\nWork (including but not limited to damages for loss of goodwill,\nwork stoppage, computer failure or malfunction, or any and all\nother commercial damages or losses), even if such Contributor\nhas been advised of the possibility of such damages.\n9. Accepting Warranty or Additional Liability. While redistributing\nthe Work or Derivative Works thereof, You may choose to offer,\nand charge a fee for, acceptance of support, warranty, indemnity,\nor other liability obligations and/or rights consistent with this\nLicense. However, in accepting such obligations, You may act only\non Your own behalf and on Your sole responsibility, not on behalf\nof any other Contributor, and only if You agree to indemnify,\ndefend, and hold each Contributor harmless for any liability\nincurred by, or claims asserted against, such Contributor by reason\nof your accepting any such warranty or additional liability.\nEND OF TERMS AND CONDITIONS\nexpat\u00b6\nThe pyexpat\nextension is built using an included copy of the expat\nsources unless the build is configured --with-system-expat\n:\nCopyright (c) 1998, 1999, 2000 Thai Open Source Software Center Ltd\nand Clark Cooper\nPermission is hereby granted, free of charge, to any person obtaining\na copy of this software and associated documentation files (the\n\"Software\"), to deal in the Software without restriction, including\nwithout limitation the rights to use, copy, modify, merge, publish,\ndistribute, sublicense, and/or sell copies of the Software, and to\npermit persons to whom the Software is furnished 
to do so, subject to\nthe following conditions:\nThe above copyright notice and this permission notice shall be included\nin all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\nEXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\nMERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.\nIN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY\nCLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,\nTORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\nSOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\nlibffi\u00b6\nThe _ctypes\nC extension underlying the ctypes\nmodule\nis built using an included copy of the libffi\nsources unless the build is configured --with-system-libffi\n:\nCopyright (c) 1996-2008 Red Hat, Inc and others.\nPermission is hereby granted, free of charge, to any person obtaining\na copy of this software and associated documentation files (the\n\"Software\"), to deal in the Software without restriction, including\nwithout limitation the rights to use, copy, modify, merge, publish,\ndistribute, sublicense, and/or sell copies of the Software, and to\npermit persons to whom the Software is furnished to do so, subject to\nthe following conditions:\nThe above copyright notice and this permission notice shall be included\nin all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\nEXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\nMERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND\nNONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\nHOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\nWHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\nDEALINGS IN THE SOFTWARE.\nzlib\u00b6\nThe zlib\nextension is built using an included copy of the zlib\nsources if the zlib version found on the system is too old to be\nused for the build:\nCopyright (C) 1995-2011 Jean-loup Gailly and Mark Adler\nThis software is provided 'as-is', without any express or implied\nwarranty. In no event will the authors be held liable for any damages\narising from the use of this software.\nPermission is granted to anyone to use this software for any purpose,\nincluding commercial applications, and to alter it and redistribute it\nfreely, subject to the following restrictions:\n1. The origin of this software must not be misrepresented; you must not\nclaim that you wrote the original software. If you use this software\nin a product, an acknowledgment in the product documentation would be\nappreciated but is not required.\n2. Altered source versions must be plainly marked as such, and must not be\nmisrepresented as being the original software.\n3. 
This notice may not be removed or altered from any source distribution.\nJean-loup Gailly Mark Adler\njloup@gzip.org madler@alumni.caltech.edu\ncfuhash\u00b6\nThe implementation of the hash table used by the tracemalloc\nmodule is based\non the cfuhash project:\nCopyright (c) 2005 Don Owens\nAll rights reserved.\nThis code is released under the BSD license:\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions\nare met:\n* Redistributions of source code must retain the above copyright\nnotice, this list of conditions and the following disclaimer.\n* Redistributions in binary form must reproduce the above\ncopyright notice, this list of conditions and the following\ndisclaimer in the documentation and/or other materials provided\nwith the distribution.\n* Neither the name of the author nor the names of its\ncontributors may be used to endorse or promote products derived\nfrom this software without specific prior written permission.\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n\"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\nLIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS\nFOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE\nCOPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,\nINCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\nHOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,\nSTRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\nARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED\nOF THE POSSIBILITY OF SUCH DAMAGE.\nlibmpdec\u00b6\nThe _decimal\nC extension underlying the decimal\nmodule\nis built using an included copy of the libmpdec\nlibrary unless the build is configured --with-system-libmpdec\n:\nCopyright (c) 2008-2020 Stefan Krah. All rights reserved.\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions\nare met:\n1. Redistributions of source code must retain the above copyright\nnotice, this list of conditions and the following disclaimer.\n2. Redistributions in binary form must reproduce the above copyright\nnotice, this list of conditions and the following disclaimer in the\ndocumentation and/or other materials provided with the distribution.\nTHIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS \"AS IS\" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\nARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\nOR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\nHOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\nLIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\nOUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\nSUCH DAMAGE.\nW3C C14N test suite\u00b6\nThe C14N 2.0 test suite in the test\npackage\n(Lib/test/xmltestdata/c14n-20/\n) was retrieved from the W3C website at\nhttps://www.w3.org/TR/xml-c14n2-testcases/ and is distributed under the\n3-clause BSD license:\nCopyright (c) 2013 W3C(R) (MIT, ERCIM, Keio, Beihang),\nAll Rights Reserved.\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions\nare met:\n* Redistributions of works must retain the original copyright notice,\nthis list of conditions and the following disclaimer.\n* Redistributions in binary form must reproduce the original copyright\nnotice, this list of conditions and the following disclaimer in the\ndocumentation and/or other materials provided with the distribution.\n* Neither the name of the W3C nor the names of its contributors may be\nused to endorse or promote products derived from this work without\nspecific prior written permission.\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n\"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\nLIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\nA PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\nOWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\nSPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\nLIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\nDATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\nTHEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\nmimalloc\u00b6\nMIT License:\nCopyright (c) 2018-2021 Microsoft Corporation, Daan Leijen\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\nasyncio\u00b6\nParts of the asyncio\nmodule are incorporated from\nuvloop 0.16,\nwhich is distributed under the MIT license:\nCopyright (c) 2015-2021 MagicStack Inc. 
http://magic.io\nPermission is hereby granted, free of charge, to any person obtaining\na copy of this software and associated documentation files (the\n\"Software\"), to deal in the Software without restriction, including\nwithout limitation the rights to use, copy, modify, merge, publish,\ndistribute, sublicense, and/or sell copies of the Software, and to\npermit persons to whom the Software is furnished to do so, subject to\nthe following conditions:\nThe above copyright notice and this permission notice shall be\nincluded in all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\nEXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\nMERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND\nNONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE\nLIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\nOF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION\nWITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\nGlobal Unbounded Sequences (GUS)\u00b6\nThe file Python/qsbr.c\nis adapted from FreeBSD\u2019s \u201cGlobal Unbounded\nSequences\u201d safe memory reclamation scheme in\nsubr_smr.c.\nThe file is distributed under the 2-Clause BSD License:\nCopyright (c) 2019,2020 Jeffrey Roberson \nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions\nare met:\n1. Redistributions of source code must retain the above copyright\nnotice unmodified, this list of conditions, and the following\ndisclaimer.\n2. 
Redistributions in binary form must reproduce the above copyright\nnotice, this list of conditions and the following disclaimer in the\ndocumentation and/or other materials provided with the distribution.\nTHIS SOFTWARE IS PROVIDED BY THE AUTHOR \"AS IS\" AND ANY EXPRESS OR\nIMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES\nOF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.\nIN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,\nINCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT\nNOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\nDATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\nTHEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF\nTHIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\nZstandard bindings\u00b6\nZstandard bindings in Modules/_zstd\nand Lib/compression/zstd\nare based on code from the\npyzstd library, copyright Ma Lin and\ncontributors. The pyzstd code is distributed under the 3-Clause BSD License:\nCopyright (c) 2020-present, Ma Lin and contributors.\nAll rights reserved.\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n1. Redistributions of source code must retain the above copyright notice, this\nlist of conditions and the following disclaimer.\n2. Redistributions in binary form must reproduce the above copyright notice,\nthis list of conditions and the following disclaimer in the documentation\nand/or other materials provided with the distribution.\n3. 
Neither the name of the copyright holder nor the names of its\ncontributors may be used to endorse or promote products derived from\nthis software without specific prior written permission.\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 12537} +{"url": "https://docs.python.org/3/c-api/perfmaps.html", "title": "Support for Perf Maps", "content": "Support for Perf Maps\u00b6\nOn supported platforms (as of this writing, only Linux), the runtime can take\nadvantage of perf map files to make Python functions visible to an external\nprofiling tool (such as perf).\nA running process may create a file in the /tmp\ndirectory, which contains entries\nthat can map a section of executable code to a name. This interface is described in the\ndocumentation of the Linux Perf tool.\nIn Python, these helper APIs can be used by libraries and features that rely on generating machine code on the fly.\nNote that holding an attached thread state is not required for these APIs.\n-\nint PyUnstable_PerfMapState_Init(void)\u00b6\n- This is Unstable API. 
It may change without warning in minor releases.\nOpen the\n/tmp/perf-$pid.map\nfile, unless it\u2019s already opened, and create a lock to ensure thread-safe writes to the file (provided the writes are done through PyUnstable_WritePerfMapEntry()\n). Normally, there\u2019s no need to call this explicitly; just use PyUnstable_WritePerfMapEntry()\nand it will initialize the state on first call. Returns\n0\non success, -1\non failure to create/open the perf map file, or -2\non failure to create a lock. Check errno\nfor more information about the cause of a failure.\n-\nint PyUnstable_WritePerfMapEntry(const void *code_addr, unsigned int code_size, const char *entry_name)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nWrite one single entry to the\n/tmp/perf-$pid.map\nfile. This function is thread safe. Here is what an example entry looks like:\n# address size name 7f3529fcf759 b py::bar:/run/t.py\nWill call\nPyUnstable_PerfMapState_Init()\nbefore writing the entry, if the perf map file is not already opened. Returns 0\non success, or the same error codes as PyUnstable_PerfMapState_Init()\non failure.\n-\nvoid PyUnstable_PerfMapState_Fini(void)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nClose the perf map file opened by\nPyUnstable_PerfMapState_Init()\n. This is called by the runtime itself during interpreter shut-down. In general, there shouldn\u2019t be a reason to explicitly call this, except to handle specific scenarios such as forking.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 536}
{"url": "https://docs.python.org/3/whatsnew/3.8.html", "title": "What\u2019s New In Python 3.8", "content": "What\u2019s New In Python 3.8\u00b6\n- Editor:\nRaymond Hettinger\nThis article explains the new features in Python 3.8, compared to 3.7. Python 3.8 was released on October 14, 2019. 
For full details, see the changelog.\nSummary \u2013 Release highlights\u00b6\nNew Features\u00b6\nAssignment expressions\u00b6\nThere is new syntax :=\nthat assigns values to variables as part of a larger\nexpression. It is affectionately known as \u201cthe walrus operator\u201d due to\nits resemblance to the eyes and tusks of a walrus.\nIn this example, the assignment expression helps avoid calling\nlen()\ntwice:\nif (n := len(a)) > 10:\nprint(f\"List is too long ({n} elements, expected <= 10)\")\nA similar benefit arises during regular expression matching where match objects are needed twice, once to test whether a match occurred and another to extract a subgroup:\ndiscount = 0.0\nif (mo := re.search(r'(\\d+)% discount', advertisement)):\ndiscount = float(mo.group(1)) / 100.0\nThe operator is also useful with while-loops that compute a value to test loop termination and then need that same value again in the body of the loop:\n# Loop over fixed length blocks\nwhile (block := f.read(256)) != '':\nprocess(block)\nAnother motivating use case arises in list comprehensions where a value computed in a filtering condition is also needed in the expression body:\n[clean_name.title() for name in names\nif (clean_name := normalize('NFC', name)) in allowed_names]\nTry to limit use of the walrus operator to clean cases that reduce complexity and improve readability.\nSee PEP 572 for a full description.\n(Contributed by Emily Morehouse in bpo-35224.)\nPositional-only parameters\u00b6\nThere is a new function parameter syntax /\nto indicate that some\nfunction parameters must be specified positionally and cannot be used as\nkeyword arguments. 
This is the same notation shown by help()\nfor C\nfunctions annotated with Larry Hastings\u2019\nArgument Clinic tool.\nIn the following example, parameters a and b are positional-only, while c or d can be positional or keyword, and e or f are required to be keywords:\ndef f(a, b, /, c, d, *, e, f):\nprint(a, b, c, d, e, f)\nThe following is a valid call:\nf(10, 20, 30, d=40, e=50, f=60)\nHowever, these are invalid calls:\nf(10, b=20, c=30, d=40, e=50, f=60) # b cannot be a keyword argument\nf(10, 20, 30, 40, 50, f=60) # e must be a keyword argument\nOne use case for this notation is that it allows pure Python functions\nto fully emulate behaviors of existing C coded functions. For example,\nthe built-in divmod()\nfunction does not accept keyword arguments:\ndef divmod(a, b, /):\n\"Emulate the built in divmod() function\"\nreturn (a // b, a % b)\nAnother use case is to preclude keyword arguments when the parameter\nname is not helpful. For example, the builtin len()\nfunction has\nthe signature len(obj, /)\n. This precludes awkward calls such as:\nlen(obj='hello') # The \"obj\" keyword argument impairs readability\nA further benefit of marking a parameter as positional-only is that it\nallows the parameter name to be changed in the future without risk of\nbreaking client code. For example, in the statistics\nmodule, the\nparameter name dist may be changed in the future. This was made\npossible with the following function specification:\ndef quantiles(dist, /, *, n=4, method='exclusive')\n...\nSince the parameters to the left of /\nare not exposed as possible\nkeywords, the parameter names remain available for use in **kwargs\n:\n>>> def f(a, b, /, **kwargs):\n... print(a, b, kwargs)\n...\n>>> f(10, 20, a=1, b=2, c=3) # a and b are used in two ways\n10 20 {'a': 1, 'b': 2, 'c': 3}\nThis greatly simplifies the implementation of functions and methods\nthat need to accept arbitrary keyword arguments. 
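The name-reuse behaviour described above can be sketched with a small, hypothetical helper (the tag function below is illustrative, not part of the standard library):

```python
# Hypothetical helper: "name" and "value" are positional-only (PEP 570),
# so callers may still pass name=... or value=... and those keyword
# arguments land in **attrs instead of colliding with the parameters.
def tag(name, value, /, **attrs):
    extra = "".join(f' {k}="{v}"' for k, v in attrs.items())
    return f"<{name}{extra}>{value}</{name}>"

print(tag("p", "hi"))                         # <p>hi</p>
print(tag("input", "", name="q", value="3"))  # name/value go into **attrs
```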
For example, here\nis an excerpt from code in the collections\nmodule:\nclass Counter(dict):\ndef __init__(self, iterable=None, /, **kwds):\n# Note \"iterable\" is a possible keyword argument\nSee PEP 570 for a full description.\n(Contributed by Pablo Galindo in bpo-36540.)\nParallel filesystem cache for compiled bytecode files\u00b6\nThe new PYTHONPYCACHEPREFIX\nsetting (also available as\n-X\npycache_prefix\n) configures the implicit bytecode\ncache to use a separate parallel filesystem tree, rather than\nthe default __pycache__\nsubdirectories within each source\ndirectory.\nThe location of the cache is reported in sys.pycache_prefix\n(None\nindicates the default location in __pycache__\nsubdirectories).\n(Contributed by Carl Meyer in bpo-33499.)\nDebug build uses the same ABI as release build\u00b6\nPython now uses the same ABI whether it\u2019s built in release or debug mode. On Unix, when Python is built in debug mode, it is now possible to load C extensions built in release mode and C extensions built using the stable ABI.\nRelease builds and debug builds are now ABI compatible: defining the\nPy_DEBUG\nmacro no longer implies the Py_TRACE_REFS\nmacro, which\nintroduces the only ABI incompatibility. The Py_TRACE_REFS\nmacro, which\nadds the sys.getobjects()\nfunction and the PYTHONDUMPREFS\nenvironment variable, can be set using the new ./configure\n--with-trace-refs\nbuild option.\n(Contributed by Victor Stinner in bpo-36465.)\nOn Unix, C extensions are no longer linked to libpython except on Android and Cygwin. It is now possible for a statically linked Python to load a C extension built using a shared library Python. (Contributed by Victor Stinner in bpo-21536.)\nOn Unix, when Python is built in debug mode, import now also looks for C extensions compiled in release mode and for C extensions compiled with the stable ABI. 
(Contributed by Victor Stinner in bpo-36722.)\nTo embed Python into an application, a new --embed\noption must be passed to\npython3-config --libs --embed\nto get -lpython3.8\n(link the application\nto libpython). To support both 3.8 and older, try python3-config --libs\n--embed\nfirst and fallback to python3-config --libs\n(without --embed\n)\nif the previous command fails.\nAdd a pkg-config python-3.8-embed\nmodule to embed Python into an\napplication: pkg-config python-3.8-embed --libs\nincludes -lpython3.8\n.\nTo support both 3.8 and older, try pkg-config python-X.Y-embed --libs\nfirst\nand fallback to pkg-config python-X.Y --libs\n(without --embed\n) if the\nprevious command fails (replace X.Y\nwith the Python version).\nOn the other hand, pkg-config python3.8 --libs\nno longer contains\n-lpython3.8\n. C extensions must not be linked to libpython (except on\nAndroid and Cygwin, whose cases are handled by the script);\nthis change is backward incompatible on purpose.\n(Contributed by Victor Stinner in bpo-36721.)\nf-strings support =\nfor self-documenting expressions and debugging\u00b6\nAdded an =\nspecifier to f-strings. An f-string such as\nf'{expr=}'\nwill expand to the text of the expression, an equal sign,\nthen the representation of the evaluated expression. For example:\n>>> user = 'eric_idle'\n>>> member_since = date(1975, 7, 31)\n>>> f'{user=} {member_since=}'\n\"user='eric_idle' member_since=datetime.date(1975, 7, 31)\"\nThe usual f-string format specifiers allow more control over how the result of the expression is displayed:\n>>> delta = date.today() - member_since\n>>> f'{user=!s} {delta.days=:,d}'\n'user=eric_idle delta.days=16,075'\nThe =\nspecifier will display the whole expression so that\ncalculations can be shown:\n>>> print(f'{theta=} {cos(radians(theta))=:.3f}')\ntheta=30 cos(radians(theta))=0.866\n(Contributed by Eric V. 
Smith and Larry Hastings in bpo-36817.)\nPEP 578: Python Runtime Audit Hooks\u00b6\nThe PEP adds an Audit Hook and Verified Open Hook. Both are available from Python and native code, allowing applications and frameworks written in pure Python code to take advantage of extra notifications, while also allowing embedders or system administrators to deploy builds of Python where auditing is always enabled.\nSee PEP 578 for full details.\nPEP 587: Python Initialization Configuration\u00b6\nPEP 587 adds a new C API to configure the Python initialization, providing finer control over the whole configuration and better error reporting.\nNew structures:\nNew functions:\nThis PEP also adds _PyRuntimeState.preconfig\n(PyPreConfig\ntype)\nand PyInterpreterState.config\n(PyConfig\ntype) fields to these\ninternal structures. PyInterpreterState.config\nbecomes the new\nreference configuration, replacing global configuration variables and\nother private variables.\nSee Python Initialization Configuration for the documentation.\nSee PEP 587 for a full description.\n(Contributed by Victor Stinner in bpo-36763.)\nPEP 590: Vectorcall: a fast calling protocol for CPython\u00b6\nThe Vectorcall Protocol is added to the Python/C API. It is meant to formalize existing optimizations which were already done for various classes. Any static type implementing a callable can use this protocol.\nThis is currently provisional. 
The aim is to make it fully public in Python 3.9.\nSee PEP 590 for a full description.\n(Contributed by Jeroen Demeyer, Mark Shannon and Petr Viktorin in bpo-36974.)\nPickle protocol 5 with out-of-band data buffers\u00b6\nWhen pickle\nis used to transfer large data between Python processes\nin order to take advantage of multi-core or multi-machine processing,\nit is important to optimize the transfer by reducing memory copies, and\npossibly by applying custom techniques such as data-dependent compression.\nThe pickle\nprotocol 5 introduces support for out-of-band buffers\nwhere PEP 3118-compatible data can be transmitted separately from the\nmain pickle stream, at the discretion of the communication layer.\nSee PEP 574 for a full description.\n(Contributed by Antoine Pitrou in bpo-36785.)\nOther Language Changes\u00b6\nA\ncontinue\nstatement was illegal in the finally\nclause due to a problem with the implementation. In Python 3.8 this restriction was lifted. (Contributed by Serhiy Storchaka in bpo-32489.) The\nbool\n, int\n, and fractions.Fraction\ntypes now have an as_integer_ratio()\nmethod like that found in float\nand decimal.Decimal\n. This minor API extension makes it possible to write numerator, denominator = x.as_integer_ratio()\nand have it work across multiple numeric types. (Contributed by Lisa Roach in bpo-33073 and Raymond Hettinger in bpo-37819.) Constructors of\nint\n, float\nand complex\nwill now use the __index__()\nspecial method, if available and the corresponding method __int__()\n, __float__()\nor __complex__()\nis not available. (Contributed by Serhiy Storchaka in bpo-20092.) Added support of\n\\N{name}\nescapes in regular expressions\n: >>> notice = 'Copyright \u00a9 2019' >>> copyright_year_pattern = re.compile(r'\\N{copyright sign}\\s*(\\d{4})') >>> int(copyright_year_pattern.search(notice).group(1)) 2019\n(Contributed by Jonathan Eunice and Serhiy Storchaka in bpo-30688.)\nDict and dictviews are now iterable in reversed insertion order using\nreversed()\n. 
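A minimal sketch of this reversed iteration (the inventory dict below is illustrative; it assumes Python 3.8+):

```python
# Dicts and their views iterate in insertion order; Python 3.8 adds
# reversed() support so they can also be walked back-to-front.
inventory = {"apples": 3, "pears": 7, "plums": 1}
print(list(reversed(inventory)))           # ['plums', 'pears', 'apples']
print(list(reversed(inventory.values())))  # [1, 7, 3]
```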
(Contributed by R\u00e9mi Lapeyre in bpo-33462.) The syntax allowed for keyword names in function calls was further restricted. In particular,\nf((keyword)=arg)\nis no longer allowed. It was never intended to permit more than a bare name on the left-hand side of a keyword argument assignment term. (Contributed by Benjamin Peterson in bpo-34641.) Generalized iterable unpacking in\nyield\nand return\nstatements no longer requires enclosing parentheses. This brings the yield and return syntax into better agreement with normal assignment syntax: >>> def parse(family): ... lastname, *members = family.split() ... return lastname.upper(), *members ... >>> parse('simpsons homer marge bart lisa maggie') ('SIMPSONS', 'homer', 'marge', 'bart', 'lisa', 'maggie')\n(Contributed by David Cuthbert and Jordan Chapman in bpo-32117.)\nWhen a comma is missed in code such as\n[(10, 20) (30, 40)]\n, the compiler displays a SyntaxWarning\nwith a helpful suggestion. This improves on just having a TypeError\nindicating that the first tuple was not callable. (Contributed by Serhiy Storchaka in bpo-15248.) Arithmetic operations between subclasses of\ndatetime.date\nor datetime.datetime\nand datetime.timedelta\nobjects now return an instance of the subclass, rather than the base class. This also affects the return type of operations whose implementation (directly or indirectly) uses datetime.timedelta\narithmetic, such as astimezone()\n. (Contributed by Paul Ganssle in bpo-32417.) When the Python interpreter is interrupted by Ctrl-C (SIGINT) and the resulting\nKeyboardInterrupt\nexception is not caught, the Python process now exits via a SIGINT signal or with the correct exit code such that the calling process can detect that it died due to a Ctrl-C. Shells on POSIX and Windows use this to properly terminate scripts in interactive sessions. (Contributed by Google via Gregory P. Smith in bpo-1054041.) Some advanced styles of programming require updating the\ntypes.CodeType\nobject for an existing function. 
Since code objects are immutable, a new code object needs to be created, one that is modeled on the existing code object. With 19 parameters, this was somewhat tedious. Now, the new replace()\nmethod makes it possible to create a clone with a few altered parameters. Here\u2019s an example that alters the\nstatistics.mean()\nfunction to prevent the data parameter from being used as a keyword argument: >>> from statistics import mean >>> mean(data=[10, 20, 90]) 40 >>> mean.__code__ = mean.__code__.replace(co_posonlyargcount=1) >>> mean(data=[10, 20, 90]) Traceback (most recent call last): ... TypeError: mean() got some positional-only arguments passed as keyword arguments: 'data'\n(Contributed by Victor Stinner in bpo-37032.)\nFor integers, the three-argument form of the\npow()\nfunction now permits the exponent to be negative in the case where the base is relatively prime to the modulus. It then computes a modular inverse to the base when the exponent is -1\n, and a suitable power of that inverse for other negative exponents. For example, to compute the modular multiplicative inverse of 38 modulo 137, write: >>> pow(38, -1, 137) 119 >>> 119 * 38 % 137 1\nModular inverses arise in the solution of linear Diophantine equations. For example, to find integer solutions for\n4258\ud835\udc65 + 147\ud835\udc66 = 369\n, first rewrite as 4258\ud835\udc65 \u2261 369 (mod 147)\nthen solve: >>> x = 369 * pow(4258, -1, 147) % 147 >>> y = (4258 * x - 369) // -147 >>> 4258 * x + 147 * y 369\n(Contributed by Mark Dickinson in bpo-36027.)\nDict comprehensions have been synced-up with dict literals so that the key is computed first and the value second:\n>>> # Dict comprehension >>> cast = {input('role? '): input('actor? ') for i in range(2)} role? King Arthur actor? Chapman role? Black Knight actor? Cleese >>> # Dict literal >>> cast = {input('role? '): input('actor? ')} role? Sir Robin actor? 
Eric Idle\nThe guaranteed execution order is helpful with assignment expressions because variables assigned in the key expression will be available in the value expression:\n>>> names = ['Martin von L\u00f6wis', '\u0141ukasz Langa', 'Walter D\u00f6rwald'] >>> {(n := normalize('NFC', name)).casefold() : n for name in names} {'martin von l\u00f6wis': 'Martin von L\u00f6wis', '\u0142ukasz langa': '\u0141ukasz Langa', 'walter d\u00f6rwald': 'Walter D\u00f6rwald'}\n(Contributed by J\u00f6rn Heissler in bpo-35224.)\nThe\nobject.__reduce__()\nmethod can now return a tuple from two to six elements long. Formerly, five was the limit. The new, optional sixth element is a callable with a (obj, state)\nsignature. This allows direct control over the state-updating behavior of a specific object. If not None, this callable will have priority over the object\u2019s __setstate__()\nmethod. (Contributed by Pierre Glaser and Olivier Grisel in bpo-35900.)\nNew Modules\u00b6\nThe new\nimportlib.metadata\nmodule provides (provisional) support for reading metadata from third-party packages. For example, it can extract an installed package\u2019s version number, list of entry points, and more: >>> # Note following example requires that the popular \"requests\" >>> # package has been installed. >>> >>> from importlib.metadata import version, requires, files >>> version('requests') '2.22.0' >>> list(requires('requests')) ['chardet (<3.1.0,>=3.0.2)'] >>> list(files('requests'))[:5] [PackagePath('requests-2.22.0.dist-info/INSTALLER'), PackagePath('requests-2.22.0.dist-info/LICENSE'), PackagePath('requests-2.22.0.dist-info/METADATA'), PackagePath('requests-2.22.0.dist-info/RECORD'), PackagePath('requests-2.22.0.dist-info/WHEEL')]\n(Contributed by Barry Warsaw and Jason R. Coombs in bpo-34632.)\nImproved Modules\u00b6\nast\u00b6\nAST nodes now have end_lineno\nand end_col_offset\nattributes,\nwhich give the precise location of the end of the node. 
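A short sketch of these end-position attributes (assuming Python 3.8+):

```python
import ast

# Parse a two-line module and inspect the first statement's span.
tree = ast.parse("x = 1 + 2\nprint(x)")
assign = tree.body[0]

# lineno/col_offset mark where the node starts;
# end_lineno/end_col_offset mark where it stops.
print(assign.lineno, assign.col_offset)          # 1 0
print(assign.end_lineno, assign.end_col_offset)  # 1 9
```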
(This only\napplies to nodes that have lineno\nand col_offset\nattributes.)\nNew function ast.get_source_segment()\nreturns the source code\nfor a specific AST node.\n(Contributed by Ivan Levkivskyi in bpo-33416.)\nThe ast.parse()\nfunction has some new flags:\ntype_comments=True\ncauses it to return the text of PEP 484 and PEP 526 type comments associated with certain AST nodes; mode='func_type'\ncan be used to parse PEP 484 \u201csignature type comments\u201d (returned for function definition AST nodes); feature_version=(3, N)\nallows specifying an earlier Python 3 version. For example, feature_version=(3, 4)\nwill treat async\nand await\nas non-reserved words.\n(Contributed by Guido van Rossum in bpo-35766.)\nasyncio\u00b6\nasyncio.run()\nhas graduated from the provisional to stable API. This\nfunction can be used to execute a coroutine and return the result while\nautomatically managing the event loop. For example:\nimport asyncio\nasync def main():\nawait asyncio.sleep(0)\nreturn 42\nasyncio.run(main())\nThis is roughly equivalent to:\nimport asyncio\nasync def main():\nawait asyncio.sleep(0)\nreturn 42\nloop = asyncio.new_event_loop()\nasyncio.set_event_loop(loop)\ntry:\nloop.run_until_complete(main())\nfinally:\nasyncio.set_event_loop(None)\nloop.close()\nThe actual implementation is significantly more complex. Thus,\nasyncio.run()\nshould be the preferred way of running asyncio programs.\n(Contributed by Yury Selivanov in bpo-32314.)\nRunning python -m asyncio\nlaunches a natively async REPL. This allows rapid\nexperimentation with code that has a top-level await\n. 
There is no\nlonger a need to directly call asyncio.run()\nwhich would spawn a new event\nloop on every invocation:\n$ python -m asyncio\nasyncio REPL 3.8.0\nUse \"await\" directly instead of \"asyncio.run()\".\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import asyncio\n>>> await asyncio.sleep(10, result='hello')\nhello\n(Contributed by Yury Selivanov in bpo-37028.)\nThe exception asyncio.CancelledError\nnow inherits from\nBaseException\nrather than Exception\nand no longer inherits\nfrom concurrent.futures.CancelledError\n.\n(Contributed by Yury Selivanov in bpo-32528.)\nOn Windows, the default event loop is now ProactorEventLoop\n.\n(Contributed by Victor Stinner in bpo-34687.)\nProactorEventLoop\nnow also supports UDP.\n(Contributed by Adam Meily and Andrew Svetlov in bpo-29883.)\nProactorEventLoop\ncan now be interrupted by\nKeyboardInterrupt\n(\u201cCTRL+C\u201d).\n(Contributed by Vladimir Matveev in bpo-23057.)\nAdded asyncio.Task.get_coro()\nfor getting the wrapped coroutine\nwithin an asyncio.Task\n.\n(Contributed by Alex Gr\u00f6nholm in bpo-36999.)\nAsyncio tasks can now be named, either by passing the name\nkeyword\nargument to asyncio.create_task()\nor\nthe create_task()\nevent loop method, or by\ncalling the set_name()\nmethod on the task object. The\ntask name is visible in the repr()\noutput of asyncio.Task\nand\ncan also be retrieved using the get_name()\nmethod.\n(Contributed by Alex Gr\u00f6nholm in bpo-34270.)\nAdded support for\nHappy Eyeballs to\nasyncio.loop.create_connection()\n. To specify the behavior, two new\nparameters have been added: happy_eyeballs_delay and interleave. The Happy\nEyeballs algorithm improves responsiveness in applications that support IPv4\nand IPv6 by attempting to simultaneously connect using both.\n(Contributed by twisteroid ambassador in bpo-33530.)\nbuiltins\u00b6\nThe compile()\nbuilt-in has been improved to accept the\nast.PyCF_ALLOW_TOP_LEVEL_AWAIT\nflag. 
With this new flag passed,\ncompile()\nwill allow top-level await\n, async for\nand async with\nconstructs that are usually considered invalid syntax. Asynchronous code object\nmarked with the CO_COROUTINE\nflag may then be returned.\n(Contributed by Matthias Bussonnier in bpo-34616)\ncollections\u00b6\nThe _asdict()\nmethod for\ncollections.namedtuple()\nnow returns a dict\ninstead of a\ncollections.OrderedDict\n. This works because regular dicts have\nguaranteed ordering since Python 3.7. If the extra features of\nOrderedDict\nare required, the suggested remediation is to cast the\nresult to the desired type: OrderedDict(nt._asdict())\n.\n(Contributed by Raymond Hettinger in bpo-35864.)\ncProfile\u00b6\nThe cProfile.Profile\nclass can now be used as a context manager.\nProfile a block of code by running:\nimport cProfile\nwith cProfile.Profile() as profiler:\n# code to be profiled\n...\n(Contributed by Scott Sanderson in bpo-29235.)\ncsv\u00b6\nThe csv.DictReader\nnow returns instances of dict\ninstead of\na collections.OrderedDict\n. The tool is now faster and uses less\nmemory while still preserving the field order.\n(Contributed by Michael Selik in bpo-34003.)\ncurses\u00b6\nAdded a new variable holding structured version information for the\nunderlying ncurses library: ncurses_version\n.\n(Contributed by Serhiy Storchaka in bpo-31680.)\nctypes\u00b6\nOn Windows, CDLL\nand subclasses now accept a winmode parameter\nto specify flags for the underlying LoadLibraryEx\ncall. 
The default flags are\nset to only load DLL dependencies from trusted locations, including the path\nwhere the DLL is stored (if a full or partial path is used to load the initial\nDLL) and paths added by add_dll_directory()\n.\n(Contributed by Steve Dower in bpo-36085.)\ndatetime\u00b6\nAdded new alternate constructors datetime.date.fromisocalendar()\nand\ndatetime.datetime.fromisocalendar()\n, which construct date\nand\ndatetime\nobjects respectively from ISO year, week number, and weekday;\nthese are the inverse of each class\u2019s isocalendar\nmethod.\n(Contributed by Paul Ganssle in bpo-36004.)\nfunctools\u00b6\nfunctools.lru_cache()\ncan now be used as a straight decorator rather\nthan as a function returning a decorator. So both of these are now supported:\n@lru_cache\ndef f(x):\n...\n@lru_cache(maxsize=256)\ndef f(x):\n...\n(Contributed by Raymond Hettinger in bpo-36772.)\nAdded a new functools.cached_property()\ndecorator, for computed properties\ncached for the life of the instance.\nimport functools\nimport statistics\nclass Dataset:\ndef __init__(self, sequence_of_numbers):\nself.data = sequence_of_numbers\n@functools.cached_property\ndef variance(self):\nreturn statistics.variance(self.data)\n(Contributed by Carl Meyer in bpo-21145)\nAdded a new functools.singledispatchmethod()\ndecorator that converts\nmethods into generic functions using\nsingle dispatch:\nfrom functools import singledispatchmethod\nfrom contextlib import suppress\nclass TaskManager:\ndef __init__(self, tasks):\nself.tasks = list(tasks)\n@singledispatchmethod\ndef discard(self, value):\nwith suppress(ValueError):\nself.tasks.remove(value)\n@discard.register(list)\ndef _(self, tasks):\ntargets = set(tasks)\nself.tasks = [x for x in self.tasks if x not in targets]\n(Contributed by Ethan Smith in bpo-32380)\ngc\u00b6\nget_objects()\ncan now receive an optional generation parameter\nindicating a generation to get objects from.\n(Contributed by Pablo Galindo in 
bpo-36016.)

gettext

Added pgettext() and its variants.
(Contributed by Franz Glasner, Éric Araujo, and Cheryl Sabella in bpo-2504.)

gzip

Added the mtime parameter to gzip.compress() for reproducible output.
(Contributed by Guo Ci Teo in bpo-34898.)

A BadGzipFile exception is now raised instead of OSError for certain types of invalid or corrupt gzip files.
(Contributed by Filip Gruszczyński, Michele Orrù, and Zackery Spytz in bpo-6584.)

IDLE and idlelib

Output over N lines (50 by default) is squeezed down to a button. N can be changed in the PyShell section of the General page of the Settings dialog. Fewer, but possibly extra long, lines can be squeezed by right clicking on the output. Squeezed output can be expanded in place by double-clicking the button, or into the clipboard or a separate window by right-clicking the button. (Contributed by Tal Einat in bpo-1529353.)

Add "Run Customized" to the Run menu to run a module with customized settings. Any command line arguments entered are added to sys.argv. They also re-appear in the box for the next customized run. One can also suppress the normal Shell main module restart. (Contributed by Cheryl Sabella, Terry Jan Reedy, and others in bpo-5680 and bpo-37627.)

Added optional line numbers for IDLE editor windows. Windows open without line numbers unless set otherwise in the General tab of the configuration dialog. Line numbers for an existing window are shown and hidden in the Options menu. (Contributed by Tal Einat and Saimadhav Heblikar in bpo-17535.)

OS native encoding is now used for converting between Python strings and Tcl objects. This allows IDLE to work with emoji and other non-BMP characters. These characters can be displayed or copied and pasted to or from the clipboard. Converting strings from Tcl to Python and back now never fails.
(Many people worked on this for eight years but the problem was finally solved by Serhiy Storchaka in bpo-13153.)

New in 3.8.1:

Add option to toggle cursor blink off. (Contributed by Zackery Spytz in bpo-4603.)

Escape key now closes IDLE completion windows. (Contributed by Johnny Najera in bpo-38944.)

The changes above have been backported to 3.7 maintenance releases.

Add keywords to module name completion list. (Contributed by Terry J. Reedy in bpo-37765.)

inspect

The inspect.getdoc() function can now find docstrings for __slots__ if that attribute is a dict where the values are docstrings. This provides documentation options similar to what we already have for property(), classmethod(), and staticmethod():

class AudioClip:
    __slots__ = {'bit_rate': 'expressed in kilohertz to one decimal place',
                 'duration': 'in seconds, rounded up to an integer'}

    def __init__(self, bit_rate, duration):
        self.bit_rate = round(bit_rate / 1000.0, 1)
        self.duration = ceil(duration)

(Contributed by Raymond Hettinger in bpo-36326.)

io

In development mode (-X dev) and in debug build, the io.IOBase finalizer now logs the exception if the close() method fails.
The exception is ignored silently by default in release build.
(Contributed by Victor Stinner in bpo-18748.)

itertools

The itertools.accumulate() function added an optional initial keyword argument to specify an initial value:

>>> from itertools import accumulate
>>> list(accumulate([10, 5, 30, 15], initial=1000))
[1000, 1010, 1015, 1045, 1060]

(Contributed by Lisa Roach in bpo-34659.)

json.tool

Add option --json-lines to parse every input line as a separate JSON object.
(Contributed by Weipeng Hong in bpo-31553.)

logging

Added a force keyword argument to logging.basicConfig(). When set to true, any existing handlers attached to the root logger are removed and closed before carrying out the configuration specified by the other arguments.

This solves a long-standing problem. Once a logger or basicConfig() had been called, subsequent calls to basicConfig() were silently ignored. This made it difficult to update, experiment with, or teach the various logging configuration options using the interactive prompt or a Jupyter notebook.
(Suggested by Raymond Hettinger, implemented by Donghee Na, and reviewed by Vinay Sajip in bpo-33897.)

math

Added new function math.dist() for computing Euclidean distance between two points.
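A minimal sketch of math.dist(), which takes the two points as coordinate sequences of equal length:

```python
import math

# Euclidean distance between two points given as coordinate sequences
p, q = (1.0, 2.0), (4.0, 6.0)
assert math.dist(p, q) == 5.0   # the 3-4-5 right triangle

# works for any (matching) number of dimensions
d3 = math.dist((0, 0, 0), (1, 1, 1))
assert math.isclose(d3, math.sqrt(3))
```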
(Contributed by Raymond Hettinger in bpo-33089.)

Expanded the math.hypot() function to handle multiple dimensions. Formerly, it only supported the 2-D case.
(Contributed by Raymond Hettinger in bpo-33089.)

Added new function, math.prod(), as an analogous function to sum() that returns the product of a 'start' value (default: 1) times an iterable of numbers:

>>> prior = 0.8
>>> likelihoods = [0.625, 0.84, 0.30]
>>> math.prod(likelihoods, start=prior)
0.126

(Contributed by Pablo Galindo in bpo-35606.)

Added two new combinatoric functions math.perm() and math.comb():

>>> math.perm(10, 3)    # Permutations of 10 things taken 3 at a time
720
>>> math.comb(10, 3)    # Combinations of 10 things taken 3 at a time
120

(Contributed by Yash Aggarwal, Keller Fuchs, Serhiy Storchaka, and Raymond Hettinger in bpo-37128, bpo-37178, and bpo-35431.)

Added a new function math.isqrt() for computing accurate integer square roots without conversion to floating point. The new function supports arbitrarily large integers. It is faster than floor(sqrt(n)) but slower than math.sqrt():

>>> r = 650320427
>>> s = r ** 2
>>> isqrt(s - 1)        # correct
650320426
>>> floor(sqrt(s - 1))  # incorrect
650320427

(Contributed by Mark Dickinson in bpo-36887.)

The function math.factorial() no longer accepts arguments that are not int-like. (Contributed by Pablo Galindo in bpo-33083.)

mmap

The mmap.mmap class now has an madvise() method to access the madvise() system call.
(Contributed by Zackery Spytz in bpo-32941.)

multiprocessing

Added new multiprocessing.shared_memory module.
(Contributed by Davin Potts in bpo-35813.)

On macOS, the spawn start method is now used by default.
(Contributed by Victor Stinner in bpo-33725.)

os

Added new function add_dll_directory() on Windows for providing additional search paths for native dependencies when importing extension modules or loading DLLs using ctypes.
(Contributed by Steve Dower in bpo-36085.)

A new os.memfd_create() function was added to wrap the memfd_create() syscall.
(Contributed by Zackery Spytz and Christian Heimes in bpo-26836.)

On Windows, much of the manual logic for handling reparse points (including symlinks and directory junctions) has been delegated to the operating system. Specifically, os.stat() will now traverse anything supported by the operating system, while os.lstat() will only open reparse points that identify as "name surrogates", while others are opened as for os.stat(). In all cases, os.stat_result.st_mode will only have S_IFLNK set for symbolic links and not other kinds of reparse points. To identify other kinds of reparse points, check the new os.stat_result.st_reparse_tag attribute.

On Windows, os.readlink() is now able to read directory junctions.
Note that islink() will return False for directory junctions, and so code that checks islink first will continue to treat junctions as directories, while code that handles errors from os.readlink() may now treat junctions as links.
(Contributed by Steve Dower in bpo-37834.)

os.path

os.path functions that return a boolean result like exists(), lexists(), isdir(), isfile(), islink(), and ismount() now return False instead of raising ValueError or its subclasses UnicodeEncodeError and UnicodeDecodeError for paths that contain characters or bytes unrepresentable at the OS level.
(Contributed by Serhiy Storchaka in bpo-33721.)

expanduser() on Windows now prefers the USERPROFILE environment variable and does not use HOME, which is not normally set for regular user accounts.
(Contributed by Anthony Sottile in bpo-36264.)

isdir() on Windows no longer returns True for a link to a non-existent directory.

realpath() on Windows now resolves reparse points, including symlinks and directory junctions.
(Contributed by Steve Dower in bpo-37834.)

pathlib

pathlib.Path methods that return a boolean result like exists(), is_dir(), is_file(), is_mount(), is_symlink(), is_block_device(), is_char_device(), is_fifo(), and is_socket() now return False instead of raising ValueError or its subclass UnicodeEncodeError for paths that contain characters unrepresentable at the OS level.
(Contributed by Serhiy Storchaka in bpo-33721.)

Added pathlib.Path.link_to(), which creates a hard link pointing to a path.
(Contributed by Joannah Nanjekye in bpo-26978.)

Note that link_to was deprecated in 3.10 and removed in 3.12 in favor of a hardlink_to method added in 3.10 which matches the semantics of the existing symlink_to method.

pickle

pickle extensions subclassing the C-optimized Pickler can now override the pickling logic of functions and classes by defining the special
reducer_override() method.
(Contributed by Pierre Glaser and Olivier Grisel in bpo-35900.)

plistlib

Added new plistlib.UID and enabled support for reading and writing NSKeyedArchiver-encoded binary plists.
(Contributed by Jon Janzen in bpo-26707.)

pprint

The pprint module added a sort_dicts parameter to several functions. By default, those functions continue to sort dictionaries before rendering or printing. However, if sort_dicts is set to false, the dictionaries retain the order that keys were inserted. This can be useful for comparison to JSON inputs during debugging.

In addition, there is a new convenience function, pprint.pp(), that is like pprint.pprint() but with sort_dicts defaulting to False:

>>> from pprint import pprint, pp
>>> d = dict(source='input.txt', operation='filter', destination='output.txt')
>>> pp(d, width=40)       # Original order
{'source': 'input.txt',
 'operation': 'filter',
 'destination': 'output.txt'}
>>> pprint(d, width=40)   # Keys sorted alphabetically
{'destination': 'output.txt',
 'operation': 'filter',
 'source': 'input.txt'}

(Contributed by Rémi Lapeyre in bpo-30670.)

py_compile

py_compile.compile() now supports silent mode.
(Contributed by Joannah Nanjekye in bpo-22640.)

shlex

The new shlex.join() function acts as the inverse of shlex.split().
(Contributed by Bo Bayles in bpo-32102.)

shutil

shutil.copytree() now accepts a new dirs_exist_ok keyword argument.
(Contributed by Josh Bronson in bpo-20849.)

shutil.make_archive() now defaults to the modern pax (POSIX.1-2001) format for new archives to improve portability and standards conformance, inherited from the corresponding change to the tarfile module.
(Contributed by C.A.M.
Gerlach in bpo-30661.)

shutil.rmtree() on Windows now removes directory junctions without recursively removing their contents first.
(Contributed by Steve Dower in bpo-37834.)

socket

Added create_server() and has_dualstack_ipv6() convenience functions to automate the necessary tasks usually involved when creating a server socket, including accepting both IPv4 and IPv6 connections on the same socket. (Contributed by Giampaolo Rodolà in bpo-17561.)

The socket.if_nameindex(), socket.if_nametoindex(), and socket.if_indextoname() functions have been implemented on Windows.
(Contributed by Zackery Spytz in bpo-37007.)

ssl

Added post_handshake_auth to enable and verify_client_post_handshake() to initiate TLS 1.3 post-handshake authentication.
(Contributed by Christian Heimes in bpo-34670.)

statistics

Added statistics.fmean() as a faster, floating-point variant of statistics.mean(). (Contributed by Raymond Hettinger and Steven D'Aprano in bpo-35904.)

Added statistics.geometric_mean(). (Contributed by Raymond Hettinger in bpo-27181.)

Added statistics.multimode() that returns a list of the most common values. (Contributed by Raymond Hettinger in bpo-35892.)

Added statistics.quantiles() that divides data or a distribution into equiprobable intervals (e.g.
quartiles, deciles, or percentiles).
(Contributed by Raymond Hettinger in bpo-36546.)

Added statistics.NormalDist, a tool for creating and manipulating normal distributions of a random variable.
(Contributed by Raymond Hettinger in bpo-36018.)

>>> temperature_feb = NormalDist.from_samples([4, 12, -3, 2, 7, 14])
>>> temperature_feb.mean
6.0
>>> temperature_feb.stdev
6.356099432828281

>>> temperature_feb.cdf(3)    # Chance of being under 3 degrees
0.3184678262814532
>>> # Relative chance of being 7 degrees versus 10 degrees
>>> temperature_feb.pdf(7) / temperature_feb.pdf(10)
1.2039930378537762

>>> el_niño = NormalDist(4, 2.5)
>>> temperature_feb += el_niño    # Add in a climate effect
>>> temperature_feb
NormalDist(mu=10.0, sigma=6.830080526611674)

>>> temperature_feb * (9/5) + 32  # Convert to Fahrenheit
NormalDist(mu=50.0, sigma=12.294144947901014)

>>> temperature_feb.samples(3)    # Generate random samples
[7.672102882379219, 12.000027119750287, 4.647488369766392]

sys

Add new sys.unraisablehook() function which can be overridden to control how "unraisable exceptions" are handled. It is called when an exception has occurred but there is no way for Python to handle it. For example, when a destructor raises an exception or during garbage collection (gc.collect()).
(Contributed by Victor Stinner in bpo-36829.)

tarfile

The tarfile module now defaults to the modern pax (POSIX.1-2001) format for new archives, instead of the previous GNU-specific one. This improves cross-platform portability with a consistent encoding (UTF-8) in a standardized and extensible format, and offers several other benefits.
(Contributed by C.A.M. Gerlach in bpo-36268.)

threading

Add a new threading.excepthook() function which handles uncaught threading.Thread.run() exceptions.
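The hook receives a single args object carrying exc_type, exc_value, exc_traceback and the thread attribute; a minimal sketch of replacing it:

```python
import threading

captured = []

def hook(args):
    # args carries exc_type, exc_value, exc_traceback and the Thread object
    captured.append((args.thread.name, args.exc_type.__name__))

threading.excepthook = hook

t = threading.Thread(target=lambda: 1 / 0, name="worker")
t.start()
t.join()
assert captured == [("worker", "ZeroDivisionError")]
```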
It can be overridden to control how uncaught threading.Thread.run() exceptions are handled.
(Contributed by Victor Stinner in bpo-1230540.)

Add a new threading.get_native_id() function and a native_id attribute to the threading.Thread class. These return the native integral Thread ID of the current thread assigned by the kernel. This feature is only available on certain platforms, see get_native_id for more information.
(Contributed by Jake Tesler in bpo-36084.)

tokenize

The tokenize module now implicitly emits a NEWLINE token when provided with input that does not have a trailing new line. This behavior now matches what the C tokenizer does internally.
(Contributed by Ammar Askar in bpo-33899.)

tkinter

Added methods selection_from(), selection_present(), selection_range() and selection_to() in the tkinter.Spinbox class.
(Contributed by Juliette Monsel in bpo-34829.)

Added method moveto() in the tkinter.Canvas class.
(Contributed by Juliette Monsel in bpo-23831.)

The tkinter.PhotoImage class now has transparency_get() and transparency_set() methods. (Contributed by Zackery Spytz in bpo-25451.)

time

Added new clock CLOCK_UPTIME_RAW for macOS 10.12.
(Contributed by Joannah Nanjekye in bpo-35702.)

typing

The typing module incorporates several new features:

A dictionary type with per-key types. See PEP 589 and typing.TypedDict. TypedDict uses only string keys. By default, every key is required to be present. Specify "total=False" to allow keys to be optional:

class Location(TypedDict, total=False):
    lat_long: tuple
    grid_square: str
    xy_coordinate: tuple

Literal types. See PEP 586 and typing.Literal. Literal types indicate that a parameter or return value is constrained to one or more specific literal values:

def get_status(port: int) -> Literal['connected', 'disconnected']:
    ...

"Final" variables, functions, methods and classes.
See PEP 591, typing.Final and typing.final(). The final qualifier instructs a static type checker to restrict subclassing, overriding, or reassignment:

pi: Final[float] = 3.1415926536

Protocol definitions. See PEP 544, typing.Protocol and typing.runtime_checkable(). Simple ABCs like typing.SupportsInt are now Protocol subclasses.

New protocol class typing.SupportsIndex.

New functions typing.get_origin() and typing.get_args().

unicodedata

The unicodedata module has been upgraded to use the Unicode 12.1.0 release.

New function is_normalized() can be used to verify a string is in a specific normal form, often much faster than by actually normalizing the string. (Contributed by Max Belanger, David Euresti, and Greg Price in bpo-32285 and bpo-37966.)

unittest

Added AsyncMock to support an asynchronous version of Mock. Appropriate new assert functions for testing have been added as well.
(Contributed by Lisa Roach in bpo-26467.)

Added addModuleCleanup() and addClassCleanup() to unittest to support cleanups for setUpModule() and setUpClass().
(Contributed by Lisa Roach in bpo-24412.)

Several mock assert functions now also print a list of actual calls upon failure.
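The improved failure message can be seen by triggering one; the exact wording is an implementation detail, but the calls that actually happened now appear in it (a sketch):

```python
from unittest import mock

m = mock.Mock()
m(1)
m(2)

try:
    m.assert_called_once_with(3)
except AssertionError as e:
    message = str(e)

# the failure message now lists the actual calls
assert "call(1)" in message and "call(2)" in message
```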
(Contributed by Petter Strandmark in bpo-35047.)

The unittest module gained support for coroutines to be used as test cases with unittest.IsolatedAsyncioTestCase.
(Contributed by Andrew Svetlov in bpo-32972.)

Example:

import unittest

class TestRequest(unittest.IsolatedAsyncioTestCase):

    async def asyncSetUp(self):
        self.connection = await AsyncConnection()

    async def test_get(self):
        response = await self.connection.get("https://example.com")
        self.assertEqual(response.status_code, 200)

    async def asyncTearDown(self):
        await self.connection.close()

if __name__ == "__main__":
    unittest.main()

venv

venv now includes an Activate.ps1 script on all platforms for activating virtual environments under PowerShell Core 6.1.
(Contributed by Brett Cannon in bpo-32718.)

weakref

The proxy objects returned by weakref.proxy() now support the matrix multiplication operators @ and @= in addition to the other numeric operators. (Contributed by Mark Dickinson in bpo-36669.)

xml

As mitigation against DTD and external entity retrieval, the xml.dom.minidom and xml.sax modules no longer process external entities by default.
(Contributed by Christian Heimes in bpo-17239.)

The .find*() methods in the xml.etree.ElementTree module support wildcard searches like {*}tag, which ignores the namespace, and {namespace}*, which returns all tags in the given namespace.
(Contributed by Stefan Behnel in bpo-28238.)

The xml.etree.ElementTree module provides a new function canonicalize() that implements C14N 2.0.
(Contributed by Stefan Behnel in bpo-13611.)

The target object of xml.etree.ElementTree.XMLParser can receive namespace declaration events through the new callback methods start_ns() and end_ns().
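A minimal sketch of the new namespace-declaration callbacks on a custom parser target (the element-handling methods a full target would define are optional here):

```python
import xml.etree.ElementTree as ET

events = []

class Target:
    # called for each xmlns declaration as it opens and closes
    def start_ns(self, prefix, uri):
        events.append(("start-ns", prefix, uri))
    def end_ns(self, prefix):
        events.append(("end-ns", prefix))
    def close(self):
        return events

parser = ET.XMLParser(target=Target())
parser.feed('<a:root xmlns:a="urn:example"><a:child/></a:root>')
result = parser.close()
assert ("start-ns", "a", "urn:example") in result
```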
Additionally, the xml.etree.ElementTree.TreeBuilder target can be configured to process events about comments and processing instructions to include them in the generated tree.
(Contributed by Stefan Behnel in bpo-36676 and bpo-36673.)

xmlrpc

xmlrpc.client.ServerProxy now supports an optional headers keyword argument for a sequence of HTTP headers to be sent with each request. Among other things, this makes it possible to upgrade from default basic authentication to faster session authentication.
(Contributed by Cédric Krier in bpo-35153.)

Optimizations

The subprocess module can now use the os.posix_spawn() function in some cases for better performance. Currently, it is only used on macOS and Linux (using glibc 2.24 or newer) if all these conditions are met:

- close_fds is false;
- preexec_fn, pass_fds, cwd and start_new_session parameters are not set;
- the executable path contains a directory.

(Contributed by Joannah Nanjekye and Victor Stinner in bpo-35537.)

shutil.copyfile(), shutil.copy(), shutil.copy2(), shutil.copytree() and shutil.move() use platform-specific "fast-copy" syscalls on Linux and macOS in order to copy the file more efficiently. "fast-copy" means that the copying operation occurs within the kernel, avoiding the use of userspace buffers in Python as in "outfd.write(infd.read())". On Windows shutil.copyfile() uses a bigger default buffer size (1 MiB instead of 16 KiB) and a memoryview()-based variant of shutil.copyfileobj() is used. The speedup for copying a 512 MiB file within the same partition is about +26% on Linux, +50% on macOS and +40% on Windows. Also, much less CPU cycles are consumed. See the Platform-dependent efficient copy operations section. (Contributed by Giampaolo Rodolà in bpo-33671.)

shutil.copytree() uses the os.scandir() function, and all copy functions that depend on it use cached os.stat() values.
The speedup for copying a directory with 8000 files is around +9% on Linux, +20% on Windows and +30% on a Windows SMB share. Also the number of os.stat() syscalls is reduced by 38%, making shutil.copytree() especially faster on network filesystems. (Contributed by Giampaolo Rodolà in bpo-33695.)

The default protocol in the pickle module is now Protocol 4, first introduced in Python 3.4. It offers better performance and smaller size compared to Protocol 3 available since Python 3.0.

Removed one Py_ssize_t member from PyGC_Head. All GC tracked objects (e.g. tuple, list, dict) are reduced in size by 4 or 8 bytes. (Contributed by Inada Naoki in bpo-33597.)

uuid.UUID now uses __slots__ to reduce its memory footprint. (Contributed by Wouter Bolsterlee and Tal Einat in bpo-30977.)

Improved performance of operator.itemgetter() by 33%. Optimized argument handling and added a fast path for the common case of a single non-negative integer index into a tuple (which is the typical use case in the standard library). (Contributed by Raymond Hettinger in bpo-35664.)

Sped-up field lookups in collections.namedtuple(). They are now more than two times faster, making them the fastest form of instance variable lookup in Python. (Contributed by Raymond Hettinger, Pablo Galindo, Joe Jevnik, and Serhiy Storchaka in bpo-32492.)

The list constructor does not overallocate the internal item buffer if the input iterable has a known length (the input implements __len__). This makes the created list 12% smaller on average. (Contributed by Raymond Hettinger and Pablo Galindo in bpo-33234.)

Doubled the speed of class variable writes. When a non-dunder attribute was updated, there was an unnecessary call to update slots. (Contributed by Stefan Behnel, Pablo Galindo Salgado, Raymond Hettinger, Neil Schemenauer, and Serhiy Storchaka in bpo-36012.)

Reduced the overhead of converting arguments passed to many builtin functions and methods.
This sped up calling some simple builtin functions and methods up to 20–50%. (Contributed by Serhiy Storchaka in bpo-23867, bpo-35582 and bpo-36127.)

The LOAD_GLOBAL instruction now uses a new "per opcode cache" mechanism. It is about 40% faster now. (Contributed by Yury Selivanov and Inada Naoki in bpo-26219.)

Build and C API Changes

Default sys.abiflags became an empty string: the m flag for pymalloc became useless (builds with and without pymalloc are ABI compatible) and so has been removed. (Contributed by Victor Stinner in bpo-36707.)

Example of changes:

- Only the python3.8 program is installed; the python3.8m program is gone.
- Only the python3.8-config script is installed; the python3.8m-config script is gone.
- The m flag has been removed from the suffix of dynamic library filenames: extension modules in the standard library as well as those produced and installed by third-party packages, like those downloaded from PyPI. On Linux, for example, the Python 3.7 suffix .cpython-37m-x86_64-linux-gnu.so became .cpython-38-x86_64-linux-gnu.so in Python 3.8.

The header files have been reorganized to better separate the different kinds of APIs:

- Include/*.h should be the portable public stable C API.
- Include/cpython/*.h should be the unstable C API specific to CPython; public API, with some private API prefixed by _Py or _PY.
- Include/internal/*.h is the private internal C API very specific to CPython. This API comes with no backward compatibility warranty and should not be used outside CPython. It is only exposed for very specific needs like debuggers and profilers, which have to access CPython internals without calling functions.
This API is now installed by make install.
(Contributed by Victor Stinner in bpo-35134 and bpo-35081, work initiated by Eric Snow in Python 3.7.)

Some macros have been converted to static inline functions: parameter types and return type are well defined, they don't have issues specific to macros, and variables have a local scope. Examples:

- PyObject_INIT, PyObject_INIT_VAR
- Private functions: _PyObject_GC_TRACK(), _PyObject_GC_UNTRACK(), _Py_Dealloc()

(Contributed by Victor Stinner in bpo-35059.)

The PyByteArray_Init() and PyByteArray_Fini() functions have been removed. They did nothing since Python 2.7.4 and Python 3.2.0, were excluded from the limited API (stable ABI), and were not documented. (Contributed by Victor Stinner in bpo-35713.)

The result of PyExceptionClass_Name() is now of type const char * rather than char *. (Contributed by Serhiy Storchaka in bpo-33818.)

The duality of Modules/Setup.dist and Modules/Setup has been removed. Previously, when updating the CPython source tree, one had to manually copy Modules/Setup.dist (inside the source tree) to Modules/Setup (inside the build tree) in order to reflect any changes upstream. This was of a small benefit to packagers at the expense of a frequent annoyance to developers following CPython development, as forgetting to copy the file could produce build failures.

Now the build system always reads from Modules/Setup inside the source tree. People who want to customize that file are encouraged to maintain their changes in a git fork of CPython or as patch files, as they would do for any other change to the source tree.
(Contributed by Antoine Pitrou in bpo-32430.)

Functions that convert Python numbers to C integers, like PyLong_AsLong(), and argument parsing functions like PyArg_ParseTuple() with integer converting format units like 'i' will now use the __index__() special method instead of __int__(), if available.
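The same preference for __index__() over __int__() is visible from pure Python; a sketch using a hypothetical Handle type (not from the standard library):

```python
import operator

class Handle:
    """A hypothetical int-like type: defines __index__, not __int__."""
    def __init__(self, fd):
        self.fd = fd
    def __index__(self):
        return self.fd

h = Handle(255)
assert operator.index(h) == 255   # the lossless integer conversion
assert hex(h) == '0xff'           # hex()/bin()/oct() accept any __index__ object
assert bin(Handle(4)) == '0b100'
```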
The deprecation warning will be emitted for objects with the __int__() method but without the __index__() method (like Decimal and Fraction). PyNumber_Check() will now return 1 for objects implementing __index__(). PyNumber_Long(), PyNumber_Float() and PyFloat_AsDouble() also now use the __index__() method if available. (Contributed by Serhiy Storchaka in bpo-36048 and bpo-20092.)

Heap-allocated type objects will now increase their reference count in PyObject_Init() (and its parallel macro PyObject_INIT) instead of in PyType_GenericAlloc(). Types that modify instance allocation or deallocation may need to be adjusted. (Contributed by Eddie Elizondo in bpo-35810.)

The new function PyCode_NewWithPosOnlyArgs() allows creating code objects like PyCode_New(), but with an extra posonlyargcount parameter for indicating the number of positional-only arguments. (Contributed by Pablo Galindo in bpo-37221.)

Py_SetPath() now sets sys.executable to the program full path (Py_GetProgramFullPath()) rather than to the program name (Py_GetProgramName()). (Contributed by Victor Stinner in bpo-38234.)

Deprecated

The distutils bdist_wininst command is now deprecated; use bdist_wheel (wheel packages) instead. (Contributed by Victor Stinner in bpo-37481.)

Deprecated methods getchildren() and getiterator() in the ElementTree module now emit a DeprecationWarning instead of PendingDeprecationWarning. They will be removed in Python 3.9. (Contributed by Serhiy Storchaka in bpo-29209.)

Passing an object that is not an instance of concurrent.futures.ThreadPoolExecutor to loop.set_default_executor() is deprecated and will be prohibited in Python 3.9. (Contributed by Elvis Pranskevichus in bpo-34075.)

The __getitem__() methods of xml.dom.pulldom.DOMEventStream, wsgiref.util.FileWrapper and fileinput.FileInput have been deprecated. Implementations of these methods have been ignoring their index parameter, and returning the next item instead.
(Contributed by Berker Peksag in bpo-9372.)

The typing.NamedTuple class has deprecated the _field_types attribute in favor of the __annotations__ attribute, which carries the same information. (Contributed by Raymond Hettinger in bpo-36320.)

The ast classes Num, Str, Bytes, NameConstant and Ellipsis are considered deprecated and will be removed in future Python versions. Constant should be used instead. (Contributed by Serhiy Storchaka in bpo-32892.)

The ast.NodeVisitor methods visit_Num(), visit_Str(), visit_Bytes(), visit_NameConstant() and visit_Ellipsis() are deprecated now and will not be called in future Python versions. Add the visit_Constant() method to handle all constant nodes. (Contributed by Serhiy Storchaka in bpo-36917.)

The @asyncio.coroutine decorator is deprecated and will be removed in version 3.10. Use async def instead. (Contributed by Andrew Svetlov in bpo-36921.)

In asyncio, the explicit passing of a loop argument has been deprecated and will be removed in version 3.10 for the following: asyncio.sleep(), asyncio.gather(), asyncio.shield(), asyncio.wait_for(), asyncio.wait(), asyncio.as_completed(), asyncio.Task, asyncio.Lock, asyncio.Event, asyncio.Condition, asyncio.Semaphore, asyncio.BoundedSemaphore, asyncio.Queue, asyncio.create_subprocess_exec(), and asyncio.create_subprocess_shell().

The explicit passing of coroutine objects to asyncio.wait() has been deprecated and will be removed in version 3.11. (Contributed by Yury Selivanov in bpo-34790.)

The following functions and methods are deprecated in the gettext module: lgettext(), ldgettext(), lngettext() and ldngettext(). They return encoded bytes, and it's possible that you will get unexpected Unicode-related exceptions if there are encoding problems with the translated strings. It's much better to use alternatives which return Unicode strings in Python 3.
These functions have been broken for a long time.

Function bind_textdomain_codeset(), methods NullTranslations.output_charset() and NullTranslations.set_output_charset(), and the codeset parameter of functions translation() and install() are also deprecated, since they are only used for the l*gettext() functions. (Contributed by Serhiy Storchaka in bpo-33710.)

The isAlive() method of threading.Thread has been deprecated. (Contributed by Donghee Na in bpo-35283.)

Many builtin and extension functions that take integer arguments will now emit a deprecation warning for Decimals, Fractions and any other objects that can be converted to integers only with a loss (e.g. that have the __int__() method but do not have the __index__() method). In future versions they will be errors. (Contributed by Serhiy Storchaka in bpo-36048.)

Deprecated passing the following arguments as keyword arguments:

- func in functools.partialmethod(), weakref.finalize(), profile.Profile.runcall(), cProfile.Profile.runcall(), bdb.Bdb.runcall(), trace.Trace.runfunc() and curses.wrapper().
- function in unittest.TestCase.addCleanup().
- fn in the submit() method of concurrent.futures.ThreadPoolExecutor and concurrent.futures.ProcessPoolExecutor.
- callback in contextlib.ExitStack.callback(), contextlib.AsyncExitStack.callback() and contextlib.AsyncExitStack.push_async_callback().
- c and typeid in the create() method of multiprocessing.managers.Server and multiprocessing.managers.SharedMemoryServer.
- obj in weakref.finalize().

In future releases of Python, they will be positional-only. (Contributed by Serhiy Storchaka in bpo-36492.)

API and Feature Removals

The following features and APIs have been removed from Python 3.8:

Starting with Python 3.3, importing ABCs from collections was deprecated, and importing should be done from collections.abc. Being able to import from collections was marked for removal in 3.8, but has been delayed to 3.9.
(See gh-81134.)The\nmacpath\nmodule, deprecated in Python 3.7, has been removed. (Contributed by Victor Stinner in bpo-35471.)The function\nplatform.popen()\nhas been removed, after having been deprecated since Python 3.3: useos.popen()\ninstead. (Contributed by Victor Stinner in bpo-35345.)The function\ntime.clock()\nhas been removed, after having been deprecated since Python 3.3: usetime.perf_counter()\nortime.process_time()\ninstead, depending on your requirements, to have well-defined behavior. (Contributed by Matthias Bussonnier in bpo-36895.)The\npyvenv\nscript has been removed in favor ofpython3.8 -m venv\nto help eliminate confusion as to what Python interpreter thepyvenv\nscript is tied to. (Contributed by Brett Cannon in bpo-25427.)parse_qs\n,parse_qsl\n, andescape\nare removed from thecgi\nmodule. They are deprecated in Python 3.2 or older. They should be imported from theurllib.parse\nandhtml\nmodules instead.filemode\nfunction is removed from thetarfile\nmodule. It is not documented and deprecated since Python 3.3.The\nXMLParser\nconstructor no longer accepts the html argument. It never had an effect and was deprecated in Python 3.4. All other parameters are now keyword-only. (Contributed by Serhiy Storchaka in bpo-29209.)Removed the\ndoctype()\nmethod ofXMLParser\n. (Contributed by Serhiy Storchaka in bpo-29209.)\u201cunicode_internal\u201d codec is removed. (Contributed by Inada Naoki in bpo-36297.)\nThe\nCache\nandStatement\nobjects of thesqlite3\nmodule are not exposed to the user. (Contributed by Aviv Palivoda in bpo-30262.)The\nbufsize\nkeyword argument offileinput.input()\nandfileinput.FileInput()\nwhich was ignored and deprecated since Python 3.6 has been removed. 
bpo-36952 (Contributed by Matthias Bussonnier.)\nThe functions sys.set_coroutine_wrapper() and sys.get_coroutine_wrapper(), deprecated in Python 3.7, have been removed; bpo-36933 (Contributed by Matthias Bussonnier.)\nPorting to Python 3.8\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code.\nChanges in Python behavior\u00b6\nYield expressions (both yield and yield from clauses) are now disallowed in comprehensions and generator expressions (aside from the iterable expression in the leftmost for clause). (Contributed by Serhiy Storchaka in bpo-10544.)\nThe compiler now produces a SyntaxWarning when identity checks (is and is not) are used with certain types of literals (e.g. strings, numbers). These can often work by accident in CPython, but are not guaranteed by the language spec. The warning advises users to use equality tests (== and !=) instead. (Contributed by Serhiy Storchaka in bpo-34850.)\nThe CPython interpreter can swallow exceptions in some circumstances. In Python 3.8 this happens in fewer cases. In particular, exceptions raised when getting the attribute from the type dictionary are no longer ignored. (Contributed by Serhiy Storchaka in bpo-35459.)\nRemoved __str__ implementations from the builtin types bool, int, float, complex and a few classes from the standard library. They now inherit __str__() from object. As a result, defining the __repr__() method in a subclass of these classes will affect their string representation. (Contributed by Serhiy Storchaka in bpo-36793.)\nOn AIX, sys.platform doesn\u2019t contain the major version anymore. It is always 'aix' instead of 'aix3' through 'aix7'. Since older Python versions include the version number, it is recommended to always use sys.platform.startswith('aix'). (Contributed by M.
Felt in bpo-36588.)\nPyEval_AcquireLock() and PyEval_AcquireThread() now terminate the current thread if called while the interpreter is finalizing, making them consistent with PyEval_RestoreThread(), Py_END_ALLOW_THREADS(), and PyGILState_Ensure(). If this behavior is not desired, guard the call by checking _Py_IsFinalizing() or sys.is_finalizing(). (Contributed by Joannah Nanjekye in bpo-36475.)\nChanges in the Python API\u00b6\nThe os.getcwdb() function now uses the UTF-8 encoding on Windows, rather than the ANSI code page: see PEP 529 for the rationale. The function is no longer deprecated on Windows. (Contributed by Victor Stinner in bpo-37412.)\nsubprocess.Popen can now use os.posix_spawn() in some cases for better performance. On Windows Subsystem for Linux and QEMU User Emulation, the Popen constructor using os.posix_spawn() no longer raises an exception on errors like \u201cmissing program\u201d. Instead the child process fails with a non-zero returncode. (Contributed by Joannah Nanjekye and Victor Stinner in bpo-35537.)\nThe preexec_fn argument of subprocess.Popen is no longer compatible with subinterpreters. The use of the parameter in a subinterpreter now raises RuntimeError. (Contributed by Eric Snow in bpo-34651, modified by Christian Heimes in bpo-37951.)\nThe imaplib.IMAP4.logout() method no longer silently ignores arbitrary exceptions. (Contributed by Victor Stinner in bpo-36348.)\nThe function platform.popen() has been removed, after having been deprecated since Python 3.3: use os.popen() instead. (Contributed by Victor Stinner in bpo-35345.)\nThe statistics.mode() function no longer raises an exception when given multimodal data. Instead, it returns the first mode encountered in the input data. (Contributed by Raymond Hettinger in bpo-35892.)\nThe selection() method of the tkinter.ttk.Treeview class no longer takes arguments. Using it with arguments for changing the selection was deprecated in Python 3.6.
Use specialized methods likeselection_set()\nfor changing the selection. (Contributed by Serhiy Storchaka in bpo-31508.)The\nwritexml()\n,toxml()\nandtoprettyxml()\nmethods ofxml.dom.minidom\nand thewrite()\nmethod ofxml.etree.ElementTree\nnow preserve the attribute order specified by the user. (Contributed by Diego Rojas and Raymond Hettinger in bpo-34160.)A\ndbm.dumb\ndatabase opened with flags'r'\nis now read-only.dbm.dumb.open()\nwith flags'r'\nand'w'\nno longer creates a database if it does not exist. (Contributed by Serhiy Storchaka in bpo-32749.)The\ndoctype()\nmethod defined in a subclass ofXMLParser\nwill no longer be called and will emit aRuntimeWarning\ninstead of aDeprecationWarning\n. Define thedoctype()\nmethod on a target for handling an XML doctype declaration. (Contributed by Serhiy Storchaka in bpo-29209.)A\nRuntimeError\nis now raised when the custom metaclass doesn\u2019t provide the__classcell__\nentry in the namespace passed totype.__new__\n. ADeprecationWarning\nwas emitted in Python 3.6\u20133.7. (Contributed by Serhiy Storchaka in bpo-23722.)The\ncProfile.Profile\nclass can now be used as a context manager. (Contributed by Scott Sanderson in bpo-29235.)shutil.copyfile()\n,shutil.copy()\n,shutil.copy2()\n,shutil.copytree()\nandshutil.move()\nuse platform-specific \u201cfast-copy\u201d syscalls (see Platform-dependent efficient copy operations section).shutil.copyfile()\ndefault buffer size on Windows was changed from 16 KiB to 1 MiB.The\nPyGC_Head\nstruct has changed completely. All code that touched the struct member should be rewritten. (See bpo-33597.)The\nPyInterpreterState\nstruct has been moved into the \u201cinternal\u201d header files (specifically Include/internal/pycore_pystate.h). An opaquePyInterpreterState\nis still available as part of the public API (and stable ABI). The docs indicate that none of the struct\u2019s fields are public, so we hope no one has been using them. 
However, if you do rely on one or more of those private fields and have no alternative then please open a BPO issue. We\u2019ll work on helping you adjust (possibly including adding accessor functions to the public API). (See bpo-35886.)The\nmmap.flush()\nmethod now returnsNone\non success and raises an exception on error under all platforms. Previously, its behavior was platform-dependent: a nonzero value was returned on success; zero was returned on error under Windows. A zero value was returned on success; an exception was raised on error under Unix. (Contributed by Berker Peksag in bpo-2122.)xml.dom.minidom\nandxml.sax\nmodules no longer process external entities by default. (Contributed by Christian Heimes in bpo-17239.)Deleting a key from a read-only\ndbm\ndatabase (dbm.dumb\n,dbm.gnu\nordbm.ndbm\n) raiseserror\n(dbm.dumb.error\n,dbm.gnu.error\nordbm.ndbm.error\n) instead ofKeyError\n. (Contributed by Xiang Zhang in bpo-33106.)Simplified AST for literals. All constants will be represented as\nast.Constant\ninstances. Instantiating old classesNum\n,Str\n,Bytes\n,NameConstant\nandEllipsis\nwill return an instance ofConstant\n. (Contributed by Serhiy Storchaka in bpo-32892.)expanduser()\non Windows now prefers theUSERPROFILE\nenvironment variable and does not useHOME\n, which is not normally set for regular user accounts. (Contributed by Anthony Sottile in bpo-36264.)The exception\nasyncio.CancelledError\nnow inherits fromBaseException\nrather thanException\nand no longer inherits fromconcurrent.futures.CancelledError\n. (Contributed by Yury Selivanov in bpo-32528.)The function\nasyncio.wait_for()\nnow correctly waits for cancellation when using an instance ofasyncio.Task\n. Previously, upon reaching timeout, it was cancelled and immediately returned. (Contributed by Elvis Pranskevichus in bpo-32751.)The function\nasyncio.BaseTransport.get_extra_info()\nnow returns a safe to use socket object when \u2018socket\u2019 is passed to the name parameter. 
(Contributed by Yury Selivanov in bpo-37027.)asyncio.BufferedProtocol\nhas graduated to the stable API.\nDLL dependencies for extension modules and DLLs loaded with\nctypes\non Windows are now resolved more securely. Only the system paths, the directory containing the DLL or PYD file, and directories added withadd_dll_directory()\nare searched for load-time dependencies. Specifically,PATH\nand the current working directory are no longer used, and modifications to these will no longer have any effect on normal DLL resolution. If your application relies on these mechanisms, you should check foradd_dll_directory()\nand if it exists, use it to add your DLLs directory while loading your library. Note that Windows 7 users will need to ensure that Windows Update KB2533623 has been installed (this is also verified by the installer). (Contributed by Steve Dower in bpo-36085.)The header files and functions related to pgen have been removed after its replacement by a pure Python implementation. (Contributed by Pablo Galindo in bpo-36623.)\ntypes.CodeType\nhas a new parameter in the second position of the constructor (posonlyargcount) to support positional-only arguments defined in PEP 570. The first argument (argcount) now represents the total number of positional arguments (including positional-only arguments). The newreplace()\nmethod oftypes.CodeType\ncan be used to make the code future-proof.The parameter\ndigestmod\nforhmac.new()\nno longer uses the MD5 digest by default.\nChanges in the C API\u00b6\nThe\nPyCompilerFlags\nstructure got a new cf_feature_version field. It should be initialized toPY_MINOR_VERSION\n. The field is ignored by default, and is used if and only ifPyCF_ONLY_AST\nflag is set in cf_flags. (Contributed by Guido van Rossum in bpo-35766.)The\nPyEval_ReInitThreads()\nfunction has been removed from the C API. It should not be called explicitly: usePyOS_AfterFork_Child()\ninstead. 
(Contributed by Victor Stinner in bpo-36728.)On Unix, C extensions are no longer linked to libpython except on Android and Cygwin. When Python is embedded,\nlibpython\nmust not be loaded withRTLD_LOCAL\n, butRTLD_GLOBAL\ninstead. Previously, usingRTLD_LOCAL\n, it was already not possible to load C extensions which were not linked tolibpython\n, like C extensions of the standard library built by the*shared*\nsection ofModules/Setup\n. (Contributed by Victor Stinner in bpo-21536.)Use of\n#\nvariants of formats in parsing or building value (e.g.PyArg_ParseTuple()\n,Py_BuildValue()\n,PyObject_CallFunction()\n, etc.) withoutPY_SSIZE_T_CLEAN\ndefined raisesDeprecationWarning\nnow. It will be removed in 3.10 or 4.0. Read Parsing arguments and building values for detail. (Contributed by Inada Naoki in bpo-36381.)Instances of heap-allocated types (such as those created with\nPyType_FromSpec()\n) hold a reference to their type object. Increasing the reference count of these type objects has been moved fromPyType_GenericAlloc()\nto the more low-level functions,PyObject_Init()\nandPyObject_INIT\n. This makes types created throughPyType_FromSpec()\nbehave like other classes in managed code.Statically allocated types are not affected.\nFor the vast majority of cases, there should be no side effect. However, types that manually increase the reference count after allocating an instance (perhaps to work around the bug) may now become immortal. To avoid this, these classes need to call Py_DECREF on the type object during instance deallocation.\nTo correctly port these types into 3.8, please apply the following changes:\nRemove\nPy_INCREF\non the type object after allocating an instance - if any. 
This may happen after calling PyObject_New, PyObject_NewVar, PyObject_GC_New(), PyObject_GC_NewVar(), or any other custom allocator that uses PyObject_Init() or PyObject_INIT.\nExample:\nstatic foo_struct *\nfoo_new(PyObject *type) {\n    foo_struct *foo = PyObject_GC_New(foo_struct, (PyTypeObject *) type);\n    if (foo == NULL)\n        return NULL;\n#if PY_VERSION_HEX < 0x03080000\n    // Workaround for Python issue 35810; no longer necessary in Python 3.8\n    Py_INCREF(type);\n#endif\n    return foo;\n}\nEnsure that all custom tp_dealloc functions of heap-allocated types decrease the type\u2019s reference count.\nExample:\nstatic void\nfoo_dealloc(foo_struct *instance) {\n    PyObject *type = Py_TYPE(instance);\n    PyObject_GC_Del(instance);\n#if PY_VERSION_HEX >= 0x03080000\n    // This was not needed before Python 3.8 (Python issue 35810)\n    Py_DECREF(type);\n#endif\n}\n(Contributed by Eddie Elizondo in bpo-35810.)\nThe Py_DEPRECATED() macro has been implemented for MSVC. The macro must now be placed before the symbol name.\nExample:\nPy_DEPRECATED(3.8) PyAPI_FUNC(int) Py_OldFunction(void);\n(Contributed by Zackery Spytz in bpo-33407.)\nThe interpreter no longer pretends to support binary compatibility of extension types across feature releases. A PyTypeObject exported by a third-party extension module is supposed to have all the slots expected in the current Python version, including tp_finalize (Py_TPFLAGS_HAVE_FINALIZE is no longer checked before reading tp_finalize). (Contributed by Antoine Pitrou in bpo-32388.)\nThe functions PyNode_AddChild() and PyParser_AddToken() now accept two additional int arguments, end_lineno and end_col_offset.\nThe libpython38.a file to allow MinGW tools to link directly against python38.dll is no longer included in the regular Windows distribution.
If you require this file, it may be generated with thegendef\nanddlltool\ntools, which are part of the MinGW binutils package:gendef - python38.dll > tmp.def dlltool --dllname python38.dll --def tmp.def --output-lib libpython38.a\nThe location of an installed\npythonXY.dll\nwill depend on the installation options and the version and language of Windows. See Using Python on Windows for more information. The resulting library should be placed in the same directory aspythonXY.lib\n, which is generally thelibs\ndirectory under your Python installation.(Contributed by Steve Dower in bpo-37351.)\nCPython bytecode changes\u00b6\nThe interpreter loop has been simplified by moving the logic of unrolling the stack of blocks into the compiler. The compiler emits now explicit instructions for adjusting the stack of values and calling the cleaning-up code for\nbreak\n,continue\nandreturn\n.Removed opcodes\nBREAK_LOOP\n,CONTINUE_LOOP\n,SETUP_LOOP\nandSETUP_EXCEPT\n. Added new opcodesROT_FOUR\n,BEGIN_FINALLY\n,CALL_FINALLY\nandPOP_FINALLY\n. Changed the behavior ofEND_FINALLY\nandWITH_CLEANUP_START\n.(Contributed by Mark Shannon, Antoine Pitrou and Serhiy Storchaka in bpo-17611.)\nAdded new opcode\nEND_ASYNC_FOR\nfor handling exceptions raised when awaiting a next item in anasync for\nloop. (Contributed by Serhiy Storchaka in bpo-33041.)The\nMAP_ADD\nnow expects the value as the first element in the stack and the key as the second element. This change was made so the key is always evaluated before the value in dictionary comprehensions, as proposed by PEP 572. 
(Contributed by J\u00f6rn Heissler in bpo-35224.)\nDemos and Tools\u00b6\nAdded a benchmark script for timing various ways to access variables:\nTools/scripts/var_access_benchmark.py\n.\n(Contributed by Raymond Hettinger in bpo-35884.)\nHere\u2019s a summary of performance improvements since Python 3.3:\nPython version 3.3 3.4 3.5 3.6 3.7 3.8\n-------------- --- --- --- --- --- ---\nVariable and attribute read access:\nread_local 4.0 7.1 7.1 5.4 5.1 3.9\nread_nonlocal 5.3 7.1 8.1 5.8 5.4 4.4\nread_global 13.3 15.5 19.0 14.3 13.6 7.6\nread_builtin 20.0 21.1 21.6 18.5 19.0 7.5\nread_classvar_from_class 20.5 25.6 26.5 20.7 19.5 18.4\nread_classvar_from_instance 18.5 22.8 23.5 18.8 17.1 16.4\nread_instancevar 26.8 32.4 33.1 28.0 26.3 25.4\nread_instancevar_slots 23.7 27.8 31.3 20.8 20.8 20.2\nread_namedtuple 68.5 73.8 57.5 45.0 46.8 18.4\nread_boundmethod 29.8 37.6 37.9 29.6 26.9 27.7\nVariable and attribute write access:\nwrite_local 4.6 8.7 9.3 5.5 5.3 4.3\nwrite_nonlocal 7.3 10.5 11.1 5.6 5.5 4.7\nwrite_global 15.9 19.7 21.2 18.0 18.0 15.8\nwrite_classvar 81.9 92.9 96.0 104.6 102.1 39.2\nwrite_instancevar 36.4 44.6 45.8 40.0 38.9 35.5\nwrite_instancevar_slots 28.7 35.6 36.1 27.3 26.6 25.7\nData structure read access:\nread_list 19.2 24.2 24.5 20.8 20.8 19.0\nread_deque 19.9 24.7 25.5 20.2 20.6 19.8\nread_dict 19.7 24.3 25.7 22.3 23.0 21.0\nread_strdict 17.9 22.6 24.3 19.5 21.2 18.9\nData structure write access:\nwrite_list 21.2 27.1 28.5 22.5 21.6 20.0\nwrite_deque 23.8 28.7 30.1 22.7 21.8 23.5\nwrite_dict 25.9 31.4 33.3 29.3 29.2 24.7\nwrite_strdict 22.9 28.4 29.9 27.5 25.2 23.1\nStack (or queue) operations:\nlist_append_pop 144.2 93.4 112.7 75.4 74.2 50.8\ndeque_append_pop 30.4 43.5 57.0 49.4 49.2 42.5\ndeque_append_popleft 30.8 43.7 57.3 49.7 49.7 42.8\nTiming loop:\nloop_overhead 0.3 0.5 0.6 0.4 0.3 0.3\nThe benchmarks were measured on an Intel\u00ae Core\u2122 i7-4960HQ processor running the macOS 64-bit builds found at python.org. 
The benchmark script displays timings in nanoseconds.\nNotable changes in Python 3.8.1\u00b6\nDue to significant security concerns, the reuse_address parameter of\nasyncio.loop.create_datagram_endpoint()\nis no longer supported. This is\nbecause of the behavior of the socket option SO_REUSEADDR\nin UDP. For more\ndetails, see the documentation for loop.create_datagram_endpoint()\n.\n(Contributed by Kyle Stanley, Antoine Pitrou, and Yury Selivanov in\nbpo-37228.)\nNotable changes in Python 3.8.2\u00b6\nFixed a regression with the ignore\ncallback of shutil.copytree()\n.\nThe argument types are now str and List[str] again.\n(Contributed by Manuel Barkhau and Giampaolo Rodola in gh-83571.)\nNotable changes in Python 3.8.3\u00b6\nThe constant values of future flags in the __future__\nmodule\nare updated in order to prevent collision with compiler flags. Previously\nPyCF_ALLOW_TOP_LEVEL_AWAIT\nwas clashing with CO_FUTURE_DIVISION\n.\n(Contributed by Batuhan Taskaya in gh-83743)\nNotable changes in Python 3.8.8\u00b6\nEarlier Python versions allowed using both ;\nand &\nas\nquery parameter separators in urllib.parse.parse_qs()\nand\nurllib.parse.parse_qsl()\n. Due to security concerns, and to conform with\nnewer W3C recommendations, this has been changed to allow only a single\nseparator key, with &\nas the default. This change also affects\ncgi.parse()\nand cgi.parse_multipart()\nas they use the affected\nfunctions internally. For more details, please see their respective\ndocumentation.\n(Contributed by Adam Goldschmidt, Senthil Kumaran and Ken Jin in bpo-42967.)\nNotable changes in Python 3.8.9\u00b6\nA security fix alters the ftplib.FTP\nbehavior to not trust the\nIPv4 address sent from the remote server when setting up a passive data\nchannel. We reuse the ftp server IP address instead. For unusual code\nrequiring the old behavior, set a trust_server_pasv_ipv4_address\nattribute on your FTP instance to True\n. 
(See gh-87451)\nNotable changes in Python 3.8.10\u00b6\nmacOS 11.0 (Big Sur) and Apple Silicon Mac support\u00b6\nAs of 3.8.10, Python now supports building and running on macOS 11\n(Big Sur) and on Apple Silicon Macs (based on the ARM64\narchitecture).\nA new universal build variant, universal2\n, is now available to natively\nsupport both ARM64\nand Intel 64\nin one set of executables.\nNote that support for \u201cweaklinking\u201d, building binaries targeted for newer\nversions of macOS that will also run correctly on older versions by\ntesting at runtime for missing features, is not included in this backport\nfrom Python 3.9; to support a range of macOS versions, continue to target\nfor and build on the oldest version in the range.\n(Originally contributed by Ronald Oussoren and Lawrence D\u2019Anna in gh-85272, with fixes by FX Coudert and Eli Rykoff, and backported to 3.8 by Maxime B\u00e9langer and Ned Deily)\nNotable changes in Python 3.8.10\u00b6\nurllib.parse\u00b6\nThe presence of newline or tab characters in parts of a URL allows for some\nforms of attacks. Following the WHATWG specification that updates RFC 3986,\nASCII newline \\n\n, \\r\nand tab \\t\ncharacters are stripped from the\nURL by the parser in urllib.parse\npreventing such attacks. The removal\ncharacters are controlled by a new module level variable\nurllib.parse._UNSAFE_URL_BYTES_TO_REMOVE\n. (See bpo-43882)\nNotable changes in Python 3.8.12\u00b6\nChanges in the Python API\u00b6\nStarting with Python 3.8.12 the ipaddress\nmodule no longer accepts\nany leading zeros in IPv4 address strings. Leading zeros are ambiguous and\ninterpreted as octal notation by some libraries. 
For example, the legacy function socket.inet_aton() treats leading zeros as octal notation, while the glibc implementation of the modern inet_pton() does not accept any leading zeros.\n(Originally contributed by Christian Heimes in bpo-36384, and backported to 3.8 by Achraf Merzouki.)\nNotable security feature in 3.8.14\u00b6\nConverting between int and str in bases other than 2 (binary), 4, 8 (octal), 16 (hexadecimal), or 32, such as base 10 (decimal), now raises a ValueError if the number of digits in string form is above a limit, to avoid potential denial-of-service attacks due to the algorithmic complexity. This is a mitigation for CVE-2020-10735.\nThis limit can be configured or disabled by environment variable, command line flag, or sys APIs. See the integer string conversion length limitation documentation. The default limit is 4300 digits in string form.\nNotable changes in 3.8.17\u00b6\ntarfile\u00b6\nThe extraction methods in tarfile, and shutil.unpack_archive(), have a new filter argument that allows limiting tar features that may be surprising or dangerous, such as creating files outside the destination directory. See Extraction filters for details. In Python 3.12, use without the filter argument will show a DeprecationWarning. In Python 3.14, the default will switch to 'data'.
(Contributed by Petr Viktorin in PEP 706.)", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 19241}
{"url": "https://docs.python.org/3/library/email.policy.html", "title": "email.policy: Policy Objects", "content": "email.policy: Policy Objects\u00b6\nAdded in version 3.3.\nSource code: Lib/email/policy.py\nThe email package\u2019s prime focus is the handling of email messages as described by the various email and MIME RFCs. However, the general format of email messages (a block of header fields each consisting of a name followed by a colon followed by a value, the whole block followed by a blank line and an arbitrary \u2018body\u2019) is a format that has found utility outside of the realm of email. Some of these uses conform fairly closely to the main email RFCs, some do not.
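That header-block-plus-body shape can be seen directly with the package's own parser; a small sketch (the message text here is invented for illustration):

```python
from email import message_from_string, policy

# Header fields (name, colon, value), a blank line, then the body.
raw = (
    "Subject: Hello\n"
    "To: alice@example.com\n"
    "\n"
    "Just the body text.\n"
)

# Parsing with policy.default yields an EmailMessage with typed headers.
msg = message_from_string(raw, policy=policy.default)
assert msg["Subject"] == "Hello"
assert msg.get_content().strip() == "Just the body text."
```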
Even when working with email, there are times when it is desirable to\nbreak strict compliance with the RFCs, such as generating emails that\ninteroperate with email servers that do not themselves follow the standards, or\nthat implement extensions you want to use in ways that violate the\nstandards.\nPolicy objects give the email package the flexibility to handle all these disparate use cases.\nA Policy\nobject encapsulates a set of attributes and methods that\ncontrol the behavior of various components of the email package during use.\nPolicy\ninstances can be passed to various classes and methods in the\nemail package to alter the default behavior. The settable values and their\ndefaults are described below.\nThere is a default policy used by all classes in the email package. For all of\nthe parser\nclasses and the related convenience functions, and for\nthe Message\nclass, this is the Compat32\npolicy, via its corresponding pre-defined instance compat32\n. This\npolicy provides for complete backward compatibility (in some cases, including\nbug compatibility) with the pre-Python3.3 version of the email package.\nThis default value for the policy keyword to\nEmailMessage\nis the EmailPolicy\npolicy, via\nits pre-defined instance default\n.\nWhen a Message\nor EmailMessage\nobject is created, it acquires a policy. If the message is created by a\nparser\n, a policy passed to the parser will be the policy used by\nthe message it creates. If the message is created by the program, then the\npolicy can be specified when it is created. When a message is passed to a\ngenerator\n, the generator uses the policy from the message by\ndefault, but you can also pass a specific policy to the generator that will\noverride the one stored on the message object.\nThe default value for the policy keyword for the email.parser\nclasses\nand the parser convenience functions will be changing in a future version of\nPython. 
Therefore you should always specify explicitly which policy you want\nto use when calling any of the classes and functions described in the\nparser\nmodule.\nThe first part of this documentation covers the features of Policy\n, an\nabstract base class that defines the features that are common to all\npolicy objects, including compat32\n. This includes certain hook\nmethods that are called internally by the email package, which a custom policy\ncould override to obtain different behavior. The second part describes the\nconcrete classes EmailPolicy\nand Compat32\n, which implement\nthe hooks that provide the standard behavior and the backward compatible\nbehavior and features, respectively.\nPolicy\ninstances are immutable, but they can be cloned, accepting the\nsame keyword arguments as the class constructor and returning a new\nPolicy\ninstance that is a copy of the original but with the specified\nattributes values changed.\nAs an example, the following code could be used to read an email message from a\nfile on disk and pass it to the system sendmail\nprogram on a Unix system:\n>>> from email import message_from_binary_file\n>>> from email.generator import BytesGenerator\n>>> from email import policy\n>>> from subprocess import Popen, PIPE\n>>> with open('mymsg.txt', 'rb') as f:\n... msg = message_from_binary_file(f, policy=policy.default)\n...\n>>> p = Popen(['sendmail', msg['To'].addresses[0]], stdin=PIPE)\n>>> g = BytesGenerator(p.stdin, policy=msg.policy.clone(linesep='\\r\\n'))\n>>> g.flatten(msg)\n>>> p.stdin.close()\n>>> rc = p.wait()\nHere we are telling BytesGenerator\nto use the RFC\ncorrect line separator characters when creating the binary string to feed into\nsendmail's\nstdin\n, where the default policy would use \\n\nline\nseparators.\nSome email package methods accept a policy keyword argument, allowing the\npolicy to be overridden for that method. 
For example, the following code uses\nthe as_bytes()\nmethod of the msg object from\nthe previous example and writes the message to a file using the native line\nseparators for the platform on which it is running:\n>>> import os\n>>> with open('converted.txt', 'wb') as f:\n... f.write(msg.as_bytes(policy=msg.policy.clone(linesep=os.linesep)))\n17\nPolicy objects can also be combined using the addition operator, producing a policy object whose settings are a combination of the non-default values of the summed objects:\n>>> compat_SMTP = policy.compat32.clone(linesep='\\r\\n')\n>>> compat_strict = policy.compat32.clone(raise_on_defect=True)\n>>> compat_strict_SMTP = compat_SMTP + compat_strict\nThis operation is not commutative; that is, the order in which the objects are added matters. To illustrate:\n>>> policy100 = policy.compat32.clone(max_line_length=100)\n>>> policy80 = policy.compat32.clone(max_line_length=80)\n>>> apolicy = policy100 + policy80\n>>> apolicy.max_line_length\n80\n>>> apolicy = policy80 + policy100\n>>> apolicy.max_line_length\n100\n- class email.policy.Policy(**kw)\u00b6\nThis is the abstract base class for all policy classes. It provides default implementations for a couple of trivial methods, as well as the implementation of the immutability property, the\nclone()\nmethod, and the constructor semantics.The constructor of a policy class can be passed various keyword arguments. The arguments that may be specified are any non-method properties on this class, plus any additional non-method properties on the concrete class. A value specified in the constructor will override the default value for the corresponding attribute.\nThis class defines the following properties, and thus values for the following may be passed in the constructor of any policy class:\n- max_line_length\u00b6\nThe maximum length of any line in the serialized output, not counting the end of line character(s). Default is 78, per RFC 5322. 
A value of\n0\norNone\nindicates that no line wrapping should be done at all.\n- linesep\u00b6\nThe string to be used to terminate lines in serialized output. The default is\n\\n\nbecause that\u2019s the internal end-of-line discipline used by Python, though\\r\\n\nis required by the RFCs.\n- cte_type\u00b6\nControls the type of Content Transfer Encodings that may be or are required to be used. The possible values are:\n7bit\nall data must be \u201c7 bit clean\u201d (ASCII-only). This means that where necessary data will be encoded using either quoted-printable or base64 encoding.\n8bit\ndata is not constrained to be 7 bit clean. Data in headers is still required to be ASCII-only and so will be encoded (see\nfold_binary()\nandutf8\nbelow for exceptions), but body parts may use the8bit\nCTE.A\ncte_type\nvalue of8bit\nonly works withBytesGenerator\n, notGenerator\n, because strings cannot contain binary data. If aGenerator\nis operating under a policy that specifiescte_type=8bit\n, it will act as ifcte_type\nis7bit\n.\n- raise_on_defect\u00b6\nIf\nTrue\n, any defects encountered will be raised as errors. IfFalse\n(the default), defects will be passed to theregister_defect()\nmethod.\n- mangle_from_\u00b6\nIf\nTrue\n, lines starting with \u201cFrom \u201c in the body are escaped by putting a>\nin front of them. This parameter is used when the message is being serialized by a generator. Default:False\n.Added in version 3.5.\n- message_factory\u00b6\nA factory function for constructing a new empty message object. Used by the parser when building messages. Defaults to\nNone\n, in which caseMessage\nis used.Added in version 3.6.\n- verify_generated_headers\u00b6\nIf\nTrue\n(the default), the generator will raiseHeaderWriteError\ninstead of writing a header that is improperly folded or delimited, such that it would be parsed as multiple headers or joined with adjacent data. 
Such headers can be generated by custom header classes or bugs in theemail\nmodule.As it\u2019s a security feature, this defaults to\nTrue\neven in theCompat32\npolicy. For backwards compatible, but unsafe, behavior, it must be set toFalse\nexplicitly.Added in version 3.13.\nThe following\nPolicy\nmethod is intended to be called by code using the email library to create policy instances with custom settings:- clone(**kw)\u00b6\nReturn a new\nPolicy\ninstance whose attributes have the same values as the current instance, except where those attributes are given new values by the keyword arguments.\nThe remaining\nPolicy\nmethods are called by the email package code, and are not intended to be called by an application using the email package. A custom policy must implement all of these methods.- handle_defect(obj, defect)\u00b6\nHandle a defect found on obj. When the email package calls this method, defect will always be a subclass of\nMessageDefect\n.The default implementation checks the\nraise_on_defect\nflag. If it isTrue\n, defect is raised as an exception. If it isFalse\n(the default), obj and defect are passed toregister_defect()\n.\n- register_defect(obj, defect)\u00b6\nRegister a defect on obj. In the email package, defect will always be a subclass of\nMessageDefect\n.The default implementation calls the\nappend\nmethod of thedefects\nattribute of obj. When the email package callshandle_defect\n, obj will normally have adefects\nattribute that has anappend\nmethod. Custom object types used with the email package (for example, customMessage\nobjects) should also provide such an attribute, otherwise defects in parsed messages will raise unexpected errors.\n- header_max_count(name)\u00b6\nReturn the maximum allowed number of headers named name.\nCalled when a header is added to an\nEmailMessage\norMessage\nobject. 
If the returned value is not0\norNone\n, and there are already a number of headers with the name name greater than or equal to the value returned, aValueError\nis raised.Because the default behavior of\nMessage.__setitem__\nis to append the value to the list of headers, it is easy to create duplicate headers without realizing it. This method allows certain headers to be limited in the number of instances of that header that may be added to aMessage\nprogrammatically. (The limit is not observed by the parser, which will faithfully produce as many headers as exist in the message being parsed.)The default implementation returns\nNone\nfor all header names.\n- header_source_parse(sourcelines)\u00b6\nThe email package calls this method with a list of strings, each string ending with the line separation characters found in the source being parsed. The first line includes the field header name and separator. All whitespace in the source is preserved. The method should return the\n(name, value)\ntuple that is to be stored in theMessage\nto represent the parsed header.If an implementation wishes to retain compatibility with the existing email package policies, name should be the case preserved name (all characters up to the \u2018\n:\n\u2019 separator), while value should be the unfolded value (all line separator characters removed, but whitespace kept intact), stripped of leading whitespace.sourcelines may contain surrogateescaped binary data.\nThere is no default implementation\n- header_store_parse(name, value)\u00b6\nThe email package calls this method with the name and value provided by the application program when the application program is modifying a\nMessage\nprogrammatically (as opposed to aMessage\ncreated by a parser). 
The method should return the(name, value)\ntuple that is to be stored in theMessage\nto represent the header.If an implementation wishes to retain compatibility with the existing email package policies, the name and value should be strings or string subclasses that do not change the content of the passed in arguments.\nThere is no default implementation\n- header_fetch_parse(name, value)\u00b6\nThe email package calls this method with the name and value currently stored in the\nMessage\nwhen that header is requested by the application program, and whatever the method returns is what is passed back to the application as the value of the header being retrieved. Note that there may be more than one header with the same name stored in theMessage\n; the method is passed the specific name and value of the header destined to be returned to the application.value may contain surrogateescaped binary data. There should be no surrogateescaped binary data in the value returned by the method.\nThere is no default implementation\n- fold(name, value)\u00b6\nThe email package calls this method with the name and value currently stored in the\nMessage\nfor a given header. The method should return a string that represents that header \u201cfolded\u201d correctly (according to the policy settings) by composing the name with the value and insertinglinesep\ncharacters at the appropriate places. See RFC 5322 for a discussion of the rules for folding email headers.value may contain surrogateescaped binary data. There should be no surrogateescaped binary data in the string returned by the method.\n- class email.policy.EmailPolicy(**kw)\u00b6\nThis concrete\nPolicy\nprovides behavior that is intended to be fully compliant with the current email RFCs. These include (but are not limited to) RFC 5322, RFC 2047, and the current MIME RFCs.This policy adds new header parsing and folding algorithms. 
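The fold() hook described above can be exercised directly on a policy instance. A minimal sketch, assuming a short all-ASCII value that needs no wrapping, so each policy simply composes the name and value and appends its linesep:

```python
from email import policy

# An RFC-compliant policy terminates the folded header with '\r\n' ...
print(repr(policy.SMTP.fold('Subject', 'hello world')))

# ... while the backward-compatible policy uses '\n'.
print(repr(policy.compat32.fold('Subject', 'hello world')))
```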
Instead of simple strings, headers are\nstr\nsubclasses with attributes that depend on the type of the field. The parsing and folding algorithms fully implement RFC 2047 and RFC 5322. The default value for the\nmessage_factory\nattribute is\nEmailMessage\n. In addition to the settable attributes listed above that apply to all policies, this policy adds the following additional attributes:\nAdded in version 3.6: [1]\n- utf8\u00b6\nIf\nFalse\n, follow RFC 5322, supporting non-ASCII characters in headers by encoding them as \u201cencoded words\u201d. If\nTrue\n, follow RFC 6532 and use\nutf-8\nencoding for headers. Messages formatted in this way may be passed to SMTP servers that support the\nSMTPUTF8\nextension (RFC 6531).\n- refold_source\u00b6\nIf the value for a header in the\nMessage\nobject originated from a\nparser\n(as opposed to being set by a program), this attribute indicates whether or not a generator should refold that value when transforming the message back into serialized form. The possible values are:\nnone\nall source values use original folding\nlong\nsource values that have any line that is longer than\nmax_line_length\nwill be refolded\nall\nall values are refolded.\nThe default is\nlong\n.\n- header_factory\u00b6\nA callable that takes two arguments,\nname\nand\nvalue\n, where\nname\nis a header field name and\nvalue\nis an unfolded header field value, and returns a string subclass that represents that header. A default\nheader_factory\n(see\nheaderregistry\n) is provided that supports custom parsing for the various address and date RFC 5322 header field types, and the major MIME header field types. Support for additional custom parsing will be added in the future.\n- content_manager\u00b6\nAn object with at least two methods: get_content and set_content. 
When the\nget_content()\nor\nset_content()\nmethod of an\nEmailMessage\nobject is called, it calls the corresponding method of this object, passing it the message object as its first argument, and any arguments or keywords that were passed to it as additional arguments. By default\ncontent_manager\nis set to\nraw_data_manager\n. Added in version 3.4.\nThe class provides the following concrete implementations of the abstract methods of\nPolicy\n:\n- header_max_count(name)\u00b6\nReturns the value of the\nmax_count\nattribute of the specialized class used to represent the header with the given name.\n- header_source_parse(sourcelines)\u00b6\nThe name is parsed as everything up to the \u2018\n:\n\u2019 and returned unmodified. The value is determined by stripping leading whitespace off the remainder of the first line, joining all subsequent lines together, and stripping any trailing carriage return or linefeed characters.\n- header_store_parse(name, value)\u00b6\nThe name is returned unchanged. If the input value has a\nname\nattribute and it matches name ignoring case, the value is returned unchanged. Otherwise the name and value are passed to\nheader_factory\n, and the resulting header object is returned as the value. In this case a\nValueError\nis raised if the input value contains CR or LF characters.\n- header_fetch_parse(name, value)\u00b6\nIf the value has a\nname\nattribute, it is returned unmodified. Otherwise the name, and the value with any CR or LF characters removed, are passed to the\nheader_factory\n, and the resulting header object is returned. Any surrogateescaped bytes get turned into the unicode unknown-character glyph.\n- fold(name, value)\u00b6\nHeader folding is controlled by the\nrefold_source\npolicy setting. A value is considered to be a \u2018source value\u2019 if and only if it does not have a\nname\nattribute (having a\nname\nattribute means it is a header object of some sort). 
If a source value needs to be refolded according to the policy, it is converted into a header object by passing the name and the value with any CR and LF characters removed to theheader_factory\n. Folding of a header object is done by calling itsfold\nmethod with the current policy.Source values are split into lines using\nsplitlines()\n. If the value is not to be refolded, the lines are rejoined using thelinesep\nfrom the policy and returned. The exception is lines containing non-ascii binary data. In that case the value is refolded regardless of therefold_source\nsetting, which causes the binary data to be CTE encoded using theunknown-8bit\ncharset.\n- fold_binary(name, value)\u00b6\nThe same as\nfold()\nifcte_type\nis7bit\n, except that the returned value is bytes.If\ncte_type\nis8bit\n, non-ASCII binary data is converted back into bytes. Headers with binary data are not refolded, regardless of therefold_header\nsetting, since there is no way to know whether the binary data consists of single byte characters or multibyte characters.\nThe following instances of EmailPolicy\nprovide defaults suitable for\nspecific application domains. Note that in the future the behavior of these\ninstances (in particular the HTTP\ninstance) may be adjusted to conform even\nmore closely to the RFCs relevant to their domains.\n- email.policy.default\u00b6\nAn instance of\nEmailPolicy\nwith all defaults unchanged. This policy uses the standard Python\\n\nline endings rather than the RFC-correct\\r\\n\n.\n- email.policy.SMTP\u00b6\nSuitable for serializing messages in conformance with the email RFCs. Like\ndefault\n, but withlinesep\nset to\\r\\n\n, which is RFC compliant.\n- email.policy.SMTPUTF8\u00b6\nThe same as\nSMTP\nexcept thatutf8\nisTrue\n. Useful for serializing messages to a message store without using encoded words in the headers. 
Should only be used for SMTP transmission if the sender or recipient addresses have non-ASCII characters (the\nsmtplib.SMTP.send_message()\nmethod handles this automatically).\n- email.policy.HTTP\u00b6\nSuitable for serializing headers for use in HTTP traffic. Like\nSMTP\nexcept that\nmax_line_length\nis set to\nNone\n(unlimited).\n- email.policy.strict\u00b6\nConvenience instance. The same as\ndefault\nexcept that\nraise_on_defect\nis set to\nTrue\n. This allows any policy to be made strict by writing:\nsomepolicy + policy.strict\nWith all of these\nEmailPolicies\n, the effective API of\nthe email package is changed from the Python 3.2 API in the following ways:\nSetting a header on a\nMessage\nresults in that header being parsed and a header object created. Fetching a header value from a\nMessage\nresults in that header being parsed and a header object created and returned. Any header object, or any header that is refolded due to the policy settings, is folded using an algorithm that fully implements the RFC folding algorithms, including knowing where encoded words are required and allowed.\nFrom the application view, this means that any header obtained through the\nEmailMessage\nis a header object with extra\nattributes, whose string value is the fully decoded unicode value of the\nheader. Likewise, a header may be assigned a new value, or a new header\ncreated, using a unicode string, and the policy will take care of converting\nthe unicode string into the correct RFC encoded form.\nThe header objects and their attributes are described in\nheaderregistry\n.\n- class email.policy.Compat32(**kw)\u00b6\nThis concrete\nPolicy\nis the backward compatibility policy. It replicates the behavior of the email package in Python 3.2. The\npolicy\nmodule also defines an instance of this class,\ncompat32\n, that is used as the default policy. 
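The header-object behavior just listed can be sketched; the name and address below are hypothetical examples:

```python
from email import message_from_string, policy

src = 'To: Fred Flintstone <fred@example.com>\n\nHello\n'
msg = message_from_string(src, policy=policy.default)

# Fetching the header yields a str subclass with extra attributes.
to = msg['To']
print(to.addresses[0].display_name)  # the parsed display name
print(to.addresses[0].addr_spec)     # the bare address
print(str(to))                       # the fully decoded unicode value
```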
Thus the default behavior of the email package is to maintain compatibility with Python 3.2.The following attributes have values that are different from the\nPolicy\ndefault:- mangle_from_\u00b6\nThe default is\nTrue\n.\nThe class provides the following concrete implementations of the abstract methods of\nPolicy\n:- header_source_parse(sourcelines)\u00b6\nThe name is parsed as everything up to the \u2018\n:\n\u2019 and returned unmodified. The value is determined by stripping leading whitespace off the remainder of the first line, joining all subsequent lines together, and stripping any trailing carriage return or linefeed characters.\n- header_store_parse(name, value)\u00b6\nThe name and value are returned unmodified.\n- header_fetch_parse(name, value)\u00b6\nIf the value contains binary data, it is converted into a\nHeader\nobject using theunknown-8bit\ncharset. Otherwise it is returned unmodified.\n- fold(name, value)\u00b6\nHeaders are folded using the\nHeader\nfolding algorithm, which preserves existing line breaks in the value, and wraps each resulting line to themax_line_length\n. Non-ASCII binary data are CTE encoded using theunknown-8bit\ncharset.\n- fold_binary(name, value)\u00b6\nHeaders are folded using the\nHeader\nfolding algorithm, which preserves existing line breaks in the value, and wraps each resulting line to themax_line_length\n. Ifcte_type\nis7bit\n, non-ascii binary data is CTE encoded using theunknown-8bit\ncharset. 
Otherwise the original source header is used, with its existing line breaks and any (RFC invalid) binary data it may contain.\n- email.policy.compat32\u00b6\nAn instance of\nCompat32\n, providing backward compatibility with the behavior of the email package in Python 3.2.Note\nThe\ncompat32\npolicy should not be used as a policy forEmailMessage\nobjects, and should only be used to serialize messages that were created using thecompat32\npolicy.\nFootnotes", "code_snippets": ["\n", " ", " ", " ", " ", "\n", " ", "\n", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", "\n", "\n", "\n", " ", " ", " ", " ", "\n", "\n", "\n", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 5583} +{"url": "https://docs.python.org/3/using/android.html", "title": "Using Python on Android", "content": "6. Using Python on Android\u00b6\nPython on Android is unlike Python on desktop platforms. On a desktop platform, Python is generally installed as a system resource that can be used by any user of that computer. Users then interact with Python by running a python executable and entering commands at an interactive prompt, or by running a Python script.\nOn Android, there is no concept of installing as a system resource. The only unit of software distribution is an \u201capp\u201d. There is also no console where you could run a python executable, or interact with a Python REPL.\nAs a result, the only way you can use Python on Android is in embedded mode \u2013 that\nis, by writing a native Android application, embedding a Python interpreter\nusing libpython\n, and invoking Python code using the Python embedding\nAPI. The full Python interpreter, the standard library, and all\nyour Python code is then packaged into your app for its own private use.\nThe Python standard library has some notable omissions and restrictions on Android. See the API availability guide for details.\n6.1. 
Adding Python to an Android app\u00b6\nMost app developers should use one of the following tools, which will provide a much easier experience:\nIf you\u2019re sure you want to do all of this manually, read on. You can use the testbed app as a guide; each step below contains a link to the relevant file.\nFirst, acquire a build of Python for Android:\nThe easiest way is to download an Android release from python.org. The\nprefix\ndirectory mentioned below is at the top level of the package.Or if you want to build it yourself, follow the instructions in Android/README.md. The\nprefix\ndirectory will be created undercross-build/HOST\n.\nAdd code to your build.gradle file to copy the following items into your project. All except your own Python code can be copied from\nprefix/lib\n:In your JNI libraries:\nlibpython*.*.so\nlib*_python.so\n(external libraries such as OpenSSL)\nIn your assets:\npython*.*\n(the Python standard library)python*.*/site-packages\n(your own Python code)\nAdd code to your app to extract the assets to the filesystem.\nAdd code to your app to start Python in embedded mode. This will need to be C code called via JNI.\n6.2. Building a Python package for Android\u00b6\nPython packages can be built for Android as wheels and released on PyPI. The recommended tool for doing this is cibuildwheel, which automates all the details of setting up a cross-compilation environment, building the wheel, and testing it on an emulator.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 621} +{"url": "https://docs.python.org/3/library/asyncio-queue.html", "title": "Queues", "content": "Queues\u00b6\nSource code: Lib/asyncio/queues.py\nasyncio queues are designed to be similar to classes of the\nqueue\nmodule. 
Although asyncio queues are not thread-safe,\nthey are designed to be used specifically in async/await code.\nNote that methods of asyncio queues don\u2019t have a timeout parameter;\nuse asyncio.wait_for()\nfunction to do queue operations with a\ntimeout.\nSee also the Examples section below.\nQueue\u00b6\n- class asyncio.Queue(maxsize=0)\u00b6\nA first in, first out (FIFO) queue.\nIf maxsize is less than or equal to zero, the queue size is infinite. If it is an integer greater than\n0\n, thenawait put()\nblocks when the queue reaches maxsize until an item is removed byget()\n.Unlike the standard library threading\nqueue\n, the size of the queue is always known and can be returned by calling theqsize()\nmethod.Changed in version 3.10: Removed the loop parameter.\nThis class is not thread safe.\n- maxsize\u00b6\nNumber of items allowed in the queue.\n- empty()\u00b6\nReturn\nTrue\nif the queue is empty,False\notherwise.\n- full()\u00b6\nReturn\nTrue\nif there aremaxsize\nitems in the queue.If the queue was initialized with\nmaxsize=0\n(the default), thenfull()\nnever returnsTrue\n.\n- async get()\u00b6\nRemove and return an item from the queue. If queue is empty, wait until an item is available.\nRaises\nQueueShutDown\nif the queue has been shut down and is empty, or if the queue has been shut down immediately.\n- get_nowait()\u00b6\nReturn an item if one is immediately available, else raise\nQueueEmpty\n.\n- async join()\u00b6\nBlock until all items in the queue have been received and processed.\nThe count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer coroutine calls\ntask_done()\nto indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero,join()\nunblocks.\n- async put(item)\u00b6\nPut an item into the queue. 
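The size-related behavior described above (maxsize, full(), qsize(), and the non-blocking put_nowait()) can be sketched:

```python
import asyncio

async def main():
    q = asyncio.Queue(maxsize=2)
    q.put_nowait('a')
    q.put_nowait('b')
    print(q.full())        # maxsize reached, so True
    try:
        q.put_nowait('c')  # no free slot is immediately available
    except asyncio.QueueFull:
        print('queue is full')
    print(await q.get())   # items come back in FIFO order: 'a'
    print(q.qsize())       # one item remains

asyncio.run(main())
```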
If the queue is full, wait until a free slot is available before adding the item.\nRaises\nQueueShutDown\nif the queue has been shut down.\n- put_nowait(item)\u00b6\nPut an item into the queue without blocking.\nIf no free slot is immediately available, raise\nQueueFull\n.\n- qsize()\u00b6\nReturn the number of items in the queue.\n- shutdown(immediate=False)\u00b6\nPut a\nQueue\ninstance into a shutdown mode.The queue can no longer grow. Future calls to\nput()\nraiseQueueShutDown\n. Currently blocked callers ofput()\nwill be unblocked and will raiseQueueShutDown\nin the formerly awaiting task.If immediate is false (the default), the queue can be wound down normally with\nget()\ncalls to extract tasks that have already been loaded.And if\ntask_done()\nis called for each remaining task, a pendingjoin()\nwill be unblocked normally.Once the queue is empty, future calls to\nget()\nwill raiseQueueShutDown\n.If immediate is true, the queue is terminated immediately. The queue is drained to be completely empty and the count of unfinished tasks is reduced by the number of tasks drained. If unfinished tasks is zero, callers of\njoin()\nare unblocked. Also, blocked callers ofget()\nare unblocked and will raiseQueueShutDown\nbecause the queue is empty.Use caution when using\njoin()\nwith immediate set to true. This unblocks the join even when no work has been done on the tasks, violating the usual invariant for joining a queue.Added in version 3.13.\n- task_done()\u00b6\nIndicate that a formerly enqueued work item is complete.\nUsed by queue consumers. 
For each\nget()\nused to fetch a work item, a subsequent call totask_done()\ntells the queue that the processing on the work item is complete.If a\njoin()\nis currently blocking, it will resume when all items have been processed (meaning that atask_done()\ncall was received for every item that had beenput()\ninto the queue).Raises\nValueError\nif called more times than there were items placed in the queue.\nPriority Queue\u00b6\nLIFO Queue\u00b6\nExceptions\u00b6\n- exception asyncio.QueueEmpty\u00b6\nThis exception is raised when the\nget_nowait()\nmethod is called on an empty queue.\n- exception asyncio.QueueFull\u00b6\nException raised when the\nput_nowait()\nmethod is called on a queue that has reached its maxsize.\nExamples\u00b6\nQueues can be used to distribute workload between several concurrent tasks:\nimport asyncio\nimport random\nimport time\nasync def worker(name, queue):\nwhile True:\n# Get a \"work item\" out of the queue.\nsleep_for = await queue.get()\n# Sleep for the \"sleep_for\" seconds.\nawait asyncio.sleep(sleep_for)\n# Notify the queue that the \"work item\" has been processed.\nqueue.task_done()\nprint(f'{name} has slept for {sleep_for:.2f} seconds')\nasync def main():\n# Create a queue that we will use to store our \"workload\".\nqueue = asyncio.Queue()\n# Generate random timings and put them into the queue.\ntotal_sleep_time = 0\nfor _ in range(20):\nsleep_for = random.uniform(0.05, 1.0)\ntotal_sleep_time += sleep_for\nqueue.put_nowait(sleep_for)\n# Create three worker tasks to process the queue concurrently.\ntasks = []\nfor i in range(3):\ntask = asyncio.create_task(worker(f'worker-{i}', queue))\ntasks.append(task)\n# Wait until the queue is fully processed.\nstarted_at = time.monotonic()\nawait queue.join()\ntotal_slept_for = time.monotonic() - started_at\n# Cancel our worker tasks.\nfor task in tasks:\ntask.cancel()\n# Wait until all worker tasks are cancelled.\nawait asyncio.gather(*tasks, 
return_exceptions=True)\nprint('====')\nprint(f'3 workers slept in parallel for {total_slept_for:.2f} seconds')\nprint(f'total expected sleep time: {total_sleep_time:.2f} seconds')\nasyncio.run(main())", "code_snippets": ["\n", "\n", "\n\n\n", " ", " ", "\n ", " ", "\n ", "\n ", " ", " ", " ", "\n\n ", "\n ", " ", "\n\n ", "\n ", "\n\n ", "\n\n\n", " ", "\n ", "\n ", " ", " ", "\n\n ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", "\n ", "\n\n ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", "\n\n ", "\n ", " ", " ", "\n ", " ", "\n ", " ", " ", " ", " ", "\n\n ", "\n ", " ", " ", " ", "\n ", "\n ", "\n ", " ", " ", "\n\n ", "\n ", "\n ", "\n\n\n", "\n"], "language": "Python", "source": "python.org", "token_count": 1362} +{"url": "https://docs.python.org/3/using/unix.html", "title": "Using Python on Unix platforms", "content": "2. Using Python on Unix platforms\u00b6\n2.1. Getting and installing the latest version of Python\u00b6\n2.1.1. On Linux\u00b6\nPython comes preinstalled on most Linux distributions, and is available as a package on all others. However there are certain features you might want to use that are not available on your distro\u2019s package. You can compile the latest version of Python from source.\nIn the event that the latest version of Python doesn\u2019t come preinstalled and isn\u2019t in the repositories as well, you can make packages for your own distro. Have a look at the following links:\nSee also\n- https://www.debian.org/doc/manuals/maint-guide/first.en.html\nfor Debian users\n- https://en.opensuse.org/Portal:Packaging\nfor OpenSuse users\n- https://docs.fedoraproject.org/en-US/package-maintainers/Packaging_Tutorial_GNU_Hello/\nfor Fedora users\n- https://slackbook.org/html/package-management-making-packages.html\nfor Slackware users\n2.1.1.1. 
Installing IDLE\u00b6\nIn some cases, IDLE might not be included in your Python installation.\nFor Debian and Ubuntu users:\nsudo apt update sudo apt install idle\nFor Fedora, RHEL, and CentOS users:\nsudo dnf install python3-idle\nFor SUSE and OpenSUSE users:\nsudo zypper install python3-idle\nFor Alpine Linux users:\nsudo apk add python3-idle\n2.1.2. On FreeBSD and OpenBSD\u00b6\nFreeBSD users, to add the package use:\npkg install python3\nOpenBSD users, to add the package use:\npkg_add -r python pkg_add ftp://ftp.openbsd.org/pub/OpenBSD/4.2/packages//python-.tgz\nFor example i386 users get the 2.5.1 version of Python using:\npkg_add ftp://ftp.openbsd.org/pub/OpenBSD/4.2/packages/i386/python-2.5.1p2.tgz\n2.2. Building Python\u00b6\nSee also\nIf you want to contribute to CPython, refer to the devguide, which includes build instructions and other tips on setting up environment.\nIf you want to compile CPython yourself, first thing you should do is get the source. You can download either the latest release\u2019s source or grab a fresh clone. You will also need to install the build requirements.\nThe build process consists of the usual commands:\n./configure\nmake\nmake install\nConfiguration options and caveats for specific Unix platforms are extensively documented in the README.rst file in the root of the Python source tree.\nWarning\nmake install\ncan overwrite or masquerade the python3\nbinary.\nmake altinstall\nis therefore recommended instead of make install\nsince it only installs exec_prefix/bin/pythonversion\n.\n2.4. Miscellaneous\u00b6\nTo easily use Python scripts on Unix, you need to make them executable, e.g. with\n$ chmod +x script\nand put an appropriate Shebang line at the top of the script. A good choice is usually\n#!/usr/bin/env python3\nwhich searches for the Python interpreter in the whole PATH\n. 
However,\nsome Unices may not have the env command, so you may need to hardcode\n/usr/bin/python3\nas the interpreter path.\nTo use shell commands in your Python scripts, look at the subprocess\nmodule.\n2.5. Custom OpenSSL\u00b6\nTo use your vendor\u2019s OpenSSL configuration and system trust store, locate the directory with\nopenssl.cnf\nfile or symlink in/etc\n. On most distribution the file is either in/etc/ssl\nor/etc/pki/tls\n. The directory should also contain acert.pem\nfile and/or acerts\ndirectory.$ find /etc/ -name openssl.cnf -printf \"%h\\n\" /etc/ssl\nDownload, build, and install OpenSSL. Make sure you use\ninstall_sw\nand notinstall\n. Theinstall_sw\ntarget does not overrideopenssl.cnf\n.$ curl -O https://www.openssl.org/source/openssl-VERSION.tar.gz $ tar xzf openssl-VERSION $ pushd openssl-VERSION $ ./config \\ --prefix=/usr/local/custom-openssl \\ --libdir=lib \\ --openssldir=/etc/ssl $ make -j1 depend $ make -j8 $ make install_sw $ popd\nBuild Python with custom OpenSSL (see the configure\n--with-openssl\nand--with-openssl-rpath\noptions)$ pushd python-3.x.x $ ./configure -C \\ --with-openssl=/usr/local/custom-openssl \\ --with-openssl-rpath=auto \\ --prefix=/usr/local/python-3.x.x $ make -j8 $ make altinstall\nNote\nPatch releases of OpenSSL have a backwards compatible ABI. You don\u2019t need to recompile Python to update OpenSSL. It\u2019s sufficient to replace the custom OpenSSL installation with a newer version.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1037} +{"url": "https://docs.python.org/3/library/getopt.html", "title": " \u2014 C-style parser for command line options", "content": "getopt\n\u2014 C-style parser for command line options\u00b6\nSource code: Lib/getopt.py\nNote\nThis module is considered feature complete. A more declarative and\nextensible alternative to this API is provided in the optparse\nmodule. 
Further functional enhancements for command line parameter\nprocessing are provided either as third party modules on PyPI,\nor else as features in the argparse\nmodule.\nThis module helps scripts to parse the command line arguments in sys.argv\n.\nIt supports the same conventions as the Unix getopt()\nfunction (including\nthe special meanings of arguments of the form \u2018-\n\u2019 and \u2018--\n\u2018). Long\noptions similar to those supported by GNU software may be used as well via an\noptional third argument.\nUsers who are unfamiliar with the Unix getopt()\nfunction should consider\nusing the argparse\nmodule instead. Users who are familiar with the Unix\ngetopt()\nfunction, but would like to get equivalent behavior while\nwriting less code and getting better help and error messages should consider\nusing the optparse\nmodule. See Choosing an argument parsing library for\nadditional details.\nThis module provides two functions and an exception:\n- getopt.getopt(args, shortopts, longopts=[])\u00b6\nParses command line options and parameter list. args is the argument list to be parsed, without the leading reference to the running program. Typically, this means\nsys.argv[1:]\n. shortopts is the string of option letters that the script wants to recognize, with options that require an argument followed by a colon (':'\n) and options that accept an optional argument followed by two colons ('::'\n); i.e., the same format that Unixgetopt()\nuses.Note\nUnlike GNU\ngetopt()\n, after a non-option argument, all further arguments are considered also non-options. This is similar to the way non-GNU Unix systems work.longopts, if specified, must be a list of strings with the names of the long options which should be supported. The leading\n'--'\ncharacters should not be included in the option name. Long options which require an argument should be followed by an equal sign ('='\n). 
Long options which accept an optional argument should be followed by an equal sign and question mark ('=?'\n). To accept only long options, shortopts should be an empty string. Long options on the command line can be recognized so long as they provide a prefix of the option name that matches exactly one of the accepted options. For example, if longopts is['foo', 'frob']\n, the option--fo\nwill match as--foo\n, but--f\nwill not match uniquely, soGetoptError\nwill be raised.The return value consists of two elements: the first is a list of\n(option, value)\npairs; the second is the list of program arguments left after the option list was stripped (this is a trailing slice of args). Each option-and-value pair returned has the option as its first element, prefixed with a hyphen for short options (e.g.,'-x'\n) or two hyphens for long options (e.g.,'--long-option'\n), and the option argument as its second element, or an empty string if the option has no argument. The options occur in the list in the same order in which they were found, thus allowing multiple occurrences. Long and short options may be mixed.Changed in version 3.14: Optional arguments are supported.\n- getopt.gnu_getopt(args, shortopts, longopts=[])\u00b6\nThis function works like\ngetopt()\n, except that GNU style scanning mode is used by default. This means that option and non-option arguments may be intermixed. Thegetopt()\nfunction stops processing options as soon as a non-option argument is encountered.If the first character of the option string is\n'+'\n, or if the environment variablePOSIXLY_CORRECT\nis set, then option processing stops as soon as a non-option argument is encountered.If the first character of the option string is\n'-'\n, non-option arguments that are followed by options are added to the list of option-and-value pairs as a pair that hasNone\nas its first element and the list of non-option arguments as its second element. 
The second element of thegnu_getopt()\nresult is a list of program arguments after the last option.Changed in version 3.14: Support for returning intermixed options and non-option arguments in order.\n- exception getopt.GetoptError\u00b6\nThis is raised when an unrecognized option is found in the argument list or when an option requiring an argument is given none. The argument to the exception is a string indicating the cause of the error. For long options, an argument given to an option which does not require one will also cause this exception to be raised. The attributes\nmsg\nandopt\ngive the error message and related option; if there is no specific option to which the exception relates,opt\nis an empty string.\n- exception getopt.error\u00b6\nAlias for\nGetoptError\n; for backward compatibility.\nAn example using only Unix style options:\n>>> import getopt\n>>> args = '-a -b -cfoo -d bar a1 a2'.split()\n>>> args\n['-a', '-b', '-cfoo', '-d', 'bar', 'a1', 'a2']\n>>> optlist, args = getopt.getopt(args, 'abc:d:')\n>>> optlist\n[('-a', ''), ('-b', ''), ('-c', 'foo'), ('-d', 'bar')]\n>>> args\n['a1', 'a2']\nUsing long option names is equally easy:\n>>> s = '--condition=foo --testing --output-file abc.def -x a1 a2'\n>>> args = s.split()\n>>> args\n['--condition=foo', '--testing', '--output-file', 'abc.def', '-x', 'a1', 'a2']\n>>> optlist, args = getopt.getopt(args, 'x', [\n... 
'condition=', 'output-file=', 'testing'])\n>>> optlist\n[('--condition', 'foo'), ('--testing', ''), ('--output-file', 'abc.def'), ('-x', '')]\n>>> args\n['a1', 'a2']\nOptional arguments should be specified explicitly:\n>>> s = '-Con -C --color=off --color a1 a2'\n>>> args = s.split()\n>>> args\n['-Con', '-C', '--color=off', '--color', 'a1', 'a2']\n>>> optlist, args = getopt.getopt(args, 'C::', ['color=?'])\n>>> optlist\n[('-C', 'on'), ('-C', ''), ('--color', 'off'), ('--color', '')]\n>>> args\n['a1', 'a2']\nThe order of options and non-option arguments can be preserved:\n>>> s = 'a1 -x a2 a3 a4 --long a5 a6'\n>>> args = s.split()\n>>> args\n['a1', '-x', 'a2', 'a3', 'a4', '--long', 'a5', 'a6']\n>>> optlist, args = getopt.gnu_getopt(args, '-x:', ['long='])\n>>> optlist\n[(None, ['a1']), ('-x', 'a2'), (None, ['a3', 'a4']), ('--long', 'a5')]\n>>> args\n['a6']\nIn a script, typical usage is something like this:\nimport getopt, sys\ndef main():\ntry:\nopts, args = getopt.getopt(sys.argv[1:], \"ho:v\", [\"help\", \"output=\"])\nexcept getopt.GetoptError as err:\n# print help information and exit:\nprint(err) # will print something like \"option -a not recognized\"\nusage()\nsys.exit(2)\noutput = None\nverbose = False\nfor o, a in opts:\nif o == \"-v\":\nverbose = True\nelif o in (\"-h\", \"--help\"):\nusage()\nsys.exit()\nelif o in (\"-o\", \"--output\"):\noutput = a\nelse:\nassert False, \"unhandled option\"\nprocess(args, output=output, verbose=verbose)\nif __name__ == \"__main__\":\nmain()\nNote that an equivalent command line interface could be produced with less code\nand more informative help and error messages by using the optparse\nmodule:\nimport optparse\nif __name__ == '__main__':\nparser = optparse.OptionParser()\nparser.add_option('-o', '--output')\nparser.add_option('-v', dest='verbose', action='store_true')\nopts, args = parser.parse_args()\nprocess(args, output=opts.output, verbose=opts.verbose)\nA roughly equivalent command line interface for this case 
can also be\nproduced by using the argparse\nmodule:\nimport argparse\nif __name__ == '__main__':\nparser = argparse.ArgumentParser()\nparser.add_argument('-o', '--output')\nparser.add_argument('-v', dest='verbose', action='store_true')\nparser.add_argument('rest', nargs='*')\nargs = parser.parse_args()\nprocess(args.rest, output=args.output, verbose=args.verbose)\nSee Choosing an argument parsing library for details on how the argparse\nversion of this code differs in behaviour from the optparse\n(and\ngetopt\n) version.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1939} +{"url": "https://docs.python.org/3/library/audioop.html", "title": " \u2014 Manipulate raw audio data", "content": "audioop\n\u2014 Manipulate raw audio data\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nThe last version of Python that provided the audioop\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 84} +{"url": "https://docs.python.org/3/library/pwd.html", "title": " \u2014 The password database", "content": "pwd\n\u2014 The password database\u00b6\nThis module provides access to the Unix user account and password database. It is available on all Unix versions.\nAvailability: Unix, not WASI, not iOS.\nPassword database entries are reported as a tuple-like object, whose attributes\ncorrespond to the members of the passwd\nstructure (Attribute field below,\nsee \n):\nIndex |\nAttribute |\nMeaning |\n|---|---|---|\n0 |\n|\nLogin name |\n1 |\n|\nOptional encrypted password |\n2 |\n|\nNumerical user ID |\n3 |\n|\nNumerical group ID |\n4 |\n|\nUser name or comment field |\n5 |\n|\nUser home directory |\n6 |\n|\nUser command interpreter |\nThe uid and gid items are integers, all others are strings. 
KeyError\nis\nraised if the entry asked for cannot be found.\nNote\nIn traditional Unix the field pw_passwd\nusually contains a password\nencrypted with a DES derived algorithm. However most\nmodern unices use a so-called shadow password system. On those unices the\npw_passwd field only contains an asterisk ('*'\n) or the letter 'x'\nwhere the encrypted password is stored in a file /etc/shadow\nwhich is\nnot world readable. Whether the pw_passwd field contains anything useful is\nsystem-dependent.\nIt defines the following items:\n- pwd.getpwuid(uid)\u00b6\nReturn the password database entry for the given numeric user ID.\n- pwd.getpwnam(name)\u00b6\nReturn the password database entry for the given user name.\n- pwd.getpwall()\u00b6\nReturn a list of all available password database entries, in arbitrary order.\nSee also\n- Module\ngrp\nAn interface to the group database, similar to this.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 380} +{"url": "https://docs.python.org/3/howto/ipaddress.html", "title": "An introduction to the ipaddress module", "content": "An introduction to the ipaddress module\u00b6\n- author:\nPeter Moody\n- author:\nNick Coghlan\nCreating Address/Network/Interface objects\u00b6\nSince ipaddress\nis a module for inspecting and manipulating IP addresses,\nthe first thing you\u2019ll want to do is create some objects. You can use\nipaddress\nto create objects from strings and integers.\nA Note on IP Versions\u00b6\nFor readers that aren\u2019t particularly familiar with IP addressing, it\u2019s important to know that the Internet Protocol (IP) is currently in the process of moving from version 4 of the protocol to version 6. 
This transition is occurring largely because version 4 of the protocol doesn\u2019t provide enough addresses to handle the needs of the whole world, especially given the increasing number of devices with direct connections to the internet.\nExplaining the details of the differences between the two versions of the protocol is beyond the scope of this introduction, but readers need to at least be aware that these two versions exist, and it will sometimes be necessary to force the use of one version or the other.\nIP Host Addresses\u00b6\nAddresses, often referred to as \u201chost addresses\u201d are the most basic unit\nwhen working with IP addressing. The simplest way to create addresses is\nto use the ipaddress.ip_address()\nfactory function, which automatically\ndetermines whether to create an IPv4 or IPv6 address based on the passed in\nvalue:\n>>> ipaddress.ip_address('192.0.2.1')\nIPv4Address('192.0.2.1')\n>>> ipaddress.ip_address('2001:DB8::1')\nIPv6Address('2001:db8::1')\nAddresses can also be created directly from integers. Values that will fit within 32 bits are assumed to be IPv4 addresses:\n>>> ipaddress.ip_address(3221225985)\nIPv4Address('192.0.2.1')\n>>> ipaddress.ip_address(42540766411282592856903984951653826561)\nIPv6Address('2001:db8::1')\nTo force the use of IPv4 or IPv6 addresses, the relevant classes can be invoked directly. This is particularly useful to force creation of IPv6 addresses for small integers:\n>>> ipaddress.ip_address(1)\nIPv4Address('0.0.0.1')\n>>> ipaddress.IPv4Address(1)\nIPv4Address('0.0.0.1')\n>>> ipaddress.IPv6Address(1)\nIPv6Address('::1')\nDefining Networks\u00b6\nHost addresses are usually grouped together into IP networks, so\nipaddress\nprovides a way to create, inspect and manipulate network\ndefinitions. IP network objects are constructed from strings that define the\nrange of host addresses that are part of that network. 
The simplest form\nfor that information is a \u201cnetwork address/network prefix\u201d pair, where the\nprefix defines the number of leading bits that are compared to determine\nwhether or not an address is part of the network and the network address\ndefines the expected value of those bits.\nAs for addresses, a factory function is provided that determines the correct IP version automatically:\n>>> ipaddress.ip_network('192.0.2.0/24')\nIPv4Network('192.0.2.0/24')\n>>> ipaddress.ip_network('2001:db8::0/96')\nIPv6Network('2001:db8::/96')\nNetwork objects cannot have any host bits set. The practical effect of this\nis that 192.0.2.1/24\ndoes not describe a network. Such definitions are\nreferred to as interface objects since the ip-on-a-network notation is\ncommonly used to describe network interfaces of a computer on a given network\nand are described further in the next section.\nBy default, attempting to create a network object with host bits set will\nresult in ValueError\nbeing raised. To request that the\nadditional bits instead be coerced to zero, the flag strict=False\ncan\nbe passed to the constructor:\n>>> ipaddress.ip_network('192.0.2.1/24')\nTraceback (most recent call last):\n...\nValueError: 192.0.2.1/24 has host bits set\n>>> ipaddress.ip_network('192.0.2.1/24', strict=False)\nIPv4Network('192.0.2.0/24')\nWhile the string form offers significantly more flexibility, networks can also be defined with integers, just like host addresses. 
In this case, the network is considered to contain only the single address identified by the integer, so the network prefix includes the entire network address:\n>>> ipaddress.ip_network(3221225984)\nIPv4Network('192.0.2.0/32')\n>>> ipaddress.ip_network(42540766411282592856903984951653826560)\nIPv6Network('2001:db8::/128')\nAs with addresses, creation of a particular kind of network can be forced by calling the class constructor directly instead of using the factory function.\nHost Interfaces\u00b6\nAs mentioned just above, if you need to describe an address on a particular\nnetwork, neither the address nor the network classes are sufficient.\nNotation like 192.0.2.1/24\nis commonly used by network engineers and the\npeople who write tools for firewalls and routers as shorthand for \u201cthe host\n192.0.2.1\non the network 192.0.2.0/24\n\u201d, Accordingly, ipaddress\nprovides a set of hybrid classes that associate an address with a particular\nnetwork. The interface for creation is identical to that for defining network\nobjects, except that the address portion isn\u2019t constrained to being a network\naddress.\n>>> ipaddress.ip_interface('192.0.2.1/24')\nIPv4Interface('192.0.2.1/24')\n>>> ipaddress.ip_interface('2001:db8::1/96')\nIPv6Interface('2001:db8::1/96')\nInteger inputs are accepted (as with networks), and use of a particular IP version can be forced by calling the relevant constructor directly.\nInspecting Address/Network/Interface Objects\u00b6\nYou\u2019ve gone to the trouble of creating an IPv(4|6)(Address|Network|Interface)\nobject, so you probably want to get information about it. 
ipaddress\ntries to make doing this easy and intuitive.\nExtracting the IP version:\n>>> addr4 = ipaddress.ip_address('192.0.2.1')\n>>> addr6 = ipaddress.ip_address('2001:db8::1')\n>>> addr6.version\n6\n>>> addr4.version\n4\nObtaining the network from an interface:\n>>> host4 = ipaddress.ip_interface('192.0.2.1/24')\n>>> host4.network\nIPv4Network('192.0.2.0/24')\n>>> host6 = ipaddress.ip_interface('2001:db8::1/96')\n>>> host6.network\nIPv6Network('2001:db8::/96')\nFinding out how many individual addresses are in a network:\n>>> net4 = ipaddress.ip_network('192.0.2.0/24')\n>>> net4.num_addresses\n256\n>>> net6 = ipaddress.ip_network('2001:db8::0/96')\n>>> net6.num_addresses\n4294967296\nIterating through the \u201cusable\u201d addresses on a network:\n>>> net4 = ipaddress.ip_network('192.0.2.0/24')\n>>> for x in net4.hosts():\n... print(x)\n192.0.2.1\n192.0.2.2\n192.0.2.3\n192.0.2.4\n...\n192.0.2.252\n192.0.2.253\n192.0.2.254\nObtaining the netmask (i.e. set bits corresponding to the network prefix) or the hostmask (any bits that are not part of the netmask):\n>>> net4 = ipaddress.ip_network('192.0.2.0/24')\n>>> net4.netmask\nIPv4Address('255.255.255.0')\n>>> net4.hostmask\nIPv4Address('0.0.0.255')\n>>> net6 = ipaddress.ip_network('2001:db8::0/96')\n>>> net6.netmask\nIPv6Address('ffff:ffff:ffff:ffff:ffff:ffff::')\n>>> net6.hostmask\nIPv6Address('::ffff:ffff')\nExploding or compressing the address:\n>>> addr6.exploded\n'2001:0db8:0000:0000:0000:0000:0000:0001'\n>>> addr6.compressed\n'2001:db8::1'\n>>> net6.exploded\n'2001:0db8:0000:0000:0000:0000:0000:0000/96'\n>>> net6.compressed\n'2001:db8::/96'\nWhile IPv4 doesn\u2019t support explosion or compression, the associated objects still provide the relevant properties so that version neutral code can easily ensure the most concise or most verbose form is used for IPv6 addresses while still correctly handling IPv4 addresses.\nNetworks as lists of Addresses\u00b6\nIt\u2019s sometimes useful to treat networks as lists. 
This means it is possible to index them like this:\n>>> net4[1]\nIPv4Address('192.0.2.1')\n>>> net4[-1]\nIPv4Address('192.0.2.255')\n>>> net6[1]\nIPv6Address('2001:db8::1')\n>>> net6[-1]\nIPv6Address('2001:db8::ffff:ffff')\nIt also means that network objects lend themselves to using the list membership test syntax like this:\nif address in network:\n# do something\nContainment testing is done efficiently based on the network prefix:\n>>> addr4 = ipaddress.ip_address('192.0.2.1')\n>>> addr4 in ipaddress.ip_network('192.0.2.0/24')\nTrue\n>>> addr4 in ipaddress.ip_network('192.0.3.0/24')\nFalse\nComparisons\u00b6\nipaddress\nprovides some simple, hopefully intuitive ways to compare\nobjects, where it makes sense:\n>>> ipaddress.ip_address('192.0.2.1') < ipaddress.ip_address('192.0.2.2')\nTrue\nA TypeError\nexception is raised if you try to compare objects of\ndifferent versions or different types.\nUsing IP Addresses with other modules\u00b6\nOther modules that use IP addresses (such as socket\n) usually won\u2019t\naccept objects from this module directly. Instead, they must be coerced to\nan integer or string that the other module will accept:\n>>> addr4 = ipaddress.ip_address('192.0.2.1')\n>>> str(addr4)\n'192.0.2.1'\n>>> int(addr4)\n3221225985\nGetting more detail when instance creation fails\u00b6\nWhen creating address/network/interface objects using the version-agnostic\nfactory functions, any errors will be reported as ValueError\nwith\na generic error message that simply says the passed in value was not\nrecognized as an object of that type. 
The lack of a specific error is\nbecause it\u2019s necessary to know whether the value is supposed to be IPv4\nor IPv6 in order to provide more detail on why it has been rejected.\nTo support use cases where it is useful to have access to this additional\ndetail, the individual class constructors actually raise the\nValueError\nsubclasses ipaddress.AddressValueError\nand\nipaddress.NetmaskValueError\nto indicate exactly which part of\nthe definition failed to parse correctly.\nThe error messages are significantly more detailed when using the class constructors directly. For example:\n>>> ipaddress.ip_address(\"192.168.0.256\")\nTraceback (most recent call last):\n...\nValueError: '192.168.0.256' does not appear to be an IPv4 or IPv6 address\n>>> ipaddress.IPv4Address(\"192.168.0.256\")\nTraceback (most recent call last):\n...\nipaddress.AddressValueError: Octet 256 (> 255) not permitted in '192.168.0.256'\n>>> ipaddress.ip_network(\"192.168.0.1/64\")\nTraceback (most recent call last):\n...\nValueError: '192.168.0.1/64' does not appear to be an IPv4 or IPv6 network\n>>> ipaddress.IPv4Network(\"192.168.0.1/64\")\nTraceback (most recent call last):\n...\nipaddress.NetmaskValueError: '64' is not a valid netmask\nHowever, both of the module specific exceptions have ValueError\nas their\nparent class, so if you\u2019re not concerned with the particular type of error,\nyou can still write code like the following:\ntry:\nnetwork = ipaddress.IPv4Network(address)\nexcept ValueError:\nprint('address/netmask is invalid for IPv4:', address)", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2615}
+{"url": "https://docs.python.org/3/library/superseded.html", "title": "Superseded Modules", "content": "Superseded Modules\u00b6\nThe modules described in this chapter have been superseded by other modules for most use cases, and are retained primarily to preserve backwards compatibility.\nModules may appear in this chapter because they only cover a limited subset of\na problem space, and a more generally applicable solution is available elsewhere\nin the standard library (for example, getopt\ncovers the very specific\ntask of \u201cmimic the C getopt()\nAPI in Python\u201d, rather than the broader\ncommand line option parsing and argument parsing capabilities offered by\noptparse\nand argparse\n).\nAlternatively, modules may appear in this chapter because they are deprecated outright, and awaiting removal in a future release, or they are soft deprecated and their use is actively discouraged in new projects. With the removal of various obsolete modules through PEP 594, there are currently no modules in this latter category.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 227}
+{"url": "https://docs.python.org/3/library/xdrlib.html", "title": " \u2014 Encode and decode XDR data", "content": "xdrlib\n\u2014 Encode and decode XDR data\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. 
The removal was decided in PEP 594.\nThe last version of Python that provided the xdrlib\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 84} +{"url": "https://docs.python.org/3/download.html", "title": "Download Python 3.14 documentation", "content": "Download Python 3.14 documentation\nLast updated on: Feb 18, 2026 (17:01 UTC).\nDownload an archive containing all the documentation for this version of Python:\n| Format | Packed as .zip | Packed as .tar.bz2 |\n|---|---|---|\n| HTML | Download | Download |\n| Plain text | Download | Download |\n| Texinfo | Download | Download |\n| EPUB | Download |\nWe no longer provide pre-built PDFs of the documentation.\nTo build a PDF archive, follow the instructions in the\nDeveloper's Guide\nand run make dist-pdf\nin the Doc/\ndirectory of a copy of the CPython repository.\nSee the directory listing for file sizes.\nProblems\nOpen an issue if you have comments or suggestions for the Python documentation.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 171} +{"url": "https://docs.python.org/3/tutorial/venv.html", "title": "Virtual Environments and Packages", "content": "12. Virtual Environments and Packages\u00b6\n12.1. Introduction\u00b6\nPython applications will often use packages and modules that don\u2019t come as part of the standard library. Applications will sometimes need a specific version of a library, because the application may require that a particular bug has been fixed or the application may be written using an obsolete version of the library\u2019s interface.\nThis means it may not be possible for one Python installation to meet the requirements of every application. 
If application A needs version 1.0 of a particular module but application B needs version 2.0, then the requirements are in conflict and installing either version 1.0 or 2.0 will leave one application unable to run.\nThe solution for this problem is to create a virtual environment, a self-contained directory tree that contains a Python installation for a particular version of Python, plus a number of additional packages.\nDifferent applications can then use different virtual environments. To resolve the earlier example of conflicting requirements, application A can have its own virtual environment with version 1.0 installed while application B has another virtual environment with version 2.0. If application B requires a library be upgraded to version 3.0, this will not affect application A\u2019s environment.\n12.2. Creating Virtual Environments\u00b6\nThe module used to create and manage virtual environments is called\nvenv\n. venv\nwill install the Python version from which\nthe command was run (as reported by the --version\noption).\nFor instance, executing the command with python3.12\nwill install\nversion 3.12.\nTo create a virtual environment, decide upon a directory where you want to\nplace it, and run the venv\nmodule as a script with the directory path:\npython -m venv tutorial-env\nThis will create the tutorial-env\ndirectory if it doesn\u2019t exist,\nand also create directories inside it containing a copy of the Python\ninterpreter and various supporting files.\nA common directory location for a virtual environment is .venv\n.\nThis name keeps the directory typically hidden in your shell and thus\nout of the way while giving it a name that explains why the directory\nexists. 
It also prevents clashing with .env\nenvironment variable\ndefinition files that some tooling supports.\nOnce you\u2019ve created a virtual environment, you may activate it.\nOn Windows, run:\ntutorial-env\\Scripts\\activate\nOn Unix or MacOS, run:\nsource tutorial-env/bin/activate\n(This script is written for the bash shell. If you use the\ncsh or fish shells, there are alternate\nactivate.csh\nand activate.fish\nscripts you should use\ninstead.)\nActivating the virtual environment will change your shell\u2019s prompt to show what\nvirtual environment you\u2019re using, and modify the environment so that running\npython\nwill get you that particular version and installation of Python.\nFor example:\n$ source ~/envs/tutorial-env/bin/activate\n(tutorial-env) $ python\nPython 3.5.1 (default, May 6 2016, 10:59:36)\n...\n>>> import sys\n>>> sys.path\n['', '/usr/local/lib/python35.zip', ...,\n'~/envs/tutorial-env/lib/python3.5/site-packages']\n>>>\nTo deactivate a virtual environment, type:\ndeactivate\ninto the terminal.\n12.3. Managing Packages with pip\u00b6\nYou can install, upgrade, and remove packages using a program called\npip. By default pip\nwill install packages from the Python\nPackage Index. You can browse the Python\nPackage Index by going to it in your web browser.\npip\nhas a number of subcommands: \u201cinstall\u201d, \u201cuninstall\u201d,\n\u201cfreeze\u201d, etc. 
(Consult the Installing Python Modules guide for\ncomplete documentation for pip\n.)\nYou can install the latest version of a package by specifying a package\u2019s name:\n(tutorial-env) $ python -m pip install novas\nCollecting novas\nDownloading novas-3.1.1.3.tar.gz (136kB)\nInstalling collected packages: novas\nRunning setup.py install for novas\nSuccessfully installed novas-3.1.1.3\nYou can also install a specific version of a package by giving the\npackage name followed by ==\nand the version number:\n(tutorial-env) $ python -m pip install requests==2.6.0\nCollecting requests==2.6.0\nUsing cached requests-2.6.0-py2.py3-none-any.whl\nInstalling collected packages: requests\nSuccessfully installed requests-2.6.0\nIf you re-run this command, pip\nwill notice that the requested\nversion is already installed and do nothing. You can supply a\ndifferent version number to get that version, or you can run python\n-m pip install --upgrade\nto upgrade the package to the latest version:\n(tutorial-env) $ python -m pip install --upgrade requests\nCollecting requests\nInstalling collected packages: requests\nFound existing installation: requests 2.6.0\nUninstalling requests-2.6.0:\nSuccessfully uninstalled requests-2.6.0\nSuccessfully installed requests-2.7.0\npython -m pip uninstall\nfollowed by one or more package names will\nremove the packages from the virtual environment.\npython -m pip show\nwill display information about a particular package:\n(tutorial-env) $ python -m pip show requests\n---\nMetadata-Version: 2.0\nName: requests\nVersion: 2.7.0\nSummary: Python HTTP for Humans.\nHome-page: http://python-requests.org\nAuthor: Kenneth Reitz\nAuthor-email: me@kennethreitz.com\nLicense: Apache 2.0\nLocation: /Users/akuchling/envs/tutorial-env/lib/python3.4/site-packages\nRequires:\npython -m pip list\nwill display all of the packages installed in\nthe virtual environment:\n(tutorial-env) $ python -m pip list\nnovas (3.1.1.3)\nnumpy (1.9.2)\npip (7.0.3)\nrequests 
(2.7.0)\nsetuptools (16.0)\npython -m pip freeze\nwill produce a similar list of the installed packages,\nbut the output uses the format that python -m pip install\nexpects.\nA common convention is to put this list in a requirements.txt\nfile:\n(tutorial-env) $ python -m pip freeze > requirements.txt\n(tutorial-env) $ cat requirements.txt\nnovas==3.1.1.3\nnumpy==1.9.2\nrequests==2.7.0\nThe requirements.txt\ncan then be committed to version control and\nshipped as part of an application. Users can then install all the\nnecessary packages with install -r\n:\n(tutorial-env) $ python -m pip install -r requirements.txt\nCollecting novas==3.1.1.3 (from -r requirements.txt (line 1))\n...\nCollecting numpy==1.9.2 (from -r requirements.txt (line 2))\n...\nCollecting requests==2.7.0 (from -r requirements.txt (line 3))\n...\nInstalling collected packages: novas, numpy, requests\nRunning setup.py install for novas\nSuccessfully installed novas-3.1.1.3 numpy-1.9.2 requests-2.7.0\npip\nhas many more options. Consult the Installing Python Modules\nguide for complete documentation for pip\n. When you\u2019ve written\na package and want to make it available on the Python Package Index,\nconsult the Python packaging user guide.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1653}
+{"url": "https://docs.python.org/3/library/removed.html", "title": "Removed Modules", "content": "Removed Modules\u00b6\nThe modules described in this chapter have been removed from the Python standard library. 
They are documented here to help people find replacements.\naifc\n\u2014 Read and write AIFF and AIFC filesasynchat\n\u2014 Asynchronous socket command/response handlerasyncore\n\u2014 Asynchronous socket handleraudioop\n\u2014 Manipulate raw audio datacgi\n\u2014 Common Gateway Interface supportcgitb\n\u2014 Traceback manager for CGI scriptschunk\n\u2014 Read IFF chunked datacrypt\n\u2014 Function to check Unix passwordsdistutils\n\u2014 Building and installing Python modulesimghdr\n\u2014 Determine the type of an imageimp\n\u2014 Access the import internalsmailcap\n\u2014 Mailcap file handlingmsilib\n\u2014 Read and write Microsoft Installer filesnis\n\u2014 Interface to Sun\u2019s NIS (Yellow Pages)nntplib\n\u2014 NNTP protocol clientossaudiodev\n\u2014 Access to OSS-compatible audio devicespipes\n\u2014 Interface to shell pipelinessmtpd\n\u2014 SMTP Serversndhdr\n\u2014 Determine type of sound filesspwd\n\u2014 The shadow password databasesunau\n\u2014 Read and write Sun AU filestelnetlib\n\u2014 Telnet clientuu\n\u2014 Encode and decode uuencode filesxdrlib\n\u2014 Encode and decode XDR data", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 267}
+{"url": "https://docs.python.org/3/howto/index.html", "title": "Python HOWTOs", "content": "Python HOWTOs\u00b6\nPython HOWTOs are documents that cover a specific topic in-depth. 
Modeled on the Linux Documentation Project\u2019s HOWTO collection, this collection is an effort to foster documentation that\u2019s more detailed than the Python Library Reference.\nGeneral:\nAdvanced development:\nDebugging and profiling:", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 77} +{"url": "https://docs.python.org/3/library/tty.html", "title": " \u2014 Terminal control functions", "content": "tty\n\u2014 Terminal control functions\u00b6\nSource code: Lib/tty.py\nThe tty\nmodule defines functions for putting the tty into cbreak and raw\nmodes.\nAvailability: Unix.\nBecause it requires the termios\nmodule, it will work only on Unix.\nThe tty\nmodule defines the following functions:\n- tty.cfmakeraw(mode)\u00b6\nConvert the tty attribute list mode, which is a list like the one returned by\ntermios.tcgetattr()\n, to that of a tty in raw mode.Added in version 3.12.\n- tty.cfmakecbreak(mode)\u00b6\nConvert the tty attribute list mode, which is a list like the one returned by\ntermios.tcgetattr()\n, to that of a tty in cbreak mode.This clears the\nECHO\nandICANON\nlocal mode flags in mode as well as setting the minimum input to 1 byte with no delay.Added in version 3.12.\nChanged in version 3.12.2: The\nICRNL\nflag is no longer cleared. This matches Linux and macOSstty cbreak\nbehavior and whatsetcbreak()\nhistorically did.\n- tty.setraw(fd, when=termios.TCSAFLUSH)\u00b6\nChange the mode of the file descriptor fd to raw. If when is omitted, it defaults to\ntermios.TCSAFLUSH\n, and is passed totermios.tcsetattr()\n. The return value oftermios.tcgetattr()\nis saved before setting fd to raw mode; this value is returned.Changed in version 3.12: The return value is now the original tty attributes, instead of\nNone\n.\n- tty.setcbreak(fd, when=termios.TCSAFLUSH)\u00b6\nChange the mode of file descriptor fd to cbreak. If when is omitted, it defaults to\ntermios.TCSAFLUSH\n, and is passed totermios.tcsetattr()\n. 
The return value of\ntermios.tcgetattr()\nis saved before setting fd to cbreak mode; this value is returned.This clears the\nECHO\nand\nICANON\nlocal mode flags as well as setting the minimum input to 1 byte with no delay.Changed in version 3.12: The return value is now the original tty attributes, instead of\nNone\n.Changed in version 3.12.2: The\nICRNL\nflag is no longer cleared. This restores the behavior of Python 3.11 and earlier as well as matching what Linux, macOS, & BSDs describe in their\nstty(1)\nman pages regarding cbreak mode.\nSee also\n- Module\ntermios\nLow-level terminal control interface.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 515} +{"url": "https://docs.python.org/3/c-api/structures.html", "title": "Common Object Structures", "content": "Common Object Structures\u00b6\nThere are a large number of structures which are used in the definition of object types for Python. This section describes these structures and how they are used.\nBase object types and macros\u00b6\nAll Python objects ultimately share a small number of fields at the beginning\nof the object\u2019s representation in memory. These are represented by the\nPyObject\nand PyVarObject\ntypes, which are defined, in turn,\nby the expansions of some macros also used, whether directly or indirectly, in\nthe definition of all other Python objects. Additional macros can be found\nunder reference counting.\n-\ntype PyObject\u00b6\n- Part of the Limited API. (Only some members are part of the stable ABI.)\nAll object types are extensions of this type. This is a type which contains the information Python needs to treat a pointer to an object as an object. In a normal \u201crelease\u201d build, it contains only the object\u2019s reference count and a pointer to the corresponding type object. 
Nothing is actually declared to be a\nPyObject\n, but every pointer to a Python object can be cast to a PyObject*.The members must not be accessed directly; instead use macros such as\nPy_REFCNT\nandPy_TYPE\n.-\nPy_ssize_t ob_refcnt\u00b6\n- Part of the Stable ABI.\nThe object\u2019s reference count, as returned by\nPy_REFCNT\n. Do not use this field directly; instead use functions and macros such asPy_REFCNT\n,Py_INCREF()\nandPy_DecRef()\n.The field type may be different from\nPy_ssize_t\n, depending on build configuration and platform.\n-\nPyTypeObject *ob_type\u00b6\n- Part of the Stable ABI.\nThe object\u2019s type. Do not use this field directly; use\nPy_TYPE\nandPy_SET_TYPE()\ninstead.\n-\nPy_ssize_t ob_refcnt\u00b6\n-\ntype PyVarObject\u00b6\n- Part of the Limited API. (Only some members are part of the stable ABI.)\nAn extension of\nPyObject\nthat adds theob_size\nfield. This is intended for objects that have some notion of length.As with\nPyObject\n, the members must not be accessed directly; instead use macros such asPy_SIZE\n,Py_REFCNT\nandPy_TYPE\n.-\nPy_ssize_t ob_size\u00b6\n- Part of the Stable ABI.\nA size field, whose contents should be considered an object\u2019s internal implementation detail.\nDo not use this field directly; use\nPy_SIZE\ninstead.Object creation functions such as\nPyObject_NewVar()\nwill generally set this field to the requested size (number of items). After creation, arbitrary values can be stored inob_size\nusingPy_SET_SIZE\n.To get an object\u2019s publicly exposed length, as returned by the Python function\nlen()\n, usePyObject_Length()\ninstead.\n-\nPy_ssize_t ob_size\u00b6\n-\nPyObject_HEAD\u00b6\nThis is a macro used when declaring new types which represent objects without a varying length. 
The PyObject_HEAD macro expands to:\nPyObject ob_base;\nSee documentation of\nPyObject\nabove.\n-\nPyObject_VAR_HEAD\u00b6\nThis is a macro used when declaring new types which represent objects with a length that varies from instance to instance. The PyObject_VAR_HEAD macro expands to:\nPyVarObject ob_base;\nSee documentation of\nPyVarObject\nabove.\n-\nPyTypeObject PyBaseObject_Type\u00b6\n- Part of the Stable ABI.\nThe base class of all other objects, the same as\nobject\nin Python.\n-\nint Py_Is(PyObject *x, PyObject *y)\u00b6\n- Part of the Stable ABI since version 3.10.\nTest if the x object is the y object, the same as\nx is y\nin Python.Added in version 3.10.\n-\nint Py_IsNone(PyObject *x)\u00b6\n- Part of the Stable ABI since version 3.10.\nTest if an object is the\nNone\nsingleton, the same asx is None\nin Python.Added in version 3.10.\n-\nint Py_IsTrue(PyObject *x)\u00b6\n- Part of the Stable ABI since version 3.10.\nTest if an object is the\nTrue\nsingleton, the same asx is True\nin Python.Added in version 3.10.\n-\nint Py_IsFalse(PyObject *x)\u00b6\n- Part of the Stable ABI since version 3.10.\nTest if an object is the\nFalse\nsingleton, the same asx is False\nin Python.Added in version 3.10.\n-\nPyTypeObject *Py_TYPE(PyObject *o)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI since version 3.14.\nGet the type of the Python object o.\nThe returned reference is borrowed from o. Do not release it with\nPy_DECREF()\nor similar.\n-\nint Py_IS_TYPE(PyObject *o, PyTypeObject *type)\u00b6\nReturn non-zero if the object o type is type. Return zero otherwise. Equivalent to:\nPy_TYPE(o) == type\n.Added in version 3.9.\n-\nvoid Py_SET_TYPE(PyObject *o, PyTypeObject *type)\u00b6\nSet the type of object o to type, without any checking or reference counting.\nThis is a very low-level operation. 
Consider instead setting the Python attribute\n__class__\nusingPyObject_SetAttrString()\nor similar.Note that assigning an incompatible type can lead to undefined behavior.\nIf type is a heap type, the caller must create a new reference to it. Similarly, if the old type of o is a heap type, the caller must release a reference to that type.\nAdded in version 3.9.\n-\nPy_ssize_t Py_SIZE(PyVarObject *o)\u00b6\nGet the\nob_size\nfield of o.Changed in version 3.11:\nPy_SIZE()\nis changed to an inline static function. The parameter type is no longer const PyVarObject*.\n-\nvoid Py_SET_SIZE(PyVarObject *o, Py_ssize_t size)\u00b6\nSet the\nob_size\nfield of o to size.Added in version 3.9.\n-\nPyObject_HEAD_INIT(type)\u00b6\nThis is a macro which expands to initialization values for a new\nPyObject\ntype. This macro expands to:_PyObject_EXTRA_INIT 1, type,\n-\nPyVarObject_HEAD_INIT(type, size)\u00b6\nThis is a macro which expands to initialization values for a new\nPyVarObject\ntype, including theob_size\nfield. This macro expands to:_PyObject_EXTRA_INIT 1, type, size,\nImplementing functions and methods\u00b6\n-\ntype PyCFunction\u00b6\n- Part of the Stable ABI.\nType of the functions used to implement most Python callables in C. Functions of this type take two PyObject* parameters and return one such value. If the return value is\nNULL\n, an exception shall have been set. If notNULL\n, the return value is interpreted as the return value of the function as exposed in Python. The function must return a new reference.The function signature is:\nPyObject *PyCFunction(PyObject *self, PyObject *args);\n-\ntype PyCFunctionWithKeywords\u00b6\n- Part of the Stable ABI.\nType of the functions used to implement Python callables in C with signature METH_VARARGS | METH_KEYWORDS. 
The function signature is:\nPyObject *PyCFunctionWithKeywords(PyObject *self, PyObject *args, PyObject *kwargs);\n-\ntype PyCFunctionFast\u00b6\n- Part of the Stable ABI since version 3.13.\nType of the functions used to implement Python callables in C with signature\nMETH_FASTCALL\n. The function signature is:PyObject *PyCFunctionFast(PyObject *self, PyObject *const *args, Py_ssize_t nargs);\n-\ntype PyCFunctionFastWithKeywords\u00b6\n- Part of the Stable ABI since version 3.13.\nType of the functions used to implement Python callables in C with signature METH_FASTCALL | METH_KEYWORDS. The function signature is:\nPyObject *PyCFunctionFastWithKeywords(PyObject *self, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames);\n-\ntype PyCMethod\u00b6\nType of the functions used to implement Python callables in C with signature METH_METHOD | METH_FASTCALL | METH_KEYWORDS. The function signature is:\nPyObject *PyCMethod(PyObject *self, PyTypeObject *defining_class, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames)\nAdded in version 3.9.\n-\ntype PyMethodDef\u00b6\n- Part of the Stable ABI (including all members).\nStructure used to describe a method of an extension type. This structure has four fields:\n-\nconst char *ml_name\u00b6\nName of the method.\n-\nPyCFunction ml_meth\u00b6\nPointer to the C implementation.\n-\nint ml_flags\u00b6\nFlags bits indicating how the call should be constructed.\n-\nconst char *ml_doc\u00b6\nPoints to the contents of the docstring.\n-\nconst char *ml_name\u00b6\nThe ml_meth\nis a C function pointer.\nThe functions may be of different\ntypes, but they always return PyObject*. 
If the function is not of\nthe PyCFunction\n, the compiler will require a cast in the method table.\nEven though PyCFunction\ndefines the first parameter as\nPyObject*, it is common that the method implementation uses the\nspecific C type of the self object.\nThe ml_flags\nfield is a bitfield which can include\nthe following flags.\nThe individual flags indicate either a calling convention or a binding\nconvention.\nThere are these calling conventions:\n-\nMETH_VARARGS\u00b6\n- Part of the Stable ABI.\nThis is the typical calling convention, where the methods have the type\nPyCFunction\n. The function expects two PyObject* values. The first one is the self object for methods; for module functions, it is the module object. The second parameter (often called args) is a tuple object representing all arguments. This parameter is typically processed usingPyArg_ParseTuple()\norPyArg_UnpackTuple()\n.\n-\nMETH_KEYWORDS\u00b6\nCan only be used in certain combinations with other flags: METH_VARARGS | METH_KEYWORDS, METH_FASTCALL | METH_KEYWORDS and METH_METHOD | METH_FASTCALL | METH_KEYWORDS.\n- METH_VARARGS | METH_KEYWORDS\nMethods with these flags must be of type\nPyCFunctionWithKeywords\n. The function expects three parameters: self, args, kwargs where kwargs is a dictionary of all the keyword arguments or possiblyNULL\nif there are no keyword arguments. The parameters are typically processed usingPyArg_ParseTupleAndKeywords()\n.\n-\nMETH_FASTCALL\u00b6\n- Part of the Stable ABI since version 3.7.\nFast calling convention supporting only positional arguments. The methods have the type\nPyCFunctionFast\n. 
The first parameter is self, the second parameter is a C array of PyObject* values indicating the arguments and the third parameter is the number of arguments (the length of the array).Added in version 3.7.\nChanged in version 3.10:\nMETH_FASTCALL\nis now part of the stable ABI.\n- METH_FASTCALL | METH_KEYWORDS\nExtension of\nMETH_FASTCALL\nsupporting also keyword arguments, with methods of typePyCFunctionFastWithKeywords\n. Keyword arguments are passed the same way as in the vectorcall protocol: there is an additional fourth PyObject* parameter which is a tuple representing the names of the keyword arguments (which are guaranteed to be strings) or possiblyNULL\nif there are no keywords. The values of the keyword arguments are stored in the args array, after the positional arguments.Added in version 3.7.\n-\nMETH_METHOD\u00b6\n- Part of the Stable ABI since version 3.7.\nCan only be used in the combination with other flags: METH_METHOD | METH_FASTCALL | METH_KEYWORDS.\n- METH_METHOD | METH_FASTCALL | METH_KEYWORDS\nExtension of METH_FASTCALL | METH_KEYWORDS supporting the defining class, that is, the class that contains the method in question. The defining class might be a superclass of\nPy_TYPE(self)\n.The method needs to be of type\nPyCMethod\n, the same as forMETH_FASTCALL | METH_KEYWORDS\nwithdefining_class\nargument added afterself\n.Added in version 3.9.\n-\nMETH_NOARGS\u00b6\n- Part of the Stable ABI.\nMethods without parameters don\u2019t need to check whether arguments are given if they are listed with the\nMETH_NOARGS\nflag. They need to be of typePyCFunction\n. The first parameter is typically named self and will hold a reference to the module or object instance. In all cases the second parameter will beNULL\n.The function must have 2 parameters. 
Since the second parameter is unused,\nPy_UNUSED\ncan be used to prevent a compiler warning.\n-\nMETH_O\u00b6\n- Part of the Stable ABI.\nMethods with a single object argument can be listed with the\nMETH_O\nflag, instead of invokingPyArg_ParseTuple()\nwith a\"O\"\nargument. They have the typePyCFunction\n, with the self parameter, and a PyObject* parameter representing the single argument.\nThese two constants are not used to indicate the calling convention but the binding when used with methods of classes. These may not be used for functions defined for modules. At most one of these flags may be set for any given method.\n-\nMETH_CLASS\u00b6\n- Part of the Stable ABI.\nThe method will be passed the type object as the first parameter rather than an instance of the type. This is used to create class methods, similar to what is created when using the\nclassmethod()\nbuilt-in function.\n-\nMETH_STATIC\u00b6\n- Part of the Stable ABI.\nThe method will be passed\nNULL\nas the first parameter rather than an instance of the type. This is used to create static methods, similar to what is created when using thestaticmethod()\nbuilt-in function.\nOne other constant controls whether a method is loaded in place of another definition with the same method name.\n-\nMETH_COEXIST\u00b6\n- Part of the Stable ABI.\nThe method will be loaded in place of existing definitions. Without METH_COEXIST, the default is to skip repeated definitions. Since slot wrappers are loaded before the method table, the existence of a sq_contains slot, for example, would generate a wrapped method named\n__contains__()\nand preclude the loading of a corresponding PyCFunction with the same name. With the flag defined, the PyCFunction will be loaded in place of the wrapper object and will co-exist with the slot. This is helpful because calls to PyCFunctions are optimized more than wrapper object calls.\n-\nPyTypeObject PyCMethod_Type\u00b6\nThe type object corresponding to Python C method objects. 
This is available as\ntypes.BuiltinMethodType\nin the Python layer.\n-\nint PyCMethod_Check(PyObject *op)\u00b6\nReturn true if op is an instance of the\nPyCMethod_Type\ntype or a subtype of it. This function always succeeds.\n-\nint PyCMethod_CheckExact(PyObject *op)\u00b6\nThis is the same as\nPyCMethod_Check()\n, but does not account for subtypes.\n-\nPyObject *PyCMethod_New(PyMethodDef *ml, PyObject *self, PyObject *module, PyTypeObject *cls)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.9.\nTurn ml into a Python callable object. The caller must ensure that ml outlives the callable. Typically, ml is defined as a static variable.\nThe self parameter will be passed as the self argument to the C function in\nml->ml_meth\nwhen invoked. self can beNULL\n.The callable object\u2019s\n__module__\nattribute can be set from the given module argument. module should be a Python string, which will be used as name of the module the function is defined in. If unavailable, it can be set toNone\norNULL\n.See also\nThe cls parameter will be passed as the defining_class argument to the C function. Must be set if\nMETH_METHOD\nis set onml->ml_flags\n.Added in version 3.9.\n-\nPyTypeObject PyCFunction_Type\u00b6\n- Part of the Stable ABI.\nThe type object corresponding to Python C function objects. This is available as\ntypes.BuiltinFunctionType\nin the Python layer.\n-\nint PyCFunction_Check(PyObject *op)\u00b6\nReturn true if op is an instance of the\nPyCFunction_Type\ntype or a subtype of it. This function always succeeds.\n-\nint PyCFunction_CheckExact(PyObject *op)\u00b6\nThis is the same as\nPyCFunction_Check()\n, but does not account for subtypes.\n-\nPyObject *PyCFunction_NewEx(PyMethodDef *ml, PyObject *self, PyObject *module)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nEquivalent to\nPyCMethod_New(ml, self, module, NULL)\n.\n-\nPyObject *PyCFunction_New(PyMethodDef *ml, PyObject *self)\u00b6\n- Return value: New reference. 
Part of the Stable ABI since version 3.4.\nEquivalent to\nPyCMethod_New(ml, self, NULL, NULL)\n.\n-\nint PyCFunction_GetFlags(PyObject *func)\u00b6\n- Part of the Stable ABI.\nGet the function\u2019s flags on func as they were passed to\nml_flags\n.If func is not a C function object, this fails with an exception. func must not be\nNULL\n.This function returns the function\u2019s flags on success, and\n-1\nwith an exception set on failure.\n-\nint PyCFunction_GET_FLAGS(PyObject *func)\u00b6\nThis is the same as\nPyCFunction_GetFlags()\n, but without error or type checking.\n-\nPyCFunction PyCFunction_GetFunction(PyObject *func)\u00b6\n- Part of the Stable ABI.\nGet the function pointer on func as it was passed to\nml_meth\n.If func is not a C function object, this fails with an exception. func must not be\nNULL\n.This function returns the function pointer on success, and\nNULL\nwith an exception set on failure.\n-\nint PyCFunction_GET_FUNCTION(PyObject *func)\u00b6\nThis is the same as\nPyCFunction_GetFunction()\n, but without error or type checking.\n-\nPyObject *PyCFunction_GetSelf(PyObject *func)\u00b6\n- Part of the Stable ABI.\nGet the \u201cself\u201d object on func. This is the object that would be passed to the first argument of a\nPyCFunction\n. For C function objects created through aPyMethodDef\non aPyModuleDef\n, this is the resulting module object.If func is not a C function object, this fails with an exception. func must not be\nNULL\n.This function returns a borrowed reference to the \u201cself\u201d object on success, and\nNULL\nwith an exception set on failure.\n-\nPyObject *PyCFunction_GET_SELF(PyObject *func)\u00b6\nThis is the same as\nPyCFunction_GetSelf()\n, but without error or type checking.\nAccessing attributes of extension types\u00b6\n-\ntype PyMemberDef\u00b6\n- Part of the Stable ABI (including all members).\nStructure which describes an attribute of a type which corresponds to a C struct member. 
When defining a class, put a NULL-terminated array of these structures in the\ntp_members\nslot.Its fields are, in order:\n-\nconst char *name\u00b6\nName of the member. A NULL value marks the end of a\nPyMemberDef[]\narray.The string should be static, no copy is made of it.\n-\nint type\u00b6\nThe type of the member in the C struct. See Member types for the possible values.\n-\nPy_ssize_t offset\u00b6\nThe offset in bytes at which the member is located in the type\u2019s object struct.\n-\nint flags\u00b6\nZero or more of the Member flags, combined using bitwise OR.\n-\nconst char *doc\u00b6\nThe docstring, or NULL. The string should be static, no copy is made of it. Typically, it is defined using\nPyDoc_STR\n.\nBy default (when\nflags\nis\n0\n), members allow both read and write access. Use the\nPy_READONLY\nflag for read-only access. Certain types, like\nPy_T_STRING\n, imply\nPy_READONLY\n. Only\nPy_T_OBJECT_EX\n(and legacy\nT_OBJECT\n) members can be deleted.For heap-allocated types (created using\nPyType_FromSpec()\nor similar),\nPyMemberDef\nmay contain a definition for the special member\n\"__vectorcalloffset__\"\n, corresponding to\ntp_vectorcall_offset\nin type objects. This member must be defined with\nPy_T_PYSSIZET\n, and either\nPy_READONLY\nor\nPy_READONLY | Py_RELATIVE_OFFSET\n. For example:\nstatic PyMemberDef spam_type_members[] = { {\"__vectorcalloffset__\", Py_T_PYSSIZET, offsetof(Spam_object, vectorcall), Py_READONLY}, {NULL} /* Sentinel */ };\n(You may need to\n#include <stddef.h>\nfor offsetof()\n.)The legacy offsets\ntp_dictoffset\nand\ntp_weaklistoffset\ncan be defined similarly using\n\"__dictoffset__\"\nand\n\"__weaklistoffset__\"\nmembers, but extensions are strongly encouraged to use\nPy_TPFLAGS_MANAGED_DICT\nand\nPy_TPFLAGS_MANAGED_WEAKREF\ninstead.Changed in version 3.12:\nPyMemberDef\nis always available. 
Previously, it required including\"structmember.h\"\n.Changed in version 3.14:\nPy_RELATIVE_OFFSET\nis now allowed for\"__vectorcalloffset__\"\n,\"__dictoffset__\"\nand\"__weaklistoffset__\"\n. -\nconst char *name\u00b6\n-\nPyObject *PyMember_GetOne(const char *obj_addr, struct PyMemberDef *m)\u00b6\n- Part of the Stable ABI.\nGet an attribute belonging to the object at address obj_addr. The attribute is described by\nPyMemberDef\nm. ReturnsNULL\non error.Changed in version 3.12:\nPyMember_GetOne\nis always available. Previously, it required including\"structmember.h\"\n.\n-\nint PyMember_SetOne(char *obj_addr, struct PyMemberDef *m, PyObject *o)\u00b6\n- Part of the Stable ABI.\nSet an attribute belonging to the object at address obj_addr to object o. The attribute to set is described by\nPyMemberDef\nm. Returns0\nif successful and a negative value on failure.Changed in version 3.12:\nPyMember_SetOne\nis always available. Previously, it required including\"structmember.h\"\n.\nMember flags\u00b6\nThe following flags can be used with PyMemberDef.flags\n:\n-\nPy_READONLY\u00b6\n- Part of the Stable ABI since version 3.12.\nNot writable.\n-\nPy_AUDIT_READ\u00b6\n- Part of the Stable ABI since version 3.12.\nEmit an\nobject.__getattr__\naudit event before reading.\n-\nPy_RELATIVE_OFFSET\u00b6\n- Part of the Stable ABI since version 3.12.\nIndicates that the\noffset\nof thisPyMemberDef\nentry indicates an offset from the subclass-specific data, rather than fromPyObject\n.Can only be used as part of the\nPy_tp_members\nslot\nwhen creating a class using negativebasicsize\n. It is mandatory in that case. 
When setting\ntp_members\nfrom the slot during class creation, Python clears the flag and sets\nPyMemberDef.offset\nto the offset from the\nPyObject\nstruct.\nChanged in version 3.10: The RESTRICTED\n, READ_RESTRICTED\nand\nWRITE_RESTRICTED\nmacros available with\n#include \"structmember.h\"\nare deprecated.\nREAD_RESTRICTED\nand RESTRICTED\nare equivalent to\nPy_AUDIT_READ\n; WRITE_RESTRICTED\ndoes nothing.\nChanged in version 3.12: The READONLY\nmacro was renamed to Py_READONLY\n.\nThe PY_AUDIT_READ\nmacro was renamed with the Py_\nprefix.\nThe new names are now always available.\nPreviously, these required #include \"structmember.h\"\n.\nThe header is still available and it provides the old names.\nMember types\u00b6\nPyMemberDef.type\ncan be one of the following macros corresponding\nto various C types.\nWhen the member is accessed in Python, it will be converted to the\nequivalent Python type.\nWhen it is set from Python, it will be converted back to the C type.\nIf that is not possible, an exception such as TypeError\nor\nValueError\nis raised.\nUnless marked (D), attributes defined this way cannot be deleted\nusing e.g. del\nor delattr()\n.\n| Macro name | C type | Python type |\n|---|---|---|\n| Py_T_BYTE | char | int |\n| Py_T_SHORT | short | int |\n| Py_T_INT | int | int |\n| Py_T_LONG | long | int |\n| Py_T_LONGLONG | long long | int |\n| Py_T_UBYTE | unsigned char | int |\n| Py_T_UINT | unsigned int | int |\n| Py_T_USHORT | unsigned short | int |\n| Py_T_ULONG | unsigned long | int |\n| Py_T_ULONGLONG | unsigned long long | int |\n| Py_T_PYSSIZET | Py_ssize_t | int |\n| Py_T_FLOAT | float | float |\n| Py_T_DOUBLE | double | float |\n| Py_T_BOOL | char (written as 0 or 1) | bool |\n| Py_T_STRING | const char* (*) | str (RO) |\n| Py_T_STRING_INPLACE | const char[] (*) | str (RO) |\n| Py_T_CHAR | char (0-127) | str (**) |\n| Py_T_OBJECT_EX | PyObject* | object (D) |\n(*): Zero-terminated, UTF8-encoded C string. With\nPy_T_STRING\nthe C representation is a pointer; with\nPy_T_STRING_INPLACE\nthe string is stored directly in the structure.\n(**): String of length 1. Only ASCII is accepted.\n(RO): Implies\nPy_READONLY\n.\n(D): Can be deleted, in which case the pointer is set to\nNULL\n. 
Reading aNULL\npointer raisesAttributeError\n.\nAdded in version 3.12: In previous versions, the macros were only available with\n#include \"structmember.h\"\nand were named without the Py_\nprefix\n(e.g. as T_INT\n).\nThe header is still available and contains the old names, along with\nthe following deprecated types:\n-\nT_OBJECT\u00b6\nLike\nPy_T_OBJECT_EX\n, butNULL\nis converted toNone\n. This results in surprising behavior in Python: deleting the attribute effectively sets it toNone\n.\n-\nT_NONE\u00b6\nAlways\nNone\n. Must be used withPy_READONLY\n.\nDefining Getters and Setters\u00b6\n-\ntype PyGetSetDef\u00b6\n- Part of the Stable ABI (including all members).\nStructure to define property-like access for a type. See also description of the\nPyTypeObject.tp_getset\nslot.-\nconst char *name\u00b6\nattribute name\n-\nsetter set\u00b6\nOptional C function to set or delete the attribute. If\nNULL\n, the attribute is read-only.\n-\nconst char *doc\u00b6\noptional docstring\n-\nvoid *closure\u00b6\nOptional user data pointer, providing additional data for getter and setter.\n-\nconst char *name\u00b6\n-\ntypedef PyObject *(*getter)(PyObject*, void*)\u00b6\n- Part of the Stable ABI.\nThe\nget\nfunction takes one PyObject* parameter (the instance) and a user data pointer (the associatedclosure\n):It should return a new reference on success or\nNULL\nwith a set exception on failure.\n-\ntypedef int (*setter)(PyObject*, PyObject*, void*)\u00b6\n- Part of the Stable ABI.\nset\nfunctions take two PyObject* parameters (the instance and the value to be set) and a user data pointer (the associatedclosure\n):In case the attribute should be deleted the second parameter is\nNULL\n. 
Should return0\non success or-1\nwith a set exception on failure.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 5824} +{"url": "https://docs.python.org/3/c-api/iter.html", "title": "Iterator Protocol", "content": "Iterator Protocol\u00b6\nThere are two functions specifically for working with iterators.\n-\nint PyIter_Check(PyObject *o)\u00b6\n- Part of the Stable ABI since version 3.8.\nReturn non-zero if the object o can be safely passed to\nPyIter_NextItem()\nand0\notherwise. This function always succeeds.\n-\nint PyAIter_Check(PyObject *o)\u00b6\n- Part of the Stable ABI since version 3.10.\nReturn non-zero if the object o provides the\nAsyncIterator\nprotocol, and0\notherwise. This function always succeeds.Added in version 3.10.\n-\nint PyIter_NextItem(PyObject *iter, PyObject **item)\u00b6\n- Part of the Stable ABI since version 3.14.\nReturn\n1\nand set item to a strong reference of the next value of the iterator iter on success. Return0\nand set item toNULL\nif there are no remaining values. Return-1\n, set item toNULL\nand set an exception on error.Added in version 3.14.\n-\nPyObject *PyIter_Next(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nThis is an older version of\nPyIter_NextItem()\n, which is retained for backwards compatibility. PreferPyIter_NextItem()\n.Return the next value from the iterator o. The object must be an iterator according to\nPyIter_Check()\n(it is up to the caller to check this). If there are no remaining values, returnsNULL\nwith no exception set. If an error occurs while retrieving the item, returnsNULL\nand passes along the exception.\n-\ntype PySendResult\u00b6\nThe enum value used to represent different results of\nPyIter_Send()\n.Added in version 3.10.\n-\nPySendResult PyIter_Send(PyObject *iter, PyObject *arg, PyObject **presult)\u00b6\n- Part of the Stable ABI since version 3.10.\nSends the arg value into the iterator iter. 
Returns:\nPYGEN_RETURN\nif iterator returns. Return value is returned via presult.\nPYGEN_NEXT\nif iterator yields. Yielded value is returned via presult.\nPYGEN_ERROR\nif iterator has raised an exception. presult is set to\nNULL\n.\nAdded in version 3.10.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 469} +{"url": "https://docs.python.org/3/library/language.html", "title": "Python Language Services", "content": "Python Language Services\u00b6\nPython provides a number of modules to assist in working with the Python language. These modules support tokenizing, parsing, syntax analysis, bytecode disassembly, and various other facilities.\nThese modules include:\nast\n\u2014 Abstract syntax trees\nsymtable\n\u2014 Access to the compiler\u2019s symbol tables\ntoken\n\u2014 Constants used with Python parse trees\nkeyword\n\u2014 Testing for Python keywords\ntokenize\n\u2014 Tokenizer for Python source\ntabnanny\n\u2014 Detection of ambiguous indentation\npyclbr\n\u2014 Python module browser support\npy_compile\n\u2014 Compile Python source files\ncompileall\n\u2014 Byte-compile Python libraries\ndis\n\u2014 Disassembler for Python bytecode\npickletools\n\u2014 Tools for pickle developers", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 171} +{"url": "https://docs.python.org/3/c-api/hash.html", "title": "PyHash API", "content": "PyHash API\u00b6\nSee also the PyTypeObject.tp_hash\nmember and Hashing of numeric types.\n-\ntype Py_hash_t\u00b6\nHash value type: signed integer.\nAdded in version 3.2.\n-\ntype Py_uhash_t\u00b6\nHash value type: unsigned integer.\nAdded in version 3.2.\n-\nPy_HASH_ALGORITHM\u00b6\nA numerical value indicating the algorithm for hashing of\nstr\n,\nbytes\n, and\nmemoryview\n.The algorithm name is exposed by\nsys.hash_info.algorithm\n.Added in version 3.4.\n-\nPy_HASH_FNV\u00b6\n-\nPy_HASH_SIPHASH24\u00b6\n-\nPy_HASH_SIPHASH13\u00b6\nNumerical values to compare to\nPy_HASH_ALGORITHM\nto determine which 
algorithm is used for hashing. The hash algorithm can be configured via the configure\n--with-hash-algorithm\noption.Added in version 3.4: Add\nPy_HASH_FNV\nand\nPy_HASH_SIPHASH24\n.Added in version 3.11: Add\nPy_HASH_SIPHASH13\n.\n-\nPy_HASH_CUTOFF\u00b6\nBuffers of length in range\n[1, Py_HASH_CUTOFF)\nare hashed using DJBX33A instead of the algorithm described by\nPy_HASH_ALGORITHM\n.A\nPy_HASH_CUTOFF\nof 0 disables the optimization.\nPy_HASH_CUTOFF\nmust be non-negative and less than or equal to 7.\n32-bit platforms should use a cutoff smaller than 64-bit platforms because it is easier to create colliding strings. A cutoff of 7 on 64-bit platforms and 5 on 32-bit platforms should provide a decent safety margin.\nThis corresponds to the\nsys.hash_info.cutoff\nconstant.Added in version 3.4.\n-\nPyHASH_MODULUS\u00b6\nThe Mersenne prime\nP = 2**n - 1\n, used for the numeric hash scheme.This corresponds to the\nsys.hash_info.modulus\nconstant.Added in version 3.13.\n-\nPyHASH_BITS\u00b6\nThe exponent\nn\nof\nP\nin\nPyHASH_MODULUS\n.Added in version 3.13.\n-\nPyHASH_MULTIPLIER\u00b6\nPrime multiplier used in string and various other hashes.\nAdded in version 3.13.\n-\nPyHASH_INF\u00b6\nThe hash value returned for a positive infinity.\nThis corresponds to the\nsys.hash_info.inf\nconstant.Added in version 3.13.\n-\nPyHASH_IMAG\u00b6\nThe multiplier used for the imaginary part of a complex number.\nThis corresponds to the\nsys.hash_info.imag\nconstant.Added in version 3.13.\n-\ntype PyHash_FuncDef\u00b6\nHash function definition used by\nPyHash_GetFuncDef()\n.-\nPy_hash_t (*const hash)(const void*, Py_ssize_t)\u00b6\nHash function.\n-\nconst char *name\u00b6\nHash function name (UTF-8 encoded string).\nThis corresponds to the\nsys.hash_info.algorithm\nconstant.\n-\nconst int hash_bits\u00b6\nInternal size of the hash value in bits.\nThis corresponds to the\nsys.hash_info.hash_bits\nconstant.\n-\nconst int seed_bits\u00b6\nSize of seed input in bits.\nThis corresponds to 
the\nsys.hash_info.seed_bits\nconstant.\nAdded in version 3.4.\n-\nPy_hash_t (*const hash)(const void*, Py_ssize_t)\u00b6\n-\nPyHash_FuncDef *PyHash_GetFuncDef(void)\u00b6\nGet the hash function definition.\nSee also\nPEP 456 \u201cSecure and interchangeable hash algorithm\u201d.\nAdded in version 3.4.\n-\nPy_hash_t Py_HashPointer(const void *ptr)\u00b6\nHash a pointer value: process the pointer value as an integer (cast it to\nuintptr_t\ninternally). The pointer is not dereferenced.The function cannot fail: it cannot return\n-1\n.Added in version 3.13.\n-\nPy_hash_t Py_HashBuffer(const void *ptr, Py_ssize_t len)\u00b6\nCompute and return the hash value of a buffer of len bytes starting at address ptr. The hash is guaranteed to match that of\nbytes\n,memoryview\n, and other built-in objects that implement the buffer protocol.Use this function to implement hashing for immutable objects whose\ntp_richcompare\nfunction compares to another object\u2019s buffer.len must be greater than or equal to\n0\n.This function always succeeds.\nAdded in version 3.14.\n-\nPy_hash_t PyObject_GenericHash(PyObject *obj)\u00b6\nGeneric hashing function that is meant to be put into a type object\u2019s\ntp_hash\nslot. 
Its result only depends on the object\u2019s identity.CPython implementation detail: In CPython, it is equivalent to\nPy_HashPointer()\n.Added in version 3.13.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 921} +{"url": "https://docs.python.org/3/c-api/curses.html", "title": "Curses C API", "content": "Curses C API\u00b6\ncurses\nexposes a small C interface for extension modules.\nConsumers must include the header file py_curses.h\n(which is not\nincluded by default by Python.h\n) and import_curses()\nmust\nbe invoked, usually as part of the module initialisation function, to populate\nPyCurses_API\n.\nWarning\nNeither the C API nor the pure Python curses\nmodule are compatible\nwith subinterpreters.\n-\nimport_curses()\u00b6\nImport the curses C API. The macro does not need a semi-colon to be called.\nOn success, populate the\nPyCurses_API\npointer.On failure, set\nPyCurses_API\nto NULL and set an exception. The caller must check if an error occurred viaPyErr_Occurred()\n:import_curses(); // semi-colon is optional but recommended if (PyErr_Occurred()) { /* cleanup */ }\n-\nvoid **PyCurses_API\u00b6\nDynamically allocated object containing the curses C API. This variable is only available once\nimport_curses\nsucceeds.PyCurses_API[0]\ncorresponds toPyCursesWindow_Type\n.PyCurses_API[1]\n,PyCurses_API[2]\n, andPyCurses_API[3]\nare pointers to predicate functions of typeint (*)(void)\n.When called, these predicates return whether\ncurses.setupterm()\n,curses.initscr()\n, andcurses.start_color()\nhave been called respectively.See also the convenience macros\nPyCursesSetupTermCalled\n,PyCursesInitialised\n, andPyCursesInitialisedColor\n.Note\nThe number of entries in this structure is subject to changes. Consider using\nPyCurses_API_pointers\nto check if new fields are available or not.\n-\nPyCurses_API_pointers\u00b6\nThe number of accessible fields (\n4\n) inPyCurses_API\n. 
This number is incremented whenever new fields are added.\n-\nPyTypeObject PyCursesWindow_Type\u00b6\nThe heap type corresponding to\ncurses.window\n.\n-\nint PyCursesWindow_Check(PyObject *op)\u00b6\nReturn true if op is a\ncurses.window\ninstance, false otherwise.\nThe following macros are convenience macros expanding into C statements.\nIn particular, they can only be used as macro;\nor macro\n, but not\nmacro()\nor macro();\n.\n-\nPyCursesSetupTermCalled\u00b6\nMacro checking if\ncurses.setupterm()\nhas been called.The macro expansion is roughly equivalent to:\n{ typedef int (*predicate_t)(void); predicate_t was_setupterm_called = (predicate_t)PyCurses_API[1]; if (!was_setupterm_called()) { return NULL; } }\n-\nPyCursesInitialised\u00b6\nMacro checking if\ncurses.initscr()\nhas been called.The macro expansion is roughly equivalent to:\n{ typedef int (*predicate_t)(void); predicate_t was_initscr_called = (predicate_t)PyCurses_API[2]; if (!was_initscr_called()) { return NULL; } }\n-\nPyCursesInitialisedColor\u00b6\nMacro checking if\ncurses.start_color()\nhas been called.The macro expansion is roughly equivalent to:\n{ typedef int (*predicate_t)(void); predicate_t was_start_color_called = (predicate_t)PyCurses_API[3]; if (!was_start_color_called()) { return NULL; } }\nInternal data\u00b6\nThe following objects are exposed by the C API but should be considered internal-only.\n-\nPyCurses_CAPSULE_NAME\u00b6\nName of the curses capsule to pass to\nPyCapsule_Import()\n.Internal usage only. Use\nimport_curses\ninstead.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 748} +{"url": "https://docs.python.org/3/c-api/arg.html", "title": "Parsing arguments and building values", "content": "Parsing arguments and building values\u00b6\nThese functions are useful when creating your own extension functions and methods. 
Additional information and examples are available in Extending and Embedding the Python Interpreter.\nThe first three of these functions described, PyArg_ParseTuple()\n,\nPyArg_ParseTupleAndKeywords()\n, and PyArg_Parse()\n, all use format\nstrings which are used to tell the function about the expected arguments. The\nformat strings use the same syntax for each of these functions.\nParsing arguments\u00b6\nA format string consists of zero or more \u201cformat units.\u201d A format unit describes one Python object; it is usually a single character or a parenthesized sequence of format units. With a few exceptions, a format unit that is not a parenthesized sequence normally corresponds to a single address argument to these functions. In the following description, the quoted form is the format unit; the entry in (round) parentheses is the Python object type that matches the format unit; and the entry in [square] brackets is the type of the C variable(s) whose address should be passed.\nStrings and buffers\u00b6\nNote\nOn Python 3.12 and older, the macro PY_SSIZE_T_CLEAN\nmust be\ndefined before including Python.h\nto use all #\nvariants of\nformats (s#\n, y#\n, etc.) explained below.\nThis is not necessary on Python 3.13 and later.\nThese formats allow accessing an object as a contiguous chunk of memory. You don\u2019t have to provide raw storage for the returned unicode or bytes area.\nUnless otherwise stated, buffers are not NUL-terminated.\nThere are three ways strings and buffers can be converted to C:\nFormats such as\ny*\nands*\nfill aPy_buffer\nstructure. This locks the underlying buffer so that the caller can subsequently use the buffer even inside aPy_BEGIN_ALLOW_THREADS\nblock without the risk of mutable data being resized or destroyed. As a result, you have to callPyBuffer_Release()\nafter you have finished processing the data (or in any early abort case).The\nes\n,es#\n,et\nandet#\nformats allocate the result buffer. 
You have to callPyMem_Free()\nafter you have finished processing the data (or in any early abort case).Other formats take a\nstr\nor a read-only bytes-like object, such asbytes\n, and provide aconst char *\npointer to its buffer. In this case the buffer is \u201cborrowed\u201d: it is managed by the corresponding Python object, and shares the lifetime of this object. You won\u2019t have to release any memory yourself.To ensure that the underlying buffer may be safely borrowed, the object\u2019s\nPyBufferProcs.bf_releasebuffer\nfield must beNULL\n. This disallows common mutable objects such asbytearray\n, but also some read-only objects such asmemoryview\nofbytes\n.Besides this\nbf_releasebuffer\nrequirement, there is no check to verify whether the input object is immutable (e.g. whether it would honor a request for a writable buffer, or whether another thread can mutate the data).\ns\n(str\n) [const char *]Convert a Unicode object to a C pointer to a character string. A pointer to an existing string is stored in the character pointer variable whose address you pass. The C string is NUL-terminated. The Python string must not contain embedded null code points; if it does, a\nValueError\nexception is raised. Unicode objects are converted to C strings using'utf-8'\nencoding. If this conversion fails, aUnicodeError\nis raised.Note\nThis format does not accept bytes-like objects. If you want to accept filesystem paths and convert them to C character strings, it is preferable to use the\nO&\nformat withPyUnicode_FSConverter()\nas converter.Changed in version 3.5: Previously,\nTypeError\nwas raised when embedded null code points were encountered in the Python string.s*\n(str\nor bytes-like object) [Py_buffer]This format accepts Unicode objects as well as bytes-like objects. It fills a\nPy_buffer\nstructure provided by the caller. In this case the resulting C string may contain embedded NUL bytes. 
Unicode objects are converted to C strings using'utf-8'\nencoding.s#\n(str\n, read-only bytes-like object) [const char *,Py_ssize_t\n]Like\ns*\n, except that it provides a borrowed buffer. The result is stored into two C variables, the first one a pointer to a C string, the second one its length. The string may contain embedded null bytes. Unicode objects are converted to C strings using'utf-8'\nencoding.z\n(str\norNone\n) [const char *]Like\ns\n, but the Python object may also beNone\n, in which case the C pointer is set toNULL\n.z*\n(str\n, bytes-like object orNone\n) [Py_buffer]Like\ns*\n, but the Python object may also beNone\n, in which case thebuf\nmember of thePy_buffer\nstructure is set toNULL\n.z#\n(str\n, read-only bytes-like object orNone\n) [const char *,Py_ssize_t\n]Like\ns#\n, but the Python object may also beNone\n, in which case the C pointer is set toNULL\n.y\n(read-only bytes-like object) [const char *]This format converts a bytes-like object to a C pointer to a borrowed character string; it does not accept Unicode objects. The bytes buffer must not contain embedded null bytes; if it does, a\nValueError\nexception is raised.Changed in version 3.5: Previously,\nTypeError\nwas raised when embedded null bytes were encountered in the bytes buffer.y*\n(bytes-like object) [Py_buffer]This variant on\ns*\ndoesn\u2019t accept Unicode objects, only bytes-like objects. This is the recommended way to accept binary data.y#\n(read-only bytes-like object) [const char *,Py_ssize_t\n]This variant on\ns#\ndoesn\u2019t accept Unicode objects, only bytes-like objects.S\n(bytes\n) [PyBytesObject *]Requires that the Python object is a\nbytes\nobject, without attempting any conversion. RaisesTypeError\nif the object is not a bytes object. The C variable may also be declared as PyObject*.Y\n(bytearray\n) [PyByteArrayObject *]Requires that the Python object is a\nbytearray\nobject, without attempting any conversion. 
RaisesTypeError\nif the object is not abytearray\nobject. The C variable may also be declared as PyObject*.U\n(str\n) [PyObject *]Requires that the Python object is a Unicode object, without attempting any conversion. Raises\nTypeError\nif the object is not a Unicode object. The C variable may also be declared as PyObject*.w*\n(read-write bytes-like object) [Py_buffer]This format accepts any object which implements the read-write buffer interface. It fills a\nPy_buffer\nstructure provided by the caller. The buffer may contain embedded null bytes. The caller has to callPyBuffer_Release()\nwhen it is done with the buffer.es\n(str\n) [const char *encoding, char **buffer]This variant on\ns\nis used for encoding Unicode into a character buffer. It only works for encoded data without embedded NUL bytes.This format requires two arguments. The first is only used as input, and must be a const char* which points to the name of an encoding as a NUL-terminated string, or\nNULL\n, in which case'utf-8'\nencoding is used. An exception is raised if the named encoding is not known to Python. The second argument must be a char**; the value of the pointer it references will be set to a buffer with the contents of the argument text. The text will be encoded in the encoding specified by the first argument.PyArg_ParseTuple()\nwill allocate a buffer of the needed size, copy the encoded data into this buffer and adjust *buffer to reference the newly allocated storage. The caller is responsible for callingPyMem_Free()\nto free the allocated buffer after use.et\n(str\n,bytes\norbytearray\n) [const char *encoding, char **buffer]Same as\nes\nexcept that byte string objects are passed through without recoding them. Instead, the implementation assumes that the byte string object uses the encoding passed in as parameter.es#\n(str\n) [const char *encoding, char **buffer,Py_ssize_t\n*buffer_length]This variant on\ns#\nis used for encoding Unicode into a character buffer. 
Unlike thees\nformat, this variant allows input data which contains NUL characters.It requires three arguments. The first is only used as input, and must be a const char* which points to the name of an encoding as a NUL-terminated string, or\nNULL\n, in which case'utf-8'\nencoding is used. An exception is raised if the named encoding is not known to Python. The second argument must be a char**; the value of the pointer it references will be set to a buffer with the contents of the argument text. The text will be encoded in the encoding specified by the first argument. The third argument must be a pointer to an integer; the referenced integer will be set to the number of bytes in the output buffer.There are two modes of operation:\nIf *buffer points a\nNULL\npointer, the function will allocate a buffer of the needed size, copy the encoded data into this buffer and set *buffer to reference the newly allocated storage. The caller is responsible for callingPyMem_Free()\nto free the allocated buffer after usage.If *buffer points to a non-\nNULL\npointer (an already allocated buffer),PyArg_ParseTuple()\nwill use this location as the buffer and interpret the initial value of *buffer_length as the buffer size. It will then copy the encoded data into the buffer and NUL-terminate it. If the buffer is not large enough, aValueError\nwill be set.In both cases, *buffer_length is set to the length of the encoded data without the trailing NUL byte.\net#\n(str\n,bytes\norbytearray\n) [const char *encoding, char **buffer,Py_ssize_t\n*buffer_length]Same as\nes#\nexcept that byte string objects are passed through without recoding them. 
Instead, the implementation assumes that the byte string object uses the encoding passed in as parameter.\nChanged in version 3.12: u\n, u#\n, Z\n, and Z#\nare removed because they used a legacy\nPy_UNICODE*\nrepresentation.\nNumbers\u00b6\nThese formats allow representing Python numbers or single characters as C numbers.\nFormats that require int\n, float\nor complex\ncan\nalso use the corresponding special methods __index__()\n,\n__float__()\nor __complex__()\nto convert\nthe Python object to the required type.\nFor signed integer formats, OverflowError\nis raised if the value\nis out of range for the C type.\nFor unsigned integer formats, no range checking is done \u2014 the\nmost significant bits are silently truncated when the receiving field is too\nsmall to receive the value.\nb\n(int\n) [unsigned char]Convert a nonnegative Python integer to an unsigned tiny integer, stored in a C unsigned char.\nB\n(int\n) [unsigned char]Convert a Python integer to a tiny integer without overflow checking, stored in a C unsigned char.\nh\n(int\n) [short int]Convert a Python integer to a C short int.\nH\n(int\n) [unsigned short int]Convert a Python integer to a C unsigned short int, without overflow checking.\ni\n(int\n) [int]Convert a Python integer to a plain C int.\nI\n(int\n) [unsigned int]Convert a Python integer to a C unsigned int, without overflow checking.\nl\n(int\n) [long int]Convert a Python integer to a C long int.\nk\n(int\n) [unsigned long]Convert a Python integer to a C unsigned long without overflow checking.\nChanged in version 3.14: Use\n__index__()\nif available.L\n(int\n) [long long]Convert a Python integer to a C long long.\nK\n(int\n) [unsigned long long]Convert a Python integer to a C unsigned long long without overflow checking.\nChanged in version 3.14: Use\n__index__()\nif available.n\n(int\n) [Py_ssize_t\n]Convert a Python integer to a C\nPy_ssize_t\n.c\n(bytes\norbytearray\nof length 1) [char]Convert a Python byte, represented as 
a\nbytes\norbytearray\nobject of length 1, to a C char.Changed in version 3.3: Allow\nbytearray\nobjects.C\n(str\nof length 1) [int]Convert a Python character, represented as a\nstr\nobject of length 1, to a C int.f\n(float\n) [float]Convert a Python floating-point number to a C float.\nd\n(float\n) [double]Convert a Python floating-point number to a C double.\nD\n(complex\n) [Py_complex]Convert a Python complex number to a C\nPy_complex\nstructure.\nOther objects\u00b6\nO\n(object) [PyObject *]Store a Python object (without any conversion) in a C object pointer. The C program thus receives the actual object that was passed. A new strong reference to the object is not created (i.e. its reference count is not increased). The pointer stored is not\nNULL\n.O!\n(object) [typeobject, PyObject *]Store a Python object in a C object pointer. This is similar to\nO\n, but takes two C arguments: the first is the address of a Python type object, the second is the address of the C variable (of type PyObject*) into which the object pointer is stored. If the Python object does not have the required type,TypeError\nis raised.\nO&\n(object) [converter, address]Convert a Python object to a C variable through a converter function. This takes two arguments: the first is a function, the second is the address of a C variable (of arbitrary type), converted to void*. The converter function in turn is called as follows:\nstatus = converter(object, address);\nwhere object is the Python object to be converted and address is the void* argument that was passed to the\nPyArg_Parse*\nfunction. The returned status should be1\nfor a successful conversion and0\nif the conversion has failed. 
When the conversion fails, the converter function should raise an exception and leave the content of address unmodified.If the converter returns\nPy_CLEANUP_SUPPORTED\n, it may get called a second time if the argument parsing eventually fails, giving the converter a chance to release any memory that it had already allocated. In this second call, the object parameter will beNULL\n; address will have the same value as in the original call.Examples of converters:\nPyUnicode_FSConverter()\nandPyUnicode_FSDecoder()\n.Changed in version 3.1:\nPy_CLEANUP_SUPPORTED\nwas added.p\n(bool\n) [int]Tests the value passed in for truth (a boolean predicate) and converts the result to its equivalent C true/false integer value. Sets the int to\n1\nif the expression was true and0\nif it was false. This accepts any valid Python value. See Truth Value Testing for more information about how Python tests values for truth.Added in version 3.3.\n(items)\n(sequence) [matching-items]The object must be a Python sequence (except\nstr\n,bytes\norbytearray\n) whose length is the number of format units in items. The C arguments must correspond to the individual format units in items. Format units for sequences may be nested.If items contains format units which store a borrowed buffer (\ns\n,s#\n,z\n,z#\n,y\n, ory#\n) or a borrowed reference (S\n,Y\n,U\n,O\n, orO!\n), the object must be a Python tuple. The converter for theO&\nformat unit in items must not store a borrowed buffer or a borrowed reference.Deprecated since version 3.14: Non-tuple sequences are deprecated if items contains format units which store a borrowed buffer or a borrowed reference.\nA few other characters have a meaning in a format string. These may not occur inside nested parentheses. They are:\n|\nIndicates that the remaining arguments in the Python argument list are optional. 
The C variables corresponding to optional arguments should be initialized to their default value \u2014 when an optional argument is not specified,\nPyArg_ParseTuple()\ndoes not touch the contents of the corresponding C variable(s).$\nPyArg_ParseTupleAndKeywords()\nonly: Indicates that the remaining arguments in the Python argument list are keyword-only. Currently, all keyword-only arguments must also be optional arguments, so|\nmust always be specified before$\nin the format string.Added in version 3.3.\n:\nThe list of format units ends here; the string after the colon is used as the function name in error messages (the \u201cassociated value\u201d of the exception that\nPyArg_ParseTuple()\nraises).;\nThe list of format units ends here; the string after the semicolon is used as the error message instead of the default error message.\n:\nand;\nmutually exclude each other.\nNote that any Python object references which are provided to the caller are borrowed references; do not release them (i.e. do not decrement their reference count)!\nAdditional arguments passed to these functions must be addresses of variables whose type is determined by the format string; these are used to store values from the input tuple. There are a few cases, as described in the list of format units above, where these parameters are used as input values; they should match what is specified for the corresponding format unit in that case.\nFor the conversion to succeed, the arg object must match the format\nand the format must be exhausted. On success, the\nPyArg_Parse*\nfunctions return true, otherwise they return\nfalse and raise an appropriate exception. 
When the\nPyArg_Parse*\nfunctions fail due to conversion failure in one\nof the format units, the variables at the addresses corresponding to that\nand the following format units are left untouched.\nAPI Functions\u00b6\n-\nint PyArg_ParseTuple(PyObject *args, const char *format, ...)\u00b6\n- Part of the Stable ABI.\nParse the parameters of a function that takes only positional parameters into local variables. Returns true on success; on failure, it returns false and raises the appropriate exception.\n-\nint PyArg_VaParse(PyObject *args, const char *format, va_list vargs)\u00b6\n- Part of the Stable ABI.\nIdentical to\nPyArg_ParseTuple()\n, except that it accepts a va_list rather than a variable number of arguments.\n-\nint PyArg_ParseTupleAndKeywords(PyObject *args, PyObject *kw, const char *format, char *const *keywords, ...)\u00b6\n- Part of the Stable ABI.\nParse the parameters of a function that takes both positional and keyword parameters into local variables. The keywords argument is a\nNULL\n-terminated array of keyword parameter names specified as null-terminated ASCII or UTF-8 encoded C strings. Empty names denote positional-only parameters. Returns true on success; on failure, it returns false and raises the appropriate exception.Note\nThe keywords parameter declaration is char *const* in C and const char *const* in C++. This can be overridden with the\nPY_CXX_CONST\nmacro.Changed in version 3.6: Added support for positional-only parameters.\nChanged in version 3.13: The keywords parameter has now type char *const* in C and const char *const* in C++, instead of char**. 
Added support for non-ASCII keyword parameter names.\n-\nint PyArg_VaParseTupleAndKeywords(PyObject *args, PyObject *kw, const char *format, char *const *keywords, va_list vargs)\u00b6\n- Part of the Stable ABI.\nIdentical to\nPyArg_ParseTupleAndKeywords()\n, except that it accepts a va_list rather than a variable number of arguments.\n-\nint PyArg_ValidateKeywordArguments(PyObject*)\u00b6\n- Part of the Stable ABI.\nEnsure that the keys in the keywords argument dictionary are strings. This is only needed if\nPyArg_ParseTupleAndKeywords()\nis not used, since the latter already does this check.Added in version 3.2.\n-\nint PyArg_Parse(PyObject *args, const char *format, ...)\u00b6\n- Part of the Stable ABI.\nParse the parameter of a function that takes a single positional parameter into a local variable. Returns true on success; on failure, it returns false and raises the appropriate exception.\nExample:\n// Function using METH_O calling convention static PyObject* my_function(PyObject *module, PyObject *arg) { int value; if (!PyArg_Parse(arg, \"i:my_function\", &value)) { return NULL; } // ... use value ... }\n-\nint PyArg_UnpackTuple(PyObject *args, const char *name, Py_ssize_t min, Py_ssize_t max, ...)\u00b6\n- Part of the Stable ABI.\nA simpler form of parameter retrieval which does not use a format string to specify the types of the arguments. Functions which use this method to retrieve their parameters should be declared as\nMETH_VARARGS\nin function or method tables. The tuple containing the actual parameters should be passed as args; it must actually be a tuple. The length of the tuple must be at least min and no more than max; min and max may be equal. Additional arguments must be passed to the function, each of which should be a pointer to a PyObject* variable; these will be filled in with the values from args; they will contain borrowed references. 
The variables which correspond to optional parameters not given by args will not be filled in; these should be initialized by the caller. This function returns true on success and false if args is not a tuple or contains the wrong number of elements; an exception will be set if there was a failure.This is an example of the use of this function, taken from the sources for the\n_weakref\nhelper module for weak references:static PyObject * weakref_ref(PyObject *self, PyObject *args) { PyObject *object; PyObject *callback = NULL; PyObject *result = NULL; if (PyArg_UnpackTuple(args, \"ref\", 1, 2, &object, &callback)) { result = PyWeakref_NewRef(object, callback); } return result; }\nThe call to\nPyArg_UnpackTuple()\nin this example is entirely equivalent to this call toPyArg_ParseTuple()\n:PyArg_ParseTuple(args, \"O|O:ref\", &object, &callback)\n-\nPY_CXX_CONST\u00b6\nThe value to be inserted, if any, before char *const* in the keywords parameter declaration of\nPyArg_ParseTupleAndKeywords()\nandPyArg_VaParseTupleAndKeywords()\n. Default empty for C andconst\nfor C++ (const char *const*). To override, define it to the desired value before includingPython.h\n.Added in version 3.13.\nBuilding values\u00b6\n-\nPyObject *Py_BuildValue(const char *format, ...)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a new value based on a format string similar to those accepted by the\nPyArg_Parse*\nfamily of functions and a sequence of values. Returns the value orNULL\nin the case of an error; an exception will be raised ifNULL\nis returned.Py_BuildValue()\ndoes not always build a tuple. It builds a tuple only if its format string contains two or more format units. If the format string is empty, it returnsNone\n; if it contains exactly one format unit, it returns whatever object is described by that format unit. 
To force it to return a tuple of size 0 or one, parenthesize the format string.When memory buffers are passed as parameters to supply data to build objects, as for the\ns\nands#\nformats, the required data is copied. Buffers provided by the caller are never referenced by the objects created byPy_BuildValue()\n. In other words, if your code invokesmalloc()\nand passes the allocated memory toPy_BuildValue()\n, your code is responsible for callingfree()\nfor that memory oncePy_BuildValue()\nreturns.In the following description, the quoted form is the format unit; the entry in (round) parentheses is the Python object type that the format unit will return; and the entry in [square] brackets is the type of the C value(s) to be passed.\nThe characters space, tab, colon and comma are ignored in format strings (but not within format units such as\ns#\n). This can be used to make long format strings a tad more readable.s\n(str\norNone\n) [const char *]Convert a null-terminated C string to a Python\nstr\nobject using'utf-8'\nencoding. If the C string pointer isNULL\n,None\nis used.s#\n(str\norNone\n) [const char *,Py_ssize_t\n]Convert a C string and its length to a Python\nstr\nobject using'utf-8'\nencoding. If the C string pointer isNULL\n, the length is ignored andNone\nis returned.y\n(bytes\n) [const char *]This converts a C string to a Python\nbytes\nobject. If the C string pointer isNULL\n,None\nis returned.y#\n(bytes\n) [const char *,Py_ssize_t\n]This converts a C string and its lengths to a Python object. If the C string pointer is\nNULL\n,None\nis returned.z\n(str\norNone\n) [const char *]Same as\ns\n.z#\n(str\norNone\n) [const char *,Py_ssize_t\n]Same as\ns#\n.u\n(str\n) [const wchar_t *]Convert a null-terminated\nwchar_t\nbuffer of Unicode (UTF-16 or UCS-4) data to a Python Unicode object. 
If the Unicode buffer pointer isNULL\n,None\nis returned.u#\n(str\n) [const wchar_t *,Py_ssize_t\n]Convert a Unicode (UTF-16 or UCS-4) data buffer and its length to a Python Unicode object. If the Unicode buffer pointer is\nNULL\n, the length is ignored andNone\nis returned.U\n(str\norNone\n) [const char *]Same as\ns\n.U#\n(str\norNone\n) [const char *,Py_ssize_t\n]Same as\ns#\n.i\n(int\n) [int]Convert a plain C int to a Python integer object.\nb\n(int\n) [char]Convert a plain C char to a Python integer object.\nh\n(int\n) [short int]Convert a plain C short int to a Python integer object.\nl\n(int\n) [long int]Convert a C long int to a Python integer object.\nB\n(int\n) [unsigned char]Convert a C unsigned char to a Python integer object.\nH\n(int\n) [unsigned short int]Convert a C unsigned short int to a Python integer object.\nI\n(int\n) [unsigned int]Convert a C unsigned int to a Python integer object.\nk\n(int\n) [unsigned long]Convert a C unsigned long to a Python integer object.\nL\n(int\n) [long long]Convert a C long long to a Python integer object.\nK\n(int\n) [unsigned long long]Convert a C unsigned long long to a Python integer object.\nn\n(int\n) [Py_ssize_t\n]Convert a C\nPy_ssize_t\nto a Python integer.p\n(bool\n) [int]Convert a C int to a Python\nbool\nobject.Be aware that this format requires an\nint\nargument. Unlike most other contexts in C, variadic arguments are not coerced to a suitable type automatically. You can convert another type (for example, a pointer or a float) to a suitableint\nvalue using(x) ? 
1 : 0\nor!!x\n.Added in version 3.14.\nc\n(bytes\nof length 1) [char]Convert a C int representing a byte to a Python\nbytes\nobject of length 1.C\n(str\nof length 1) [int]Convert a C int representing a character to Python\nstr\nobject of length 1.d\n(float\n) [double]Convert a C double to a Python floating-point number.\nf\n(float\n) [float]Convert a C float to a Python floating-point number.\nD\n(complex\n) [Py_complex *]Convert a C\nPy_complex\nstructure to a Python complex number.O\n(object) [PyObject *]Pass a Python object untouched but create a new strong reference to it (i.e. its reference count is incremented by one). If the object passed in is a\nNULL\npointer, it is assumed that this was caused because the call producing the argument found an error and set an exception. Therefore,Py_BuildValue()\nwill returnNULL\nbut won\u2019t raise an exception. If no exception has been raised yet,SystemError\nis set.S\n(object) [PyObject *]Same as\nO\n.N\n(object) [PyObject *]Same as\nO\n, except it doesn\u2019t create a new strong reference. Useful when the object is created by a call to an object constructor in the argument list.O&\n(object) [converter, anything]Convert anything to a Python object through a converter function. The function is called with anything (which should be compatible with void*) as its argument and should return a \u201cnew\u201d Python object, or\nNULL\nif an error occurred.(items)\n(tuple\n) [matching-items]Convert a sequence of C values to a Python tuple with the same number of items.\n[items]\n(list\n) [matching-items]Convert a sequence of C values to a Python list with the same number of items.\n{items}\n(dict\n) [matching-items]Convert a sequence of C values to a Python dictionary. 
Each pair of consecutive C values adds one item to the dictionary, serving as key and value, respectively.\nIf there is an error in the format string, the\nSystemError\nexception is set andNULL\nreturned.\n-\nPyObject *Py_VaBuildValue(const char *format, va_list vargs)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nIdentical to\nPy_BuildValue()\n, except that it accepts a va_list rather than a variable number of arguments.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 6723} +{"url": "https://docs.python.org/3/c-api/slice.html", "title": "Slice Objects", "content": "Slice Objects\u00b6\n-\nPyTypeObject PySlice_Type\u00b6\n- Part of the Stable ABI.\nThe type object for slice objects. This is the same as\nslice\nin the Python layer.\n-\nint PySlice_Check(PyObject *ob)\u00b6\nReturn true if ob is a slice object; ob must not be\nNULL\n. This function always succeeds.\n-\nPyObject *PySlice_New(PyObject *start, PyObject *stop, PyObject *step)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new slice object with the given values. The start, stop, and step parameters are used as the values of the slice object attributes of the same names. Any of the values may be\nNULL\n, in which case theNone\nwill be used for the corresponding attribute.Return\nNULL\nwith an exception set if the new object could not be allocated.\n-\nint PySlice_GetIndices(PyObject *slice, Py_ssize_t length, Py_ssize_t *start, Py_ssize_t *stop, Py_ssize_t *step)\u00b6\n- Part of the Stable ABI.\nRetrieve the start, stop and step indices from the slice object slice, assuming a sequence of length length. 
Treats indices greater than length as errors.\nReturns\n0\non success and-1\non error with no exception set (unless one of the indices was notNone\nand failed to be converted to an integer, in which case-1\nis returned with an exception set).You probably do not want to use this function.\nChanged in version 3.2: The parameter type for the slice parameter was\nPySliceObject*\nbefore.\n-\nint PySlice_GetIndicesEx(PyObject *slice, Py_ssize_t length, Py_ssize_t *start, Py_ssize_t *stop, Py_ssize_t *step, Py_ssize_t *slicelength)\u00b6\n- Part of the Stable ABI.\nUsable replacement for\nPySlice_GetIndices()\n. Retrieve the start, stop, and step indices from the slice object slice assuming a sequence of length length, and store the length of the slice in slicelength. Out of bounds indices are clipped in a manner consistent with the handling of normal slices.Return\n0\non success and-1\non error with an exception set.Note\nThis function is considered not safe for resizable sequences. Its invocation should be replaced by a combination of\nPySlice_Unpack()\nandPySlice_AdjustIndices()\nwhereif (PySlice_GetIndicesEx(slice, length, &start, &stop, &step, &slicelength) < 0) { // return error }\nis replaced by\nif (PySlice_Unpack(slice, &start, &stop, &step) < 0) { // return error } slicelength = PySlice_AdjustIndices(length, &start, &stop, step);\nChanged in version 3.2: The parameter type for the slice parameter was\nPySliceObject*\nbefore.Changed in version 3.6.1: If\nPy_LIMITED_API\nis not set or set to the value between0x03050400\nand0x03060000\n(not including) or0x03060100\nor higherPySlice_GetIndicesEx()\nis implemented as a macro usingPySlice_Unpack()\nandPySlice_AdjustIndices()\n. 
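The clipping that PySlice_Unpack() and PySlice_AdjustIndices() perform mirrors what the Python-level slice.indices() method does, so the semantics can be sanity-checked from the interpreter. This is a sketch of the equivalent Python-level behavior, not the C API itself:

```python
# Python-level sketch: slice.indices(length) clips out-of-bounds start/stop
# values the same way PySlice_GetIndicesEx (i.e. PySlice_Unpack followed by
# PySlice_AdjustIndices) does, and len(range(...)) is the "slicelength".
s = slice(1, 100, 2)
start, stop, step = s.indices(10)        # stop clipped from 100 down to 10
slicelength = len(range(start, stop, step))
print((start, stop, step), slicelength)  # (1, 10, 2) 5

# Negative steps are adjusted consistently with ordinary slicing
print(slice(None, None, -1).indices(5))  # (4, -1, -1)
```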
Arguments start, stop and step are evaluated more than once.Deprecated since version 3.6.1: If\nPy_LIMITED_API\nis set to the value less than0x03050400\nor between0x03060000\nand0x03060100\n(not including)PySlice_GetIndicesEx()\nis a deprecated function.\n-\nint PySlice_Unpack(PyObject *slice, Py_ssize_t *start, Py_ssize_t *stop, Py_ssize_t *step)\u00b6\n- Part of the Stable ABI since version 3.7.\nExtract the start, stop and step data members from a slice object as C integers. Silently reduce values larger than\nPY_SSIZE_T_MAX\ntoPY_SSIZE_T_MAX\n, silently boost the start and stop values less thanPY_SSIZE_T_MIN\ntoPY_SSIZE_T_MIN\n, and silently boost the step values less than-PY_SSIZE_T_MAX\nto-PY_SSIZE_T_MAX\n.Return\n-1\nwith an exception set on error,0\non success.Added in version 3.6.1.\n-\nPy_ssize_t PySlice_AdjustIndices(Py_ssize_t length, Py_ssize_t *start, Py_ssize_t *stop, Py_ssize_t step)\u00b6\n- Part of the Stable ABI since version 3.7.\nAdjust start/end slice indices assuming a sequence of the specified length. Out of bounds indices are clipped in a manner consistent with the handling of normal slices.\nReturn the length of the slice. Always successful. Doesn\u2019t call Python code.\nAdded in version 3.6.1.\nEllipsis Object\u00b6\n-\nPyTypeObject PyEllipsis_Type\u00b6\n- Part of the Stable ABI.\nThe type of Python\nEllipsis\nobject. Same astypes.EllipsisType\nin the Python layer.\n-\nPyObject *Py_Ellipsis\u00b6\nThe Python\nEllipsis\nobject. This object has no methods. LikePy_None\n, it is an immortal singleton object.Changed in version 3.12:\nPy_Ellipsis\nis immortal.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1048} +{"url": "https://docs.python.org/3/howto/regex.html", "title": "Regular Expression HOWTO", "content": "Regular Expression HOWTO\u00b6\n- Author:\nA.M. 
Kuchling \nIntroduction\u00b6\nRegular expressions (called REs, or regexes, or regex patterns) are essentially\na tiny, highly specialized programming language embedded inside Python and made\navailable through the re\nmodule. Using this little language, you specify\nthe rules for the set of possible strings that you want to match; this set might\ncontain English sentences, or e-mail addresses, or TeX commands, or anything you\nlike. You can then ask questions such as \u201cDoes this string match the pattern?\u201d,\nor \u201cIs there a match for the pattern anywhere in this string?\u201d. You can also\nuse REs to modify a string or to split it apart in various ways.\nRegular expression patterns are compiled into a series of bytecodes which are then executed by a matching engine written in C. For advanced use, it may be necessary to pay careful attention to how the engine will execute a given RE, and write the RE in a certain way in order to produce bytecode that runs faster. Optimization isn\u2019t covered in this document, because it requires that you have a good understanding of the matching engine\u2019s internals.\nThe regular expression language is relatively small and restricted, so not all possible string processing tasks can be done using regular expressions. There are also tasks that can be done with regular expressions, but the expressions turn out to be very complicated. In these cases, you may be better off writing Python code to do the processing; while Python code will be slower than an elaborate regular expression, it will also probably be more understandable.\nSimple Patterns\u00b6\nWe\u2019ll start by learning about the simplest possible regular expressions. 
Since regular expressions are used to operate on strings, we\u2019ll begin with the most common task: matching characters.\nFor a detailed explanation of the computer science underlying regular expressions (deterministic and non-deterministic finite automata), you can refer to almost any textbook on writing compilers.\nMatching Characters\u00b6\nMost letters and characters will simply match themselves. For example, the\nregular expression test\nwill match the string test\nexactly. (You can\nenable a case-insensitive mode that would let this RE match Test\nor TEST\nas well; more about this later.)\nThere are exceptions to this rule; some characters are special metacharacters, and don\u2019t match themselves. Instead, they signal that some out-of-the-ordinary thing should be matched, or they affect other portions of the RE by repeating them or changing their meaning. Much of this document is devoted to discussing various metacharacters and what they do.\nHere\u2019s a complete list of the metacharacters; their meanings will be discussed in the rest of this HOWTO.\n. ^ $ * + ? { } [ ] \\ | ( )\nThe first metacharacters we\u2019ll look at are [\nand ]\n. They\u2019re used for\nspecifying a character class, which is a set of characters that you wish to\nmatch. Characters can be listed individually, or a range of characters can be\nindicated by giving two characters and separating them by a '-'\n. For\nexample, [abc]\nwill match any of the characters a\n, b\n, or c\n; this\nis the same as [a-c]\n, which uses a range to express the same set of\ncharacters. If you wanted to match only lowercase letters, your RE would be\n[a-z]\n.\nMetacharacters (except \\\n) are not active inside classes. For example, [akm$]\nwill\nmatch any of the characters 'a'\n, 'k'\n, 'm'\n, or '$'\n; '$'\nis\nusually a metacharacter, but inside a character class it\u2019s stripped of its\nspecial nature.\nYou can match the characters not listed within the class by complementing\nthe set. 
This is indicated by including a '^'\nas the first character of the\nclass. For example, [^5]\nwill match any character except '5'\n. If the\ncaret appears elsewhere in a character class, it does not have special meaning.\nFor example: [5^]\nwill match either a '5'\nor a '^'\n.\nPerhaps the most important metacharacter is the backslash, \\\n. As in Python\nstring literals, the backslash can be followed by various characters to signal\nvarious special sequences. It\u2019s also used to escape all the metacharacters so\nyou can still match them in patterns; for example, if you need to match a [\nor \\\n, you can precede them with a backslash to remove their special\nmeaning: \\[\nor \\\\\n.\nSome of the special sequences beginning with '\\'\nrepresent\npredefined sets of characters that are often useful, such as the set\nof digits, the set of letters, or the set of anything that isn\u2019t\nwhitespace.\nLet\u2019s take an example: \\w\nmatches any alphanumeric character. If\nthe regex pattern is expressed in bytes, this is equivalent to the\nclass [a-zA-Z0-9_]\n. If the regex pattern is a string, \\w\nwill\nmatch all the characters marked as letters in the Unicode database\nprovided by the unicodedata\nmodule. You can use the more\nrestricted definition of \\w\nin a string pattern by supplying the\nre.ASCII\nflag when compiling the regular expression.\nThe following list of special sequences isn\u2019t complete. For a complete list of sequences and expanded class definitions for Unicode string patterns, see the last part of Regular Expression Syntax in the Standard Library reference. 
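The character-class and special-sequence behavior described above is easy to verify interactively; the patterns below are illustrative examples, not taken from the HOWTO itself:

```python
import re

# [abc] and [a-c] describe the same set; inside a class, '$' is literal
assert re.match('[abc]', 'banana') is not None
assert re.search('[akm$]', 'co$t') is not None

# In a str pattern, \w follows the Unicode database...
assert re.match(r'\w', '\u00e9') is not None        # 'é' counts as a letter
# ...unless re.ASCII restricts it to the class [a-zA-Z0-9_]
assert re.match(r'\w', '\u00e9', re.ASCII) is None
```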
In general, the Unicode versions match any character that\u2019s in the appropriate category in the Unicode database.\n\\d\nMatches any decimal digit; this is equivalent to the class\n[0-9]\n.\\D\nMatches any non-digit character; this is equivalent to the class\n[^0-9]\n.\\s\nMatches any whitespace character; this is equivalent to the class\n[ \\t\\n\\r\\f\\v]\n.\\S\nMatches any non-whitespace character; this is equivalent to the class\n[^ \\t\\n\\r\\f\\v]\n.\\w\nMatches any alphanumeric character; this is equivalent to the class\n[a-zA-Z0-9_]\n.\\W\nMatches any non-alphanumeric character; this is equivalent to the class\n[^a-zA-Z0-9_]\n.\nThese sequences can be included inside a character class. For example,\n[\\s,.]\nis a character class that will match any whitespace character, or\n','\nor '.'\n.\nThe final metacharacter in this section is .\n. It matches anything except a\nnewline character, and there\u2019s an alternate mode (re.DOTALL\n) where it will\nmatch even a newline. .\nis often used where you want to match \u201cany\ncharacter\u201d.\nRepeating Things\u00b6\nBeing able to match varying sets of characters is the first thing regular expressions can do that isn\u2019t already possible with the methods available on strings. However, if that was the only additional capability of regexes, they wouldn\u2019t be much of an advance. Another capability is that you can specify that portions of the RE must be repeated a certain number of times.\nThe first metacharacter for repeating things that we\u2019ll look at is *\n. *\ndoesn\u2019t match the literal character '*'\n; instead, it specifies that the\nprevious character can be matched zero or more times, instead of exactly once.\nFor example, ca*t\nwill match 'ct'\n(0 'a'\ncharacters), 'cat'\n(1 'a'\n),\n'caaat'\n(3 'a'\ncharacters), and so forth.\nRepetitions such as *\nare greedy; when repeating a RE, the matching\nengine will try to repeat it as many times as possible. 
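Greedy repetition with * can be observed directly in the interpreter; a small illustrative sketch:

```python
import re

# ca*t accepts zero or more 'a' characters between 'c' and 't'
assert re.match('ca*t', 'ct').group() == 'ct'
assert re.match('ca*t', 'caaat').group() == 'caaat'

# a* on its own grabs as many characters as it can (greedy)
assert re.match('a*', 'aaab').group() == 'aaa'
```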
If later portions of the\npattern don\u2019t match, the matching engine will then back up and try again with\nfewer repetitions.\nA step-by-step example will make this more obvious. Let\u2019s consider the\nexpression a[bcd]*b\n. This matches the letter 'a'\n, zero or more letters\nfrom the class [bcd]\n, and finally ends with a 'b'\n. Now imagine matching\nthis RE against the string 'abcbd'\n.\nStep |\nMatched |\nExplanation |\n|---|---|---|\n1 |\na |\nThe a in the RE matches. |\n2 |\nabcbd |\nThe engine matches [bcd]*, going as far as it can, which is to the end of the string. |\n3 |\nFailure |\nThe engine tries to match b, but the current position is at the end of the string, so it fails. |\n4 |\nabcb |\nBack up, so that [bcd]* matches one less character. |\n5 |\nFailure |\nTry b again, but the current position is at the last character, which is a 'd'. |\n6 |\nabc |\nBack up again, so that [bcd]* is only matching bc. |\n6 |\nabcb |\nTry b again. This time the character at the current position is 'b', so it succeeds. |\nThe end of the RE has now been reached, and it has matched 'abcb'\n. This\ndemonstrates how the matching engine goes as far as it can at first, and if no\nmatch is found it will then progressively back up and retry the rest of the RE\nagain and again. It will back up until it has tried zero matches for\n[bcd]*\n, and if that subsequently fails, the engine will conclude that the\nstring doesn\u2019t match the RE at all.\nAnother repeating metacharacter is +\n, which matches one or more times. Pay\ncareful attention to the difference between *\nand +\n; *\nmatches\nzero or more times, so whatever\u2019s being repeated may not be present at all,\nwhile +\nrequires at least one occurrence. To use a similar example,\nca+t\nwill match 'cat'\n(1 'a'\n), 'caaat'\n(3 'a'\ns), but won\u2019t\nmatch 'ct'\n.\nThere are two more repeating operators or quantifiers. The question mark character, ?\n,\nmatches either once or zero times; you can think of it as marking something as\nbeing optional. For example, home-?brew\nmatches either 'homebrew'\nor\n'home-brew'\n.\nThe most complicated quantifier is {m,n}\n, where m and n are\ndecimal integers. This quantifier means there must be at least m repetitions,\nand at most n. For example, a/{1,3}b\nwill match 'a/b'\n, 'a//b'\n, and\n'a///b'\n. 
It won\u2019t match 'ab'\n, which has no slashes, or 'a////b'\n, which\nhas four.\nYou can omit either m or n; in that case, a reasonable value is assumed for the missing value. Omitting m is interpreted as a lower limit of 0, while omitting n results in an upper bound of infinity.\nThe simplest case {m}\nmatches the preceding item exactly m times.\nFor example, a/{2}b\nwill only match 'a//b'\n.\nReaders of a reductionist bent may notice that the three other quantifiers can\nall be expressed using this notation. {0,}\nis the same as *\n, {1,}\nis equivalent to +\n, and {0,1}\nis the same as ?\n. It\u2019s better to use\n*\n, +\n, or ?\nwhen you can, simply because they\u2019re shorter and easier\nto read.\nUsing Regular Expressions\u00b6\nNow that we\u2019ve looked at some simple regular expressions, how do we actually use\nthem in Python? The re\nmodule provides an interface to the regular\nexpression engine, allowing you to compile REs into objects and then perform\nmatches with them.\nCompiling Regular Expressions\u00b6\nRegular expressions are compiled into pattern objects, which have methods for various operations such as searching for pattern matches or performing string substitutions.\n>>> import re\n>>> p = re.compile('ab*')\n>>> p\nre.compile('ab*')\nre.compile()\nalso accepts an optional flags argument, used to enable\nvarious special features and syntax variations. We\u2019ll go over the available\nsettings later, but for now a single example will do:\n>>> p = re.compile('ab*', re.IGNORECASE)\nThe RE is passed to re.compile()\nas a string. REs are handled as strings\nbecause regular expressions aren\u2019t part of the core Python language, and no\nspecial syntax was created for expressing them. (There are applications that\ndon\u2019t need REs at all, so there\u2019s no need to bloat the language specification by\nincluding them.) 
Instead, the re\nmodule is simply a C extension module\nincluded with Python, just like the socket\nor zlib\nmodules.\nPutting REs in strings keeps the Python language simpler, but has one disadvantage which is the topic of the next section.\nThe Backslash Plague\u00b6\nAs stated earlier, regular expressions use the backslash character ('\\'\n) to\nindicate special forms or to allow special characters to be used without\ninvoking their special meaning. This conflicts with Python\u2019s usage of the same\ncharacter for the same purpose in string literals.\nLet\u2019s say you want to write a RE that matches the string \\section\n, which\nmight be found in a LaTeX file. To figure out what to write in the program\ncode, start with the desired string to be matched. Next, you must escape any\nbackslashes and other metacharacters by preceding them with a backslash,\nresulting in the string \\\\section\n. The resulting string that must be passed\nto re.compile()\nmust be \\\\section\n. However, to express this as a\nPython string literal, both backslashes must be escaped again.\nCharacters |\nStage |\n|---|---|\n\\section |\nText string to be matched |\n\\\\section |\nEscaped backslash for re.compile() |\n\"\\\\\\\\section\" |\nEscaped backslashes for a string literal |\nIn short, to match a literal backslash, one has to write '\\\\\\\\'\nas the RE\nstring, because the regular expression must be \\\\\n, and each backslash must\nbe expressed as \\\\\ninside a regular Python string literal. In REs that\nfeature backslashes repeatedly, this leads to lots of repeated backslashes and\nmakes the resulting strings difficult to understand.\nThe solution is to use Python\u2019s raw string notation for regular expressions;\nbackslashes are not handled in any special way in a string literal prefixed with\n'r'\n, so r\"\\n\"\nis a two-character string containing '\\'\nand 'n'\n,\nwhile \"\\n\"\nis a one-character string containing a newline. 
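The difference between cooked and raw string literals, and its effect on matching a literal backslash, can be checked interactively; the \section example below follows the text above:

```python
import re

assert len("\n") == 1    # cooked: one character, a newline
assert len(r"\n") == 2   # raw: a backslash followed by 'n'

# To match the literal text \section the RE must be \\section,
# which the raw-string literal r'\\section' expresses directly.
m = re.search(r'\\section', r'\section{Intro}')
assert m.group() == '\\section'
```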
Regular\nexpressions will often be written in Python code using this raw string notation.\nIn addition, special escape sequences that are valid in regular expressions,\nbut not valid as Python string literals, now result in a\nDeprecationWarning\nand will eventually become a SyntaxError\n,\nwhich means the sequences will be invalid if raw string notation or escaping\nthe backslashes isn\u2019t used.\nRegular String |\nRaw string |\n|---|---|\n\"ab*\" |\nr\"ab*\" |\n\"\\\\\\\\section\" |\nr\"\\\\section\" |\n\"\\\\w+\\\\s+\\\\1\" |\nr\"\\w+\\s+\\1\" |\nPerforming Matches\u00b6\nOnce you have an object representing a compiled regular expression, what do you\ndo with it? Pattern objects have several methods and attributes.\nOnly the most significant ones will be covered here; consult the re\ndocs\nfor a complete listing.\nMethod/Attribute |\nPurpose |\n|---|---|\nmatch() |\nDetermine if the RE matches at the beginning of the string. |\nsearch() |\nScan through a string, looking for any location where this RE matches. |\nfindall() |\nFind all substrings where the RE matches, and returns them as a list. |\nfinditer() |\nFind all substrings where the RE matches, and returns them as an iterator. |\nmatch()\nand search()\nreturn None\nif no match can be found. If\nthey\u2019re successful, a match object instance is returned,\ncontaining information about the match: where it starts and ends, the substring\nit matched, and more.\nYou can learn about this by interactively experimenting with the re\nmodule.\nThis HOWTO uses the standard Python interpreter for its examples. First, run the\nPython interpreter, import the re\nmodule, and compile a RE:\n>>> import re\n>>> p = re.compile('[a-z]+')\n>>> p\nre.compile('[a-z]+')\nNow, you can try matching various strings against the RE [a-z]+\n. An empty\nstring shouldn\u2019t match at all, since +\nmeans \u2018one or more repetitions\u2019.\nmatch()\nshould return None\nin this case, which will cause the\ninterpreter to print no output. 
You can explicitly print the result of\nmatch()\nto make this clear.\n>>> p.match(\"\")\n>>> print(p.match(\"\"))\nNone\nNow, let\u2019s try it on a string that it should match, such as tempo\n. In this\ncase, match()\nwill return a match object, so you\nshould store the result in a variable for later use.\n>>> m = p.match('tempo')\n>>> m\n<re.Match object; span=(0, 5), match='tempo'>\nNow you can query the match object for information about the matching string. Match object instances also have several methods and attributes; the most important ones are:\nMethod/Attribute |\nPurpose |\n|---|---|\ngroup() |\nReturn the string matched by the RE |\nstart() |\nReturn the starting position of the match |\nend() |\nReturn the ending position of the match |\nspan() |\nReturn a tuple containing the (start, end) positions of the match |\nTrying these methods will soon clarify their meaning:\n>>> m.group()\n'tempo'\n>>> m.start(), m.end()\n(0, 5)\n>>> m.span()\n(0, 5)\ngroup()\nreturns the substring that was matched by the RE. start()\nand end()\nreturn the starting and ending index of the match. span()\nreturns both start and end indexes in a single tuple. Since the match()\nmethod only checks if the RE matches at the start of a string, start()\nwill always be zero. However, the search()\nmethod of patterns\nscans through the string, so the match may not start at zero in that\ncase.\n>>> print(p.match('::: message'))\nNone\n>>> m = p.search('::: message'); print(m)\n<re.Match object; span=(4, 11), match='message'>\n>>> m.group()\n'message'\n>>> m.span()\n(4, 11)\nIn actual programs, the most common style is to store the\nmatch object in a variable, and then check if it was\nNone\n. This usually looks like:\np = re.compile( ... 
)\nm = p.match( 'string goes here' )\nif m:\nprint('Match found: ', m.group())\nelse:\nprint('No match')\nTwo pattern methods return all of the matches for a pattern.\nfindall()\nreturns a list of matching strings:\n>>> p = re.compile(r'\\d+')\n>>> p.findall('12 drummers drumming, 11 pipers piping, 10 lords a-leaping')\n['12', '11', '10']\nThe r\nprefix, making the literal a raw string literal, is needed in this\nexample because escape sequences in a normal \u201ccooked\u201d string literal that are\nnot recognized by Python, as opposed to regular expressions, now result in a\nDeprecationWarning\nand will eventually become a SyntaxError\n. See\nThe Backslash Plague.\nfindall()\nhas to create the entire list before it can be returned as the\nresult. The finditer()\nmethod returns a sequence of\nmatch object instances as an iterator:\n>>> iterator = p.finditer('12 drummers drumming, 11 ... 10 ...')\n>>> iterator\n\n>>> for match in iterator:\n... print(match.span())\n...\n(0, 2)\n(22, 24)\n(29, 31)\nModule-Level Functions\u00b6\nYou don\u2019t have to create a pattern object and call its methods; the\nre\nmodule also provides top-level functions called match()\n,\nsearch()\n, findall()\n, sub()\n, and so forth. These functions\ntake the same arguments as the corresponding pattern method with\nthe RE string added as the first argument, and still return either None\nor a\nmatch object instance.\n>>> print(re.match(r'From\\s+', 'Fromage amk'))\nNone\n>>> re.match(r'From\\s+', 'From amk Thu May 14 19:12:10 1998')\n\nUnder the hood, these functions simply create a pattern object for you and call the appropriate method on it. They also store the compiled object in a cache, so future calls using the same RE won\u2019t need to parse the pattern again and again.\nShould you use these module-level functions, or should you get the pattern and call its methods yourself? If you\u2019re accessing a regex within a loop, pre-compiling it will save a few function calls. 
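The two styles behave identically; a short comparison with an illustrative pattern:

```python
import re

# module-level function: the pattern is compiled and cached internally
assert re.search(r'\d+', 'room 101').group() == '101'

# explicit pattern object: same result, saves the lookup inside loops
pat = re.compile(r'\d+')
assert pat.search('room 101').group() == '101'
```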
Outside of loops, there\u2019s not much difference thanks to the internal cache.\nCompilation Flags\u00b6\nCompilation flags let you modify some aspects of how regular expressions work.\nFlags are available in the re\nmodule under two names, a long name such as\nIGNORECASE\nand a short, one-letter form such as I\n. (If you\u2019re\nfamiliar with Perl\u2019s pattern modifiers, the one-letter forms use the same\nletters; the short form of re.VERBOSE\nis re.X\n, for example.)\nMultiple flags can be specified by bitwise OR-ing them; re.I | re.M\nsets\nboth the I\nand M\nflags, for example.\nHere\u2019s a table of the available flags, followed by a more detailed explanation of each one.\nFlag |\nMeaning |\n|---|---|\nASCII, A |\nMakes several escapes like \\w, \\b, \\s and \\d match only on ASCII characters with the respective property. |\nDOTALL, S |\nMake . match any character, including newlines. |\nIGNORECASE, I |\nDo case-insensitive matches. |\nLOCALE, L |\nDo a locale-aware match. |\nMULTILINE, M |\nMulti-line matching, affecting ^ and $. |\nVERBOSE, X |\nEnable verbose REs, which can be organized more cleanly and understandably. |\n- re.I\n- re.IGNORECASE\nPerform case-insensitive matching; character class and literal strings will match letters by ignoring case. For example,\n[A-Z]\nwill match lowercase letters, too. Full Unicode matching also works unless the ASCII\nflag is used to disable non-ASCII matches. When the Unicode patterns [a-z]\nor [A-Z]\nare used in combination with the IGNORECASE\nflag, they will match the 52 ASCII letters and 4 additional non-ASCII letters: \u2018\u0130\u2019 (U+0130, Latin capital letter I with dot above), \u2018\u0131\u2019 (U+0131, Latin small letter dotless i), \u2018\u017f\u2019 (U+017F, Latin small letter long s) and \u2018K\u2019 (U+212A, Kelvin sign). Spam\nwill match 'Spam'\n, 'spam'\n, 'spAM'\n, or '\u017fpam'\n(the latter is matched only in Unicode mode). 
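A quick demonstration of re.IGNORECASE with an illustrative pattern:

```python
import re

p = re.compile('spam', re.IGNORECASE)
assert p.match('SpAM') is not None

# character classes ignore case too under re.I
assert re.match('[a-z]+', 'HELLO', re.I).group() == 'HELLO'
```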
This lowercasing doesn\u2019t take the current locale into account; it will if you also set theLOCALE\nflag.\n- re.L\n- re.LOCALE\nMake\n\\w\n,\\W\n,\\b\n,\\B\nand case-insensitive matching dependent on the current locale instead of the Unicode database.Locales are a feature of the C library intended to help in writing programs that take account of language differences. For example, if you\u2019re processing encoded French text, you\u2019d want to be able to write\n\\w+\nto match words, but\\w\nonly matches the character class[A-Za-z]\nin bytes patterns; it won\u2019t match bytes corresponding to\u00e9\nor\u00e7\n. If your system is configured properly and a French locale is selected, certain C functions will tell the program that the byte corresponding to\u00e9\nshould also be considered a letter. Setting theLOCALE\nflag when compiling a regular expression will cause the resulting compiled object to use these C functions for\\w\n; this is slower, but also enables\\w+\nto match French words as you\u2019d expect. The use of this flag is discouraged in Python 3 as the locale mechanism is very unreliable, it only handles one \u201cculture\u201d at a time, and it only works with 8-bit locales. Unicode matching is already enabled by default in Python 3 for Unicode (str) patterns, and it is able to handle different locales/languages.\n- re.M\n- re.MULTILINE\n(\n^\nand$\nhaven\u2019t been explained yet; they\u2019ll be introduced in section More Metacharacters.)Usually\n^\nmatches only at the beginning of the string, and$\nmatches only at the end of the string and immediately before the newline (if any) at the end of the string. When this flag is specified,^\nmatches at the beginning of the string and at the beginning of each line within the string, immediately following each newline. 
Similarly, the$\nmetacharacter matches either at the end of the string and at the end of each line (immediately preceding each newline).\n- re.S\n- re.DOTALL\nMakes the\n'.'\nspecial character match any character at all, including a newline; without this flag,'.'\nwill match anything except a newline.\n- re.A\n- re.ASCII\nMake\n\\w\n,\\W\n,\\b\n,\\B\n,\\s\nand\\S\nperform ASCII-only matching instead of full Unicode matching. This is only meaningful for Unicode patterns, and is ignored for byte patterns.\n- re.X\n- re.VERBOSE\nThis flag allows you to write regular expressions that are more readable by granting you more flexibility in how you can format them. When this flag has been specified, whitespace within the RE string is ignored, except when the whitespace is in a character class or preceded by an unescaped backslash; this lets you organize and indent the RE more clearly. This flag also lets you put comments within a RE that will be ignored by the engine; comments are marked by a\n'#'\nthat\u2019s neither in a character class or preceded by an unescaped backslash.For example, here\u2019s a RE that uses\nre.VERBOSE\n; see how much easier it is to read?charref = re.compile(r\"\"\" &[#] # Start of a numeric entity reference ( 0[0-7]+ # Octal form | [0-9]+ # Decimal form | x[0-9a-fA-F]+ # Hexadecimal form ) ; # Trailing semicolon \"\"\", re.VERBOSE)\nWithout the verbose setting, the RE would look like this:\ncharref = re.compile(\"&#(0[0-7]+\" \"|[0-9]+\" \"|x[0-9a-fA-F]+);\")\nIn the above example, Python\u2019s automatic concatenation of string literals has been used to break up the RE into smaller pieces, but it\u2019s still more difficult to understand than the version using\nre.VERBOSE\n.\nMore Pattern Power\u00b6\nSo far we\u2019ve only covered a part of the features of regular expressions. 
In this section, we\u2019ll cover some new metacharacters, and how to use groups to retrieve portions of the text that was matched.\nMore Metacharacters\u00b6\nThere are some metacharacters that we haven\u2019t covered yet. Most of them will be covered in this section.\nSome of the remaining metacharacters to be discussed are zero-width\nassertions. They don\u2019t cause the engine to advance through the string;\ninstead, they consume no characters at all, and simply succeed or fail. For\nexample, \\b\nis an assertion that the current position is located at a word\nboundary; the position isn\u2019t changed by the \\b\nat all. This means that\nzero-width assertions should never be repeated, because if they match once at a\ngiven location, they can obviously be matched an infinite number of times.\n|\nAlternation, or the \u201cor\u201d operator. If A and B are regular expressions,\nA|B\nwill match any string that matches either A or B.|\nhas very low precedence in order to make it work reasonably when you\u2019re alternating multi-character strings.Crow|Servo\nwill match either'Crow'\nor'Servo'\n, not'Cro'\n, a'w'\nor an'S'\n, and'ervo'\n.To match a literal\n'|'\n, use\\|\n, or enclose it inside a character class, as in[|]\n.^\nMatches at the beginning of lines. Unless the\nMULTILINE\nflag has been set, this will only match at the beginning of the string. 
InMULTILINE\nmode, this also matches immediately after each newline within the string.For example, if you wish to match the word\nFrom\nonly at the beginning of a line, the RE to use is^From\n.>>> print(re.search('^From', 'From Here to Eternity')) >>> print(re.search('^From', 'Reciting From Memory')) None\nTo match a literal\n'^'\n, use\\^\n.$\nMatches at the end of a line, which is defined as either the end of the string, or any location followed by a newline character.\n>>> print(re.search('}$', '{block}')) >>> print(re.search('}$', '{block} ')) None >>> print(re.search('}$', '{block}\\n')) \nTo match a literal\n'$'\n, use\\$\nor enclose it inside a character class, as in[$]\n.\\A\nMatches only at the start of the string. When not in\nMULTILINE\nmode,\\A\nand^\nare effectively the same. InMULTILINE\nmode, they\u2019re different:\\A\nstill matches only at the beginning of the string, but^\nmay match at any location inside the string that follows a newline character.\\z\nMatches only at the end of the string.\n\\Z\nThe same as\n\\z\n. For compatibility with old Python versions.\\b\nWord boundary. This is a zero-width assertion that matches only at the beginning or end of a word. A word is defined as a sequence of alphanumeric characters, so the end of a word is indicated by whitespace or a non-alphanumeric character.\nThe following example matches\nclass\nonly when it\u2019s a complete word; it won\u2019t match when it\u2019s contained inside another word.>>> p = re.compile(r'\\bclass\\b') >>> print(p.search('no class at all')) >>> print(p.search('the declassified algorithm')) None >>> print(p.search('one subclass is')) None\nThere are two subtleties you should remember when using this special sequence. First, this is the worst collision between Python\u2019s string literals and regular expression sequences. In Python\u2019s string literals,\n\\b\nis the backspace character, ASCII value 8. 
If you\u2019re not using raw strings, then Python will convert the\\b\nto a backspace, and your RE won\u2019t match as you expect it to. The following example looks the same as our previous RE, but omits the'r'\nin front of the RE string.>>> p = re.compile('\\bclass\\b') >>> print(p.search('no class at all')) None >>> print(p.search('\\b' + 'class' + '\\b')) \nSecond, inside a character class, where there\u2019s no use for this assertion,\n\\b\nrepresents the backspace character, for compatibility with Python\u2019s string literals.\\B\nAnother zero-width assertion, this is the opposite of\n\\b\n, only matching when the current position is not at a word boundary.\nGrouping\u00b6\nFrequently you need to obtain more information than just whether the RE matched\nor not. Regular expressions are often used to dissect strings by writing a RE\ndivided into several subgroups which match different components of interest.\nFor example, an RFC-822 header line is divided into a header name and a value,\nseparated by a ':'\n, like this:\nFrom: author@example.com\nUser-Agent: Thunderbird 1.5.0.9 (X11/20061227)\nMIME-Version: 1.0\nTo: editor@example.com\nThis can be handled by writing a regular expression which matches an entire header line, and has one group which matches the header name, and another group which matches the header\u2019s value.\nGroups are marked by the '('\n, ')'\nmetacharacters. '('\nand ')'\nhave much the same meaning as they do in mathematical expressions; they group\ntogether the expressions contained inside them, and you can repeat the contents\nof a group with a quantifier, such as *\n, +\n, ?\n, or\n{m,n}\n. For example, (ab)*\nwill match zero or more repetitions of\nab\n.\n>>> p = re.compile('(ab)*')\n>>> print(p.match('ababababab').span())\n(0, 10)\nGroups indicated with '('\n, ')'\nalso capture the starting and ending\nindex of the text that they match; this can be retrieved by passing an argument\nto group()\n, start()\n, end()\n, and\nspan()\n. 
Groups are\nnumbered starting with 0. Group 0 is always present; it\u2019s the whole RE, so\nmatch object methods all have group 0 as their default\nargument. Later we\u2019ll see how to express groups that don\u2019t capture the span\nof text that they match.\n>>> p = re.compile('(a)b')\n>>> m = p.match('ab')\n>>> m.group()\n'ab'\n>>> m.group(0)\n'ab'\nSubgroups are numbered from left to right, from 1 upward. Groups can be nested; to determine the number, just count the opening parenthesis characters, going from left to right.\n>>> p = re.compile('(a(b)c)d')\n>>> m = p.match('abcd')\n>>> m.group(0)\n'abcd'\n>>> m.group(1)\n'abc'\n>>> m.group(2)\n'b'\ngroup()\ncan be passed multiple group numbers at a time, in which case it\nwill return a tuple containing the corresponding values for those groups.\n>>> m.group(2,1,2)\n('b', 'abc', 'b')\nThe groups()\nmethod returns a tuple containing the strings for all the\nsubgroups, from 1 up to however many there are.\n>>> m.groups()\n('abc', 'b')\nBackreferences in a pattern allow you to specify that the contents of an earlier\ncapturing group must also be found at the current location in the string. For\nexample, \\1\nwill succeed if the exact contents of group 1 can be found at\nthe current position, and fails otherwise. 
Remember that Python\u2019s string\nliterals also use a backslash followed by numbers to allow including arbitrary\ncharacters in a string, so be sure to use a raw string when incorporating\nbackreferences in a RE.\nFor example, the following RE detects doubled words in a string.\n>>> p = re.compile(r'\\b(\\w+)\\s+\\1\\b')\n>>> p.search('Paris in the the spring').group()\n'the the'\nBackreferences like this aren\u2019t often useful for just searching through a string \u2014 there are few text formats which repeat data in this way \u2014 but you\u2019ll soon find out that they\u2019re very useful when performing string substitutions.\nNon-capturing and Named Groups\u00b6\nElaborate REs may use many groups, both to capture substrings of interest, and to group and structure the RE itself. In complex REs, it becomes difficult to keep track of the group numbers. There are two features which help with this problem. Both of them use a common syntax for regular expression extensions, so we\u2019ll look at that first.\nPerl 5 is well known for its powerful additions to standard regular expressions.\nFor these new features the Perl developers couldn\u2019t choose new single-keystroke metacharacters\nor new special sequences beginning with \\\nwithout making Perl\u2019s regular\nexpressions confusingly different from standard REs. If they chose &\nas a\nnew metacharacter, for example, old expressions would be assuming that &\nwas\na regular character and wouldn\u2019t have escaped it by writing \\&\nor [&]\n.\nThe solution chosen by the Perl developers was to use (?...)\nas the\nextension syntax. ?\nimmediately after a parenthesis was a syntax error\nbecause the ?\nwould have nothing to repeat, so this didn\u2019t introduce any\ncompatibility problems. 
The characters immediately after the ? indicate what extension is being used, so (?=foo) is one thing (a positive lookahead assertion) and (?:foo) is something else (a non-capturing group containing the subexpression foo).

Python supports several of Perl's extensions and adds an extension syntax to Perl's extension syntax. If the first character after the question mark is a P, you know that it's an extension that's specific to Python.

Now that we've looked at the general extension syntax, we can return to the features that simplify working with groups in complex REs.

Sometimes you'll want to use a group to denote a part of a regular expression, but aren't interested in retrieving the group's contents. You can make this fact explicit by using a non-capturing group: (?:...), where you can replace the ... with any other regular expression.

>>> m = re.match("([abc])+", "abc")
>>> m.groups()
('c',)
>>> m = re.match("(?:[abc])+", "abc")
>>> m.groups()
()

Except for the fact that you can't retrieve the contents of what the group matched, a non-capturing group behaves exactly the same as a capturing group; you can put anything inside it, repeat it with a repetition metacharacter such as *, and nest it within other groups (capturing or non-capturing). (?:...) is particularly useful when modifying an existing pattern, since you can add new groups without changing how all the other groups are numbered. It should be mentioned that there's no performance difference in searching between capturing and non-capturing groups; neither form is any faster than the other.

A more significant feature is named groups: instead of referring to them by numbers, groups can be referenced by a name.

The syntax for a named group is one of the Python-specific extensions: (?P<name>...). name is, obviously, the name of the group.
Named groups behave exactly like capturing groups, and additionally associate a name with a group. The match object methods that deal with capturing groups all accept either integers that refer to the group by number or strings that contain the desired group's name. Named groups are still given numbers, so you can retrieve information about a group in two ways:

>>> p = re.compile(r'(?P<word>\b\w+\b)')
>>> m = p.search( '(((( Lots of punctuation )))' )
>>> m.group('word')
'Lots'
>>> m.group(1)
'Lots'

Additionally, you can retrieve named groups as a dictionary with groupdict():

>>> m = re.match(r'(?P<first>\w+) (?P<last>\w+)', 'Jane Doe')
>>> m.groupdict()
{'first': 'Jane', 'last': 'Doe'}

Named groups are handy because they let you use easily remembered names, instead of having to remember numbers. Here's an example RE from the imaplib module:

InternalDate = re.compile(r'INTERNALDATE "'
        r'(?P<day>[ 123][0-9])-(?P<mon>[A-Z][a-z][a-z])-'
        r'(?P<year>[0-9][0-9][0-9][0-9])'
        r' (?P<hour>[0-9][0-9]):(?P<min>[0-9][0-9]):(?P<sec>[0-9][0-9])'
        r' (?P<zonen>[-+])(?P<zoneh>[0-9][0-9])(?P<zonem>[0-9][0-9])'
        r'"')

It's obviously much easier to retrieve m.group('zonem'), instead of having to remember to retrieve group 9.

The syntax for backreferences in an expression such as (...)\1 refers to the number of the group. There's naturally a variant that uses the group name instead of the number. This is another Python extension: (?P=name) indicates that the contents of the group called name should again be matched at the current point. The regular expression for finding doubled words, \b(\w+)\s+\1\b can also be written as \b(?P<word>\w+)\s+(?P=word)\b:

>>> p = re.compile(r'\b(?P<word>\w+)\s+(?P=word)\b')
>>> p.search('Paris in the the spring').group()
'the the'

Lookahead Assertions¶

Another zero-width assertion is the lookahead assertion. Lookahead assertions are available in both positive and negative form, and look like this:

(?=...)
Positive lookahead assertion.
This succeeds if the contained regular expression, represented here by ..., successfully matches at the current location, and fails otherwise. But, once the contained expression has been tried, the matching engine doesn't advance at all; the rest of the pattern is tried right where the assertion started.

(?!...)
Negative lookahead assertion. This is the opposite of the positive assertion; it succeeds if the contained expression doesn't match at the current position in the string.

To make this concrete, let's look at a case where a lookahead is useful. Consider a simple pattern to match a filename and split it apart into a base name and an extension, separated by a .. For example, in news.rc, news is the base name, and rc is the filename's extension.

The pattern to match this is quite simple:

.*[.].*$

Notice that the . needs to be treated specially because it's a metacharacter, so it's inside a character class to only match that specific character. Also notice the trailing $; this is added to ensure that all the rest of the string must be included in the extension. This regular expression matches foo.bar and autoexec.bat and sendmail.cf and printers.conf.

Now, consider complicating the problem a bit; what if you want to match filenames where the extension is not bat? Some incorrect attempts:

.*[.][^b].*$

The first attempt above tries to exclude bat by requiring that the first character of the extension is not a b. This is wrong, because the pattern also doesn't match foo.bar.

.*[.]([^b]..|.[^a].|..[^t])$

The expression gets messier when you try to patch up the first solution by requiring one of the following cases to match: the first character of the extension isn't b; the second character isn't a; or the third character isn't t.
This accepts foo.bar and rejects autoexec.bat, but it requires a three-letter extension and won't accept a filename with a two-letter extension such as sendmail.cf. We'll complicate the pattern again in an effort to fix it.

.*[.]([^b].?.?|.[^a]?.?|..?[^t]?)$

In the third attempt, the second and third letters are all made optional in order to allow matching extensions shorter than three characters, such as sendmail.cf.

The pattern's getting really complicated now, which makes it hard to read and understand. Worse, if the problem changes and you want to exclude both bat and exe as extensions, the pattern would get even more complicated and confusing.

A negative lookahead cuts through all this confusion:

.*[.](?!bat$)[^.]*$

The negative lookahead means: if the expression bat doesn't match at this point, try the rest of the pattern; if bat$ does match, the whole pattern will fail. The trailing $ is required to ensure that something like sample.batch, where the extension only starts with bat, will be allowed. The [^.]* makes sure that the pattern works when there are multiple dots in the filename.

Excluding another filename extension is now easy; simply add it as an alternative inside the assertion. The following pattern excludes filenames that end in either bat or exe:

.*[.](?!bat$|exe$)[^.]*$

Modifying Strings¶

Up to this point, we've simply performed searches against a static string. Regular expressions are also commonly used to modify strings in various ways, using the following pattern methods:

| Method/Attribute | Purpose |
|---|---|
| split() | Split the string into a list, splitting it wherever the RE matches |
| sub() | Find all substrings where the RE matches, and replace them with a different string |
| subn() | Does the same thing as sub(), but returns the new string and the number of replacements |

Splitting Strings¶

The split() method of a pattern splits a string apart wherever the RE matches, returning a list of the pieces.
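Before going further with split(), the final lookahead patterns above are easy to verify interactively (a quick sketch; the filenames are the same examples used earlier):

```python
import re

# Matches filenames whose extension is neither 'bat' nor 'exe'.
pat = re.compile(r'.*[.](?!bat$|exe$)[^.]*$')

print(bool(pat.match('foo.bar')))       # extension 'bar' is allowed
print(bool(pat.match('sendmail.cf')))   # short extensions work too
print(bool(pat.match('autoexec.bat')))  # excluded by the lookahead
print(bool(pat.match('sample.batch')))  # 'batch' only starts with 'bat', so it's allowed
```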
It's similar to the split() method of strings but provides much more generality in the delimiters that you can split by; string split() only supports splitting by whitespace or by a fixed string. As you'd expect, there's a module-level re.split() function, too.

.split(string[, maxsplit=0])
Split string by the matches of the regular expression. If capturing parentheses are used in the RE, then their contents will also be returned as part of the resulting list. If maxsplit is nonzero, at most maxsplit splits are performed.

You can limit the number of splits made, by passing a value for maxsplit. When maxsplit is nonzero, at most maxsplit splits will be made, and the remainder of the string is returned as the final element of the list. In the following example, the delimiter is any sequence of non-alphanumeric characters.

>>> p = re.compile(r'\W+')
>>> p.split('This is a test, short and sweet, of split().')
['This', 'is', 'a', 'test', 'short', 'and', 'sweet', 'of', 'split', '']
>>> p.split('This is a test, short and sweet, of split().', 3)
['This', 'is', 'a', 'test, short and sweet, of split().']

Sometimes you're not only interested in what the text between delimiters is, but also need to know what the delimiter was. If capturing parentheses are used in the RE, then their values are also returned as part of the list. Compare the following calls:

>>> p = re.compile(r'\W+')
>>> p2 = re.compile(r'(\W+)')
>>> p.split('This... is a test.')
['This', 'is', 'a', 'test', '']
>>> p2.split('This... is a test.')
['This', '... ', 'is', ' ', 'a', ' ', 'test', '.', '']

The module-level function re.split() adds the RE to be used as the first argument, but is otherwise the same.

>>> re.split(r'[\W]+', 'Words, words, words.')
['Words', 'words', 'words', '']
>>> re.split(r'([\W]+)', 'Words, words, words.')
['Words', ', ', 'words', ', ', 'words', '.', '']
>>> re.split(r'[\W]+', 'Words, words, words.', 1)
['Words', 'words, words.']

Search and Replace¶

Another common task is to find all the matches for a pattern, and replace them with a different string. The sub() method takes a replacement value, which can be either a string or a function, and the string to be processed.

.sub(replacement, string[, count=0])
Returns the string obtained by replacing the leftmost non-overlapping occurrences of the RE in string by the replacement replacement. If the pattern isn't found, string is returned unchanged.

The optional argument count is the maximum number of pattern occurrences to be replaced; count must be a non-negative integer. The default value of 0 means to replace all occurrences.

Here's a simple example of using the sub() method. It replaces colour names with the word colour:

>>> p = re.compile('(blue|white|red)')
>>> p.sub('colour', 'blue socks and red shoes')
'colour socks and colour shoes'
>>> p.sub('colour', 'blue socks and red shoes', count=1)
'colour socks and red shoes'

The subn() method does the same work, but returns a 2-tuple containing the new string value and the number of replacements that were performed:

>>> p = re.compile('(blue|white|red)')
>>> p.subn('colour', 'blue socks and red shoes')
('colour socks and colour shoes', 2)
>>> p.subn('colour', 'no colours at all')
('no colours at all', 0)

Empty matches are replaced only when they're not adjacent to a previous empty match.

>>> p = re.compile('x*')
>>> p.sub('-', 'abxd')
'-a-b--d-'

If replacement is a string, any backslash escapes in it are processed.
That is, \n is converted to a single newline character, \r is converted to a carriage return, and so forth. Unknown escapes such as \& are left alone. Backreferences, such as \6, are replaced with the substring matched by the corresponding group in the RE. This lets you incorporate portions of the original text in the resulting replacement string.

This example matches the word section followed by a string enclosed in {, }, and changes section to subsection:

>>> p = re.compile('section{ ( [^}]* ) }', re.VERBOSE)
>>> p.sub(r'subsection{\1}','section{First} section{second}')
'subsection{First} subsection{second}'

There's also a syntax for referring to named groups as defined by the (?P<name>...) syntax. \g<name> will use the substring matched by the group named name, and \g<number> uses the corresponding group number. \g<2> is therefore equivalent to \2, but isn't ambiguous in a replacement string such as \g<2>0. (\20 would be interpreted as a reference to group 20, not a reference to group 2 followed by the literal character '0'.) The following substitutions are all equivalent, but use all three variations of the replacement string.

>>> p = re.compile('section{ (?P<name> [^}]* ) }', re.VERBOSE)
>>> p.sub(r'subsection{\1}','section{First}')
'subsection{First}'
>>> p.sub(r'subsection{\g<1>}','section{First}')
'subsection{First}'
>>> p.sub(r'subsection{\g<name>}','section{First}')
'subsection{First}'

replacement can also be a function, which gives you even more control. If replacement is a function, the function is called for every non-overlapping occurrence of pattern. On each call, the function is passed a match object argument for the match and can use this information to compute the desired replacement string and return it.

In the following example, the replacement function translates decimals into hexadecimal:

>>> def hexrepl(match):
...     "Return the hex string for a decimal number"
...     value = int(match.group())
...     return hex(value)
...
>>> p = re.compile(r'\d+')
>>> p.sub(hexrepl, 'Call 65490 for printing, 49152 for user code.')
'Call 0xffd2 for printing, 0xc000 for user code.'

When using the module-level re.sub() function, the pattern is passed as the first argument. The pattern may be provided as an object or as a string; if you need to specify regular expression flags, you must either use a pattern object as the first parameter, or use embedded modifiers in the pattern string, e.g. sub("(?i)b+", "x", "bbbb BBBB") returns 'x x'.

Common Problems¶

Regular expressions are a powerful tool for some applications, but in some ways their behaviour isn't intuitive and at times they don't behave the way you may expect them to. This section will point out some of the most common pitfalls.

Use String Methods¶

Sometimes using the re module is a mistake. If you're matching a fixed string, or a single character class, and you're not using any re features such as the IGNORECASE flag, then the full power of regular expressions may not be required. Strings have several methods for performing operations with fixed strings and they're usually much faster, because the implementation is a single small C loop that's been optimized for the purpose, instead of the large, more generalized regular expression engine.

One example might be replacing a single fixed string with another one; for example, you might replace word with deed. re.sub() seems like the function to use for this, but consider the replace() method. Note that replace() will also replace word inside words, turning swordfish into sdeedfish, but the naive RE word would have done that, too. (To avoid performing the substitution on parts of words, the pattern would have to be \bword\b, in order to require that word have a word boundary on either side.
This takes the job beyond replace()\n\u2019s abilities.)\nAnother common task is deleting every occurrence of a single character from a\nstring or replacing it with another single character. You might do this with\nsomething like re.sub('\\n', ' ', S)\n, but translate()\nis capable of\ndoing both tasks and will be faster than any regular expression operation can\nbe.\nIn short, before turning to the re\nmodule, consider whether your problem\ncan be solved with a faster and simpler string method.\nmatch() versus search()\u00b6\nThe match()\nfunction only checks if the RE matches at the beginning of the\nstring while search()\nwill scan forward through the string for a match.\nIt\u2019s important to keep this distinction in mind. Remember, match()\nwill\nonly report a successful match which will start at 0; if the match wouldn\u2019t\nstart at zero, match()\nwill not report it.\n>>> print(re.match('super', 'superstition').span())\n(0, 5)\n>>> print(re.match('super', 'insuperable'))\nNone\nOn the other hand, search()\nwill scan forward through the string,\nreporting the first match it finds.\n>>> print(re.search('super', 'superstition').span())\n(0, 5)\n>>> print(re.search('super', 'insuperable').span())\n(2, 7)\nSometimes you\u2019ll be tempted to keep using re.match()\n, and just add .*\nto the front of your RE. Resist this temptation and use re.search()\ninstead. The regular expression compiler does some analysis of REs in order to\nspeed up the process of looking for a match. One such analysis figures out what\nthe first character of a match must be; for example, a pattern starting with\nCrow\nmust match starting with a 'C'\n. The analysis lets the engine\nquickly scan through the string looking for the starting character, only trying\nthe full match if a 'C'\nis found.\nAdding .*\ndefeats this optimization, requiring scanning to the end of the\nstring and then backtracking to find a match for the rest of the RE. 
Use re.search() instead.

Greedy versus Non-Greedy¶

When repeating a regular expression, as in a*, the resulting action is to consume as much of the pattern as possible. This fact often bites you when you're trying to match a pair of balanced delimiters, such as the angle brackets surrounding an HTML tag. The naive pattern for matching a single HTML tag doesn't work because of the greedy nature of .*.

>>> s = '<html><head><title>Title</title>'
>>> len(s)
32
>>> print(re.match('<.*>', s).span())
(0, 32)
>>> print(re.match('<.*>', s).group())
<html><head><title>Title</title>

The RE matches the '<' in '<html>', and the .* consumes the rest of the string. There's still more left in the RE, though, and the > can't match at the end of the string, so the regular expression engine has to backtrack character by character until it finds a match for the >. The final match extends from the '<' in '<html>' to the '>' in '</title>', which isn't what you want.

In this case, the solution is to use the non-greedy quantifiers *?, +?, ??, or {m,n}?, which match as little text as possible. In the above example, the '>' is tried immediately after the first '<' matches, and when it fails, the engine advances a character at a time, retrying the '>' at every step. This produces just the right result:

>>> print(re.match('<.*?>', s).group())
<html>

(Note that parsing HTML or XML with regular expressions is painful. Quick-and-dirty patterns will handle common cases, but HTML and XML have special cases that will break the obvious regular expression; by the time you've written a regular expression that handles all of the possible cases, the patterns will be very complicated. Use an HTML or XML parser module for such tasks.)

Using re.VERBOSE¶

By now you've probably noticed that regular expressions are a very compact notation, but they're not terribly readable.
REs of moderate complexity can become lengthy collections of backslashes, parentheses, and metacharacters, making them difficult to read and understand.

For such REs, specifying the re.VERBOSE flag when compiling the regular expression can be helpful, because it allows you to format the regular expression more clearly.

The re.VERBOSE flag has several effects. Whitespace in the regular expression that isn't inside a character class is ignored. This means that an expression such as dog | cat is equivalent to the less readable dog|cat, but [a b] will still match the characters 'a', 'b', or a space. In addition, you can also put comments inside a RE; comments extend from a # character to the next newline. When used with triple-quoted strings, this enables REs to be formatted more neatly:

pat = re.compile(r"""
 \s*                 # Skip leading whitespace
 (?P<header>[^:]+)   # Header name
 \s* :               # Whitespace, and a colon
 (?P<value>.*?)      # The header's value -- *? used to
                     # lose the following trailing whitespace
 \s*$                # Trailing whitespace to end-of-line
""", re.VERBOSE)

This is far more readable than:

pat = re.compile(r"\s*(?P<header>[^:]+)\s*:(?P<value>.*?)\s*$")

Feedback¶

Regular expressions are a complicated topic. Did this document help you understand them? Were there parts that were unclear, or problems you encountered that weren't covered here? If so, please send suggestions for improvements to the author.

The most complete book on regular expressions is almost certainly Jeffrey Friedl's Mastering Regular Expressions, published by O'Reilly. Unfortunately, it exclusively concentrates on Perl and Java's flavours of regular expressions, and doesn't contain any Python material at all, so it won't be useful as a reference for programming in Python. (The first edition covered Python's now-removed regex module, which won't help you much.) Consider checking it out from your library.
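As a closing example, the verbose header pattern from the previous section can be exercised directly (a sketch; the sample header line is one of the RFC-822 examples used earlier):

```python
import re

# The header-matching pattern, written with re.VERBOSE for readability.
pat = re.compile(r"""
 \s*                 # Skip leading whitespace
 (?P<header>[^:]+)   # Header name
 \s* :               # Whitespace, and a colon
 (?P<value>.*?)      # The header's value -- *? used to
                     # lose the following trailing whitespace
 \s*$                # Trailing whitespace to end-of-line
""", re.VERBOSE)

m = pat.match('To: editor@example.com')
print(m.group('header'))          # 'To'
# Note: the space after the colon is captured into 'value'.
print(m.group('value').strip())   # 'editor@example.com'
```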
", "\n ", "\n ", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", "\n", "\n", " ", " ", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", " ", "\n", "\n", " ", " ", "\n", " ", "\n", "\n", " ", " ", "\n", "\n", " ", " ", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", " ", "\n", " ", "\n", "\n", " ", " ", " ", "\n", "\n", "\n", " ", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", "\n", " ", " ", " ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", "\n", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 12852} +{"url": "https://docs.python.org/3/howto/isolating-extensions.html", "title": "Isolating Extension Modules", "content": "Isolating Extension Modules\u00b6\nWho should read this\u00b6\nThis guide is written for maintainers of C-API extensions who would like to make that extension safer to use in applications where Python itself is used as a library.\nBackground\u00b6\nAn interpreter is the context in which Python code runs. It contains configuration (e.g. the import path) and runtime state (e.g. the set of imported modules).\nPython supports running multiple interpreters in one process. There are two cases to think about\u2014users may run interpreters:\nin sequence, with several\nPy_InitializeEx()\n/Py_FinalizeEx()\ncycles, andin parallel, managing \u201csub-interpreters\u201d using\nPy_NewInterpreter()\n/Py_EndInterpreter()\n.\nBoth cases (and combinations of them) would be most useful when embedding Python within a library. 
Libraries generally shouldn\u2019t make assumptions about the application that uses them, which include assuming a process-wide \u201cmain Python interpreter\u201d.\nHistorically, Python extension modules don\u2019t handle this use case well.\nMany extension modules (and even some stdlib modules) use per-process\nglobal state, because C static\nvariables are extremely easy to use.\nThus, data that should be specific to an interpreter ends up being shared\nbetween interpreters. Unless the extension developer is careful, it is very\neasy to introduce edge cases that lead to crashes when a module is loaded in\nmore than one interpreter in the same process.\nUnfortunately, per-interpreter state is not easy to achieve. Extension authors tend to not keep multiple interpreters in mind when developing, and it is currently cumbersome to test the behavior.\nEnter Per-Module State\u00b6\nInstead of focusing on per-interpreter state, Python\u2019s C API is evolving to better support the more granular per-module state. This means that C-level data should be attached to a module object. Each interpreter creates its own module object, keeping the data separate. For testing the isolation, multiple module objects corresponding to a single extension can even be loaded in a single interpreter.\nPer-module state provides an easy way to think about lifetime and resource ownership: the extension module will initialize when a module object is created, and clean up when it\u2019s freed. In this regard, a module is just like any other PyObject*; there are no \u201con interpreter shutdown\u201d hooks to think\u2014or forget\u2014about.\nNote that there are use cases for different kinds of \u201cglobals\u201d: per-process, per-interpreter, per-thread or per-task state. With per-module state as the default, these are still possible, but you should treat them as exceptional cases: if you need them, you should give them additional care and testing. 
(Note that this guide does not cover them.)\nIsolated Module Objects\u00b6\nThe key point to keep in mind when developing an extension module is that several module objects can be created from a single shared library. For example:\n>>> import sys\n>>> import binascii\n>>> old_binascii = binascii\n>>> del sys.modules['binascii']\n>>> import binascii # create a new module object\n>>> old_binascii == binascii\nFalse\nAs a rule of thumb, the two modules should be completely independent. All objects and state specific to the module should be encapsulated within the module object, not shared with other module objects, and cleaned up when the module object is deallocated. Since this just is a rule of thumb, exceptions are possible (see Managing Global State), but they will need more thought and attention to edge cases.\nWhile some modules could do with less stringent restrictions, isolated modules make it easier to set clear expectations and guidelines that work across a variety of use cases.\nSurprising Edge Cases\u00b6\nNote that isolated modules do create some surprising edge cases. Most\nnotably, each module object will typically not share its classes and\nexceptions with other similar modules. Continuing from the\nexample above,\nnote that old_binascii.Error\nand binascii.Error\nare\nseparate objects. In the following code, the exception is not caught:\n>>> old_binascii.Error == binascii.Error\nFalse\n>>> try:\n... old_binascii.unhexlify(b'qwertyuiop')\n... except binascii.Error:\n... print('boo')\n...\nTraceback (most recent call last):\nFile \"\", line 2, in \nbinascii.Error: Non-hexadecimal digit found\nThis is expected. Notice that pure-Python modules behave the same way: it is a part of how Python works.\nThe goal is to make extension modules safe at the C level, not to make\nhacks behave intuitively. 
Mutating sys.modules\n\u201cmanually\u201d counts\nas a hack.\nMaking Modules Safe with Multiple Interpreters\u00b6\nManaging Global State\u00b6\nSometimes, the state associated with a Python module is not specific to that module, but to the entire process (or something else \u201cmore global\u201d than a module). For example:\nThe\nreadline\nmodule manages the terminal.A module running on a circuit board wants to control the on-board LED.\nIn these cases, the Python module should provide access to the global state, rather than own it. If possible, write the module so that multiple copies of it can access the state independently (along with other libraries, whether for Python or other languages). If that is not possible, consider explicit locking.\nIf it is necessary to use process-global state, the simplest way to avoid issues with multiple interpreters is to explicitly prevent a module from being loaded more than once per process\u2014see Opt-Out: Limiting to One Module Object per Process.\nManaging Per-Module State\u00b6\nTo use per-module state, use multi-phase extension module initialization. This signals that your module supports multiple interpreters correctly.\nSet PyModuleDef.m_size\nto a positive number to request that many\nbytes of storage local to the module. Usually, this will be set to the\nsize of some module-specific struct\n, which can store all of the\nmodule\u2019s C-level state. In particular, it is where you should put\npointers to classes (including exceptions, but excluding static types)\nand settings (e.g. csv\n\u2019s field_size_limit\n)\nwhich the C code needs to function.\nNote\nAnother option is to store state in the module\u2019s __dict__\n,\nbut you must avoid crashing when users modify __dict__\nfrom\nPython code. 
This usually means error- and type-checking at the C level,\nwhich is easy to get wrong and hard to test sufficiently.\nHowever, if module state is not needed in C code, storing it in\n__dict__\nonly is a good idea.\nIf the module state includes PyObject\npointers, the module object\nmust hold references to those objects and implement the module-level hooks\nm_traverse\n, m_clear\nand m_free\n. These work like\ntp_traverse\n, tp_clear\nand tp_free\nof a class. Adding them will\nrequire some work and make the code longer; this is the price for\nmodules which can be unloaded cleanly.\nAn example of a module with per-module state is currently available as xxlimited; example module initialization shown at the bottom of the file.\nOpt-Out: Limiting to One Module Object per Process\u00b6\nA non-negative PyModuleDef.m_size\nsignals that a module supports\nmultiple interpreters correctly. If this is not yet the case for your\nmodule, you can explicitly make your module loadable only once per\nprocess. For example:\n// A process-wide flag\nstatic int loaded = 0;\n// Mutex to provide thread safety (only needed for free-threaded Python)\nstatic PyMutex modinit_mutex = {0};\nstatic int\nexec_module(PyObject* module)\n{\nPyMutex_Lock(&modinit_mutex);\nif (loaded) {\nPyMutex_Unlock(&modinit_mutex);\nPyErr_SetString(PyExc_ImportError,\n\"cannot load module more than once per process\");\nreturn -1;\n}\nloaded = 1;\nPyMutex_Unlock(&modinit_mutex);\n// ... 
rest of initialization\n}\nIf your module\u2019s PyModuleDef.m_clear\nfunction is able to prepare\nfor future re-initialization, it should clear the loaded\nflag.\nIn this case, your module won\u2019t support multiple instances existing\nconcurrently, but it will, for example, support being loaded after\nPython runtime shutdown (Py_FinalizeEx()\n) and re-initialization\n(Py_Initialize()\n).\nModule State Access from Functions\u00b6\nAccessing the state from module-level functions is straightforward.\nFunctions get the module object as their first argument; for extracting\nthe state, you can use PyModule_GetState\n:\nstatic PyObject *\nfunc(PyObject *module, PyObject *args)\n{\nmy_struct *state = (my_struct*)PyModule_GetState(module);\nif (state == NULL) {\nreturn NULL;\n}\n// ... rest of logic\n}\nNote\nPyModule_GetState\nmay return NULL\nwithout setting an\nexception if there is no module state, i.e. PyModuleDef.m_size\nwas\nzero. In your own module, you\u2019re in control of m_size\n, so this is\neasy to prevent.\nHeap Types\u00b6\nTraditionally, types defined in C code are static; that is,\nstatic PyTypeObject\nstructures defined directly in code and\ninitialized using PyType_Ready()\n.\nSuch types are necessarily shared across the process. Sharing them\nbetween module objects requires paying attention to any state they own\nor access. To limit the possible issues, static types are immutable at\nthe Python level: for example, you can\u2019t set str.myattribute = 123\n.\nCPython implementation detail: Sharing truly immutable objects between interpreters is fine, as long as they don\u2019t provide access to mutable objects. However, in CPython, every Python object has a mutable implementation detail: the reference count. Changes to the refcount are guarded by the GIL. 
Thus, code that shares any Python objects across interpreters implicitly depends on CPython\u2019s current, process-wide GIL.\nBecause they are immutable and process-global, static types cannot access\n\u201ctheir\u201d module state.\nIf any method of such a type requires access to module state,\nthe type must be converted to a heap-allocated type, or heap type\nfor short. These correspond more closely to classes created by Python\u2019s\nclass\nstatement.\nFor new modules, using heap types by default is a good rule of thumb.\nChanging Static Types to Heap Types\u00b6\nStatic types can be converted to heap types, but note that the heap type API was not designed for \u201clossless\u201d conversion from static types\u2014that is, creating a type that works exactly like a given static type. So, when rewriting the class definition in a new API, you are likely to unintentionally change a few details (e.g. pickleability or inherited slots). Always test the details that are important to you.\nWatch out for the following two points in particular (but note that this is not a comprehensive list):\nUnlike static types, heap type objects are mutable by default. Use the\nPy_TPFLAGS_IMMUTABLETYPE\nflag to prevent mutability.\nHeap types inherit\ntp_new\nby default, so it may become possible to instantiate them from Python code. 
You can prevent this with the Py_TPFLAGS_DISALLOW_INSTANTIATION\nflag.\nDefining Heap Types\u00b6\nHeap types can be created by filling a PyType_Spec\nstructure, a\ndescription or \u201cblueprint\u201d of a class, and calling\nPyType_FromModuleAndSpec()\nto construct a new class object.\nNote\nOther functions, like PyType_FromSpec()\n, can also create\nheap types, but PyType_FromModuleAndSpec()\nassociates the module\nwith the class, allowing access to the module state from methods.\nThe class should generally be stored in both the module state (for\nsafe access from C) and the module\u2019s __dict__\n(for access from\nPython code).\nGarbage-Collection Protocol\u00b6\nInstances of heap types hold a reference to their type. This ensures that the type isn\u2019t destroyed before all its instances are, but may result in reference cycles that need to be broken by the garbage collector.\nTo avoid memory leaks, instances of heap types must implement the garbage collection protocol. That is, heap types should:\nHave the\nPy_TPFLAGS_HAVE_GC\nflag.\nDefine a traverse function using\nPy_tp_traverse\n, which visits the type (e.g. using Py_VISIT(Py_TYPE(self))\n).\nPlease refer to the documentation of\nPy_TPFLAGS_HAVE_GC\nand tp_traverse\nfor additional considerations.\nThe API for defining heap types grew organically, leaving it somewhat awkward to use in its current state. 
The following sections will guide you through common issues.\ntp_traverse\nin Python 3.8 and lower\u00b6\nThe requirement to visit the type from tp_traverse\nwas added in Python 3.9.\nIf you support Python 3.8 and lower, the traverse function must not\nvisit the type, so it must be more complicated:\nstatic int my_traverse(PyObject *self, visitproc visit, void *arg)\n{\nif (Py_Version >= 0x03090000) {\nPy_VISIT(Py_TYPE(self));\n}\nreturn 0;\n}\nUnfortunately, Py_Version\nwas only added in Python 3.11.\nAs a replacement, use:\nPY_VERSION_HEX\n, if not using the stable ABI, or\nsys.version_info\n(via PySys_GetObject()\nand PyArg_ParseTuple()\n).\nDelegating tp_traverse\n\u00b6\nIf your traverse function delegates to the tp_traverse\nof its base class (or another type), ensure that Py_TYPE(self)\nis visited\nonly once.\nNote that only heap types are expected to visit the type in tp_traverse\n.\nFor example, if your traverse function includes:\nbase->tp_traverse(self, visit, arg)\n\u2026and base\nmay be a static type, then it should also include:\nif (base->tp_flags & Py_TPFLAGS_HEAPTYPE) {\n// a heap type's tp_traverse already visited Py_TYPE(self)\n} else {\nif (Py_Version >= 0x03090000) {\nPy_VISIT(Py_TYPE(self));\n}\n}\nIt is not necessary to handle the type\u2019s reference count in\ntp_new\nand tp_clear\n.\nDefining tp_dealloc\n\u00b6\nIf your type has a custom tp_dealloc\nfunction,\nit needs to:\ncall\nPyObject_GC_UnTrack()\nbefore any fields are invalidated, and\ndecrement the reference count of the type.\nTo keep the type valid while tp_free\nis called, the type\u2019s refcount needs\nto be decremented after the instance is deallocated. 
For example:\nstatic void my_dealloc(PyObject *self)\n{\nPyObject_GC_UnTrack(self);\n...\nPyTypeObject *type = Py_TYPE(self);\ntype->tp_free(self);\nPy_DECREF(type);\n}\nThe default tp_dealloc\nfunction does this, so\nif your type does not override\ntp_dealloc\nyou don\u2019t need to add it.\nNot overriding tp_free\n\u00b6\nThe tp_free\nslot of a heap type must be set to\nPyObject_GC_Del()\n.\nThis is the default; do not override it.\nAvoiding PyObject_New\n\u00b6\nGC-tracked objects need to be allocated using GC-aware functions.\nIf you use PyObject_New()\nor PyObject_NewVar()\n:\nGet and call the type\u2019s\ntp_alloc\nslot, if possible. That is, replace TYPE *o = PyObject_New(TYPE, typeobj)\nwith:TYPE *o = typeobj->tp_alloc(typeobj, 0);\nReplace\no = PyObject_NewVar(TYPE, typeobj, size)\nwith the same, but use size instead of the 0.\nIf the above is not possible (e.g. inside a custom\ntp_alloc\n), call PyObject_GC_New()\nor PyObject_GC_NewVar()\n:TYPE *o = PyObject_GC_New(TYPE, typeobj); TYPE *o = PyObject_GC_NewVar(TYPE, typeobj, size);\nModule State Access from Classes\u00b6\nIf you have a type object defined with PyType_FromModuleAndSpec()\n,\nyou can call PyType_GetModule()\nto get the associated module, and then\nPyModule_GetState()\nto get the module\u2019s state.\nTo save some tedious error-handling boilerplate code, you can combine\nthese two steps with PyType_GetModuleState()\n, resulting in:\nmy_struct *state = (my_struct*)PyType_GetModuleState(type);\nif (state == NULL) {\nreturn NULL;\n}\nModule State Access from Regular Methods\u00b6\nAccessing the module-level state from methods of a class is somewhat more complicated, but is possible thanks to API introduced in Python 3.9. To get the state, you need to first get the defining class, and then get the module state from it.\nThe largest roadblock is getting the class a method was defined in, or that method\u2019s \u201cdefining class\u201d for short. 
The defining class can have a reference to the module it is part of.\nDo not confuse the defining class with Py_TYPE(self)\n. If the method\nis called on a subclass of your type, Py_TYPE(self)\nwill refer to\nthat subclass, which may be defined in a different module than yours.\nNote\nThe following Python code can illustrate the concept.\nBase.get_defining_class\nreturns Base\neven\nif type(self) == Sub\n:\nclass Base:\ndef get_type_of_self(self):\nreturn type(self)\ndef get_defining_class(self):\nreturn __class__\nclass Sub(Base):\npass\nFor a method to get its \u201cdefining class\u201d, it must use the\nMETH_METHOD | METH_FASTCALL | METH_KEYWORDS\ncalling convention\nand the corresponding PyCMethod\nsignature:\nPyObject *PyCMethod(\nPyObject *self, // object the method was called on\nPyTypeObject *defining_class, // defining class\nPyObject *const *args, // C array of arguments\nPy_ssize_t nargs, // length of \"args\"\nPyObject *kwnames) // NULL, or dict of keyword arguments\nOnce you have the defining class, call PyType_GetModuleState()\nto get\nthe state of its associated module.\nFor example:\nstatic PyObject *\nexample_method(PyObject *self,\nPyTypeObject *defining_class,\nPyObject *const *args,\nPy_ssize_t nargs,\nPyObject *kwnames)\n{\nmy_struct *state = (my_struct*)PyType_GetModuleState(defining_class);\nif (state == NULL) {\nreturn NULL;\n}\n... 
// rest of logic\n}\nPyDoc_STRVAR(example_method_doc, \"...\");\nstatic PyMethodDef my_methods[] = {\n{\"example_method\",\n(PyCFunction)(void(*)(void))example_method,\nMETH_METHOD|METH_FASTCALL|METH_KEYWORDS,\nexample_method_doc},\n{NULL},\n};\nModule State Access from Slot Methods, Getters and Setters\u00b6\nNote\nThis is new in Python 3.11.\nSlot methods\u2014the fast C equivalents for special methods, such as\nnb_add\nfor __add__\nor\ntp_new\nfor initialization\u2014have a very simple API that\ndoesn\u2019t allow passing in the defining class, unlike with PyCMethod\n.\nThe same goes for getters and setters defined with\nPyGetSetDef\n.\nTo access the module state in these cases, use the\nPyType_GetModuleByDef()\nfunction, and pass in the module definition.\nOnce you have the module, call PyModule_GetState()\nto get the state:\nPyObject *module = PyType_GetModuleByDef(Py_TYPE(self), &module_def);\nmy_struct *state = (my_struct*)PyModule_GetState(module);\nif (state == NULL) {\nreturn NULL;\n}\nPyType_GetModuleByDef()\nworks by searching the\nmethod resolution order (i.e. all superclasses) for the first\nsuperclass that has a corresponding module.\nNote\nIn very exotic cases (inheritance chains spanning multiple modules\ncreated from the same definition), PyType_GetModuleByDef()\nmight not\nreturn the module of the true defining class. However, it will always\nreturn a module with the same definition, ensuring a compatible\nC memory layout.\nLifetime of the Module State\u00b6\nWhen a module object is garbage-collected, its module state is freed. 
For each pointer to (a part of) the module state, you must hold a reference to the module object.\nUsually this is not an issue, because types created with\nPyType_FromModuleAndSpec()\n, and their instances, hold a reference\nto the module.\nHowever, you must be careful in reference counting when you reference\nmodule state from other places, such as callbacks for external\nlibraries.\nOpen Issues\u00b6\nSeveral issues around per-module state and heap types are still open.\nDiscussions about improving the situation are best held on the discuss forum under c-api tag.\nPer-Class Scope\u00b6\nIt is currently (as of Python 3.11) not possible to attach state to individual types without relying on CPython implementation details (which may change in the future\u2014perhaps, ironically, to allow a proper solution for per-class scope).\nLossless Conversion to Heap Types\u00b6\nThe heap type API was not designed for \u201clossless\u201d conversion from static types; that is, creating a type that works exactly like a given static type.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 4777} +{"url": "https://docs.python.org/3/library/mailcap.html", "title": " \u2014 Mailcap file handling", "content": "mailcap\n\u2014 Mailcap file handling\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nThe last version of Python that provided the mailcap\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 83} +{"url": "https://docs.python.org/3/tutorial/interpreter.html", "title": "Using the Python Interpreter", "content": "2. Using the Python Interpreter\u00b6\n2.1. 
Invoking the Interpreter\u00b6\nThe Python interpreter is usually installed as /usr/local/bin/python3.14\non those machines where it is available; putting /usr/local/bin\nin your\nUnix shell\u2019s search path makes it possible to start it by typing the command:\npython3.14\nto the shell. [1] Since the choice of the directory where the interpreter lives\nis an installation option, other places are possible; check with your local\nPython guru or system administrator. (E.g., /usr/local/python\nis a\npopular alternative location.)\nOn Windows machines where you have installed Python from the Microsoft Store, the python3.14\ncommand will be available. If you have\nthe py.exe launcher installed, you can use the py\ncommand. See Python install manager for other ways to launch Python.\nTyping an end-of-file character (Control-D on Unix, Control-Z on\nWindows) at the primary prompt causes the interpreter to exit with a zero exit\nstatus. If that doesn\u2019t work, you can exit the interpreter by typing the\nfollowing command: quit()\n.\nThe interpreter\u2019s line-editing features include interactive editing, history\nsubstitution and code completion on most systems.\nPerhaps the quickest check to see whether command line editing is supported is\ntyping a word in on the Python prompt, then pressing Left arrow (or Control-b).\nIf the cursor moves, you have command line editing; see Appendix\nInteractive Input Editing and History Substitution for an introduction to the keys.\nIf nothing appears to happen, or if a sequence like ^[[D\nor ^B\nappears,\ncommand line editing isn\u2019t available; you\u2019ll only be able to use\nbackspace to remove characters from the current line.\nThe interpreter operates somewhat like the Unix shell: when called with standard input connected to a tty device, it reads and executes commands interactively; when called with a file name argument or with a file as standard input, it reads and executes a script from that file.\nA second way of 
starting the interpreter is python -c command [arg] ...\n,\nwhich executes the statement(s) in command, analogous to the shell\u2019s\n-c\noption. Since Python statements often contain spaces or other\ncharacters that are special to the shell, it is usually advised to quote\ncommand in its entirety.\nSome Python modules are also useful as scripts. These can be invoked using\npython -m module [arg] ...\n, which executes the source file for module as\nif you had spelled out its full name on the command line.\nWhen a script file is used, it is sometimes useful to be able to run the script\nand enter interactive mode afterwards. This can be done by passing -i\nbefore the script.\nAll command line options are described in Command line and environment.\n2.1.1. Argument Passing\u00b6\nWhen known to the interpreter, the script name and additional arguments\nthereafter are turned into a list of strings and assigned to the argv\nvariable in the sys\nmodule. You can access this list by executing import\nsys\n. The length of the list is at least one; when no script and no arguments\nare given, sys.argv[0]\nis an empty string. When the script name is given as\n'-'\n(meaning standard input), sys.argv[0]\nis set to '-'\n. When\n-c\ncommand is used, sys.argv[0]\nis set to '-c'\n. When\n-m\nmodule is used, sys.argv[0]\nis set to the full name of the\nlocated module. Options found after -c\ncommand or -m\nmodule are not consumed by the Python interpreter\u2019s option processing but\nleft in sys.argv\nfor the command or module to handle.\n2.1.2. Interactive Mode\u00b6\nWhen commands are read from a tty, the interpreter is said to be in interactive\nmode. In this mode it prompts for the next command with the primary prompt,\nusually three greater-than signs (>>>\n); for continuation lines it prompts\nwith the secondary prompt, by default three dots (...\n). 
The interpreter\nprints a welcome message stating its version number and a copyright notice\nbefore printing the first prompt:\n$ python3.14\nPython 3.14 (default, April 4 2024, 09:25:04)\n[GCC 10.2.0] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>>\nContinuation lines are needed when entering a multi-line construct. As an\nexample, take a look at this if\nstatement:\n>>> the_world_is_flat = True\n>>> if the_world_is_flat:\n... print(\"Be careful not to fall off!\")\n...\nBe careful not to fall off!\nFor more on interactive mode, see Interactive Mode.\n2.2. The Interpreter and Its Environment\u00b6\n2.2.1. Source Code Encoding\u00b6\nBy default, Python source files are treated as encoded in UTF-8. In that encoding, characters of most languages in the world can be used simultaneously in string literals, identifiers and comments \u2014 although the standard library only uses ASCII characters for identifiers, a convention that any portable code should follow. To display all these characters properly, your editor must recognize that the file is UTF-8, and it must use a font that supports all the characters in the file.\nTo declare an encoding other than the default one, a special comment line should be added as the first line of the file. The syntax is as follows:\n# -*- coding: encoding -*-\nwhere encoding is one of the valid codecs\nsupported by Python.\nFor example, to declare that Windows-1252 encoding is to be used, the first line of your source code file should be:\n# -*- coding: cp1252 -*-\nOne exception to the first line rule is when the source code starts with a UNIX \u201cshebang\u201d line. In this case, the encoding declaration should be added as the second line of the file. 
For example:\n#!/usr/bin/env python3\n# -*- coding: cp1252 -*-\nFootnotes", "code_snippets": [" ", " ", "\n", " ", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 1385} +{"url": "https://docs.python.org/3/library/security_warnings.html", "title": "Security Considerations", "content": "Security Considerations\u00b6\nThe following modules have specific security considerations:\nhashlib\n: all constructors take a \u201cusedforsecurity\u201d keyword-only argument disabling known insecure and blocked algorithmshttp.server\nis not suitable for production use, only implementing basic security checks. See the security considerations.random\nshouldn\u2019t be used for security purposes, usesecrets\ninsteadshelve\n: shelve is based on pickle and thus unsuitable for dealing with untrusted sourcestempfile\n: mktemp is deprecated due to vulnerability to race conditionszipfile\n: maliciously prepared .zip files can cause disk volume exhaustion\nThe -I\ncommand line option can be used to run Python in isolated\nmode. When it cannot be used, the -P\noption or the\nPYTHONSAFEPATH\nenvironment variable can be used to not prepend a\npotentially unsafe path to sys.path\nsuch as the current directory, the\nscript\u2019s directory or an empty string.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 230} +{"url": "https://docs.python.org/3/faq/library.html", "title": null, "content": "Library and Extension FAQ\u00b6\nGeneral Library Questions\u00b6\nHow do I find a module or application to perform task X?\u00b6\nCheck the Library Reference to see if there\u2019s a relevant standard library module. (Eventually you\u2019ll learn what\u2019s in the standard library and will be able to skip this step.)\nFor third-party packages, search the Python Package Index or try Google or another web search engine. 
Searching for \u201cPython\u201d plus a keyword or two for your topic of interest will usually find something helpful.\nWhere is the math.py (socket.py, regex.py, etc.) source file?\u00b6\nIf you can\u2019t find a source file for a module it may be a built-in or\ndynamically loaded module implemented in C, C++ or other compiled language.\nIn this case you may not have the source file or it may be something like\nmathmodule.c\n, somewhere in a C source directory (not on the Python Path).\nThere are (at least) three kinds of modules in Python:\nmodules written in Python (.py);\nmodules written in C and dynamically loaded (.dll, .pyd, .so, .sl, etc);\nmodules written in C and linked with the interpreter; to get a list of these, type:\nimport sys print(sys.builtin_module_names)\nHow do I make a Python script executable on Unix?\u00b6\nYou need to do two things: the script file\u2019s mode must be executable and the\nfirst line must begin with #!\nfollowed by the path of the Python\ninterpreter.\nThe first is done by executing chmod +x scriptfile\nor perhaps chmod 755\nscriptfile\n.\nThe second can be done in a number of ways. The most straightforward way is to write\n#!/usr/local/bin/python\nas the very first line of your file, using the pathname for where the Python interpreter is installed on your platform.\nIf you would like the script to be independent of where the Python interpreter\nlives, you can use the env program. Almost all Unix variants support\nthe following, assuming the Python interpreter is in a directory on the user\u2019s\nPATH\n:\n#!/usr/bin/env python\nDon\u2019t do this for CGI scripts. The PATH\nvariable for CGI scripts is\noften very minimal, so you need to use the actual absolute pathname of the\ninterpreter.\nOccasionally, a user\u2019s environment is so full that the /usr/bin/env program fails; or there\u2019s no env program at all. In that case, you can try the following hack (due to Alex Rezinsky):\n#! 
/bin/sh\n\"\"\":\"\nexec python $0 ${1+\"$@\"}\n\"\"\"\nThe minor disadvantage is that this defines the script\u2019s __doc__ string. However, you can fix that by adding\n__doc__ = \"\"\"...Whatever...\"\"\"\nIs there a curses/termcap package for Python?\u00b6\nFor Unix variants: The standard Python source distribution comes with a curses module in the Modules subdirectory, though it\u2019s not compiled by default. (Note that this is not available in the Windows distribution \u2013 there is no curses module for Windows.)\nThe curses\nmodule supports basic curses features as well as many additional\nfunctions from ncurses and SYSV curses such as colour, alternative character set\nsupport, pads, and mouse support. This means the module isn\u2019t compatible with\noperating systems that only have BSD curses, but there don\u2019t seem to be any\ncurrently maintained OSes that fall into this category.\nIs there an equivalent to C\u2019s onexit() in Python?\u00b6\nThe atexit\nmodule provides a register function that is similar to C\u2019s\nonexit()\n.\nWhy don\u2019t my signal handlers work?\u00b6\nThe most common problem is that the signal handler is declared with the wrong argument list. It is called as\nhandler(signum, frame)\nso it should be declared with two parameters:\ndef handler(signum, frame):\n...\nCommon tasks\u00b6\nHow do I test a Python program or component?\u00b6\nPython comes with two testing frameworks. The doctest\nmodule finds\nexamples in the docstrings for a module and runs them, comparing the output with\nthe expected output given in the docstring.\nThe unittest\nmodule is a fancier testing framework modelled on Java and\nSmalltalk testing frameworks.\nTo make testing easier, you should use good modular design in your program. 
Your program should have almost all functionality encapsulated in either functions or class methods \u2013 and this sometimes has the surprising and delightful effect of making the program run faster (because local variable accesses are faster than global accesses). Furthermore the program should avoid depending on mutating global variables, since this makes testing much more difficult to do.\nThe \u201cglobal main logic\u201d of your program may be as simple as\nif __name__ == \"__main__\":\nmain_logic()\nat the bottom of the main module of your program.\nOnce your program is organized as a tractable collection of function and class behaviours, you should write test functions that exercise the behaviours. A test suite that automates a sequence of tests can be associated with each module. This sounds like a lot of work, but since Python is so terse and flexible it\u2019s surprisingly easy. You can make coding much more pleasant and fun by writing your test functions in parallel with the \u201cproduction code\u201d, since this makes it easy to find bugs and even design flaws earlier.\n\u201cSupport modules\u201d that are not intended to be the main module of a program may include a self-test of the module.\nif __name__ == \"__main__\":\nself_test()\nEven programs that interact with complex external interfaces may be tested when the external interfaces are unavailable by using \u201cfake\u201d interfaces implemented in Python.\nHow do I create documentation from doc strings?\u00b6\nThe pydoc\nmodule can create HTML from the doc strings in your Python\nsource code. An alternative for creating API documentation purely from\ndocstrings is epydoc. Sphinx can also include docstring content.\nHow do I get a single keypress at a time?\u00b6\nFor Unix variants there are several solutions. 
It\u2019s straightforward to do this using curses, but curses is a fairly large module to learn.\nThreads\u00b6\nHow do I program using threads?\u00b6\nBe sure to use the threading\nmodule and not the _thread\nmodule.\nThe threading\nmodule builds convenient abstractions on top of the\nlow-level primitives provided by the _thread\nmodule.\nNone of my threads seem to run: why?\u00b6\nAs soon as the main thread exits, all threads are killed. Your main thread is running too quickly, giving the threads no time to do any work.\nA simple fix is to add a sleep to the end of the program that\u2019s long enough for all the threads to finish:\nimport threading, time\ndef thread_task(name, n):\nfor i in range(n):\nprint(name, i)\nfor i in range(10):\nT = threading.Thread(target=thread_task, args=(str(i), i))\nT.start()\ntime.sleep(10) # <---------------------------!\nBut now (on many platforms) the threads don\u2019t run in parallel, but appear to run sequentially, one at a time! The reason is that the OS thread scheduler doesn\u2019t start a new thread until the previous thread is blocked.\nA simple fix is to add a tiny sleep to the start of the run function:\ndef thread_task(name, n):\ntime.sleep(0.001) # <--------------------!\nfor i in range(n):\nprint(name, i)\nfor i in range(10):\nT = threading.Thread(target=thread_task, args=(str(i), i))\nT.start()\ntime.sleep(10)\nInstead of trying to guess a good delay value for time.sleep()\n,\nit\u2019s better to use some kind of semaphore mechanism. One idea is to use the\nqueue\nmodule to create a queue object, let each thread append a token to\nthe queue when it finishes, and let the main thread read as many tokens from the\nqueue as there are threads.\nHow do I parcel out work among a bunch of worker threads?\u00b6\nThe easiest way is to use the concurrent.futures\nmodule,\nespecially the ThreadPoolExecutor\nclass.\nOr, if you want fine control over the dispatching algorithm, you can write\nyour own logic manually. 
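Before rolling your own dispatching logic, the ThreadPoolExecutor approach mentioned above can be sketched in a few lines (an illustrative addition; the `job` function and worker count are arbitrary examples, not from the FAQ):

```python
from concurrent.futures import ThreadPoolExecutor

def job(n):
    # Stand-in task; real workers would typically do blocking I/O.
    return n * n

# The executor owns a fixed pool of threads; map() dispatches each input
# to a worker and yields results in input order.
with ThreadPoolExecutor(max_workers=5) as executor:
    results = list(executor.map(job, range(10)))

print(results)
```

Because the `with` block waits for all submitted work to finish, there is no need for the manual sleeps shown in the earlier examples.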
Use the queue\nmodule to create a queue\ncontaining a list of jobs. The Queue\nclass maintains a\nlist of objects and has a .put(obj)\nmethod that adds items to the queue and\na .get()\nmethod to return them. The class will take care of the locking\nnecessary to ensure that each job is handed out exactly once.\nHere\u2019s a trivial example:\nimport threading, queue, time\n# The worker thread gets jobs off the queue. When the queue is empty, it\n# assumes there will be no more work and exits.\n# (Realistically workers will run until terminated.)\ndef worker():\nprint('Running worker')\ntime.sleep(0.1)\nwhile True:\ntry:\narg = q.get(block=False)\nexcept queue.Empty:\nprint('Worker', threading.current_thread(), end=' ')\nprint('queue empty')\nbreak\nelse:\nprint('Worker', threading.current_thread(), end=' ')\nprint('running with argument', arg)\ntime.sleep(0.5)\n# Create queue\nq = queue.Queue()\n# Start a pool of 5 workers\nfor i in range(5):\nt = threading.Thread(target=worker, name='worker %i' % (i+1))\nt.start()\n# Begin adding work to the queue\nfor i in range(50):\nq.put(i)\n# Give threads time to run\nprint('Main thread sleeping')\ntime.sleep(5)\nWhen run, this will produce the following output:\nRunning worker\nRunning worker\nRunning worker\nRunning worker\nRunning worker\nMain thread sleeping\nWorker running with argument 0\nWorker running with argument 1\nWorker running with argument 2\nWorker running with argument 3\nWorker running with argument 4\nWorker running with argument 5\n...\nConsult the module\u2019s documentation for more details; the Queue\nclass provides a featureful interface.\nWhat kinds of global value mutation are thread-safe?\u00b6\nA global interpreter lock (GIL) is used internally to ensure that only one\nthread runs in the Python VM at a time. In general, Python offers to switch\namong threads only between bytecode instructions; how frequently it switches can\nbe set via sys.setswitchinterval()\n. 
Each bytecode instruction and\ntherefore all the C implementation code reached from each instruction is\ntherefore atomic from the point of view of a Python program.\nIn theory, this means an exact accounting requires an exact understanding of the PVM bytecode implementation. In practice, it means that operations on shared variables of built-in data types (ints, lists, dicts, etc) that \u201clook atomic\u201d really are.\nFor example, the following operations are all atomic (L, L1, L2 are lists, D, D1, D2 are dicts, x, y are objects, i, j are ints):\nL.append(x)\nL1.extend(L2)\nx = L[i]\nx = L.pop()\nL1[i:j] = L2\nL.sort()\nx = y\nx.field = y\nD[x] = y\nD1.update(D2)\nD.keys()\nThese aren\u2019t:\ni = i+1\nL.append(L[-1])\nL[i] = L[j]\nD[x] = D[x] + 1\nOperations that replace other objects may invoke those other objects\u2019\n__del__()\nmethod when their reference count reaches zero, and that can\naffect things. This is especially true for the mass updates to dictionaries and\nlists. When in doubt, use a mutex!\nCan\u2019t we get rid of the Global Interpreter Lock?\u00b6\nThe global interpreter lock (GIL) is often seen as a hindrance to Python\u2019s deployment on high-end multiprocessor server machines, because a multi-threaded Python program effectively only uses one CPU, due to the insistence that (almost) all Python code can only run while the GIL is held.\nWith the approval of PEP 703 work is now underway to remove the GIL from the CPython implementation of Python. Initially it will be implemented as an optional compiler flag when building the interpreter, and so separate builds will be available with and without the GIL. Long-term, the hope is to settle on a single build, once the performance implications of removing the GIL are fully understood. 
Python 3.13 is likely to be the first release containing this work, although it may not be completely functional in this release.\nThe current work to remove the GIL is based on a fork of Python 3.9 with the GIL removed by Sam Gross. Prior to that, in the days of Python 1.5, Greg Stein actually implemented a comprehensive patch set (the \u201cfree threading\u201d patches) that removed the GIL and replaced it with fine-grained locking. Adam Olsen did a similar experiment in his python-safethread project. Unfortunately, both of these earlier experiments exhibited a sharp drop in single-thread performance (at least 30% slower), due to the amount of fine-grained locking necessary to compensate for the removal of the GIL. The Python 3.9 fork is the first attempt at removing the GIL with an acceptable performance impact.\nThe presence of the GIL in current Python releases\ndoesn\u2019t mean that you can\u2019t make good use of Python on multi-CPU machines!\nYou just have to be creative with dividing the work up between multiple\nprocesses rather than multiple threads. The\nProcessPoolExecutor\nclass in the new\nconcurrent.futures\nmodule provides an easy way of doing so; the\nmultiprocessing\nmodule provides a lower-level API in case you want\nmore control over dispatching of tasks.\nJudicious use of C extensions will also help; if you use a C extension to\nperform a time-consuming task, the extension can release the GIL while the\nthread of execution is in the C code and allow other threads to get some work\ndone. Some standard library modules such as zlib\nand hashlib\nalready do this.\nAn alternative approach to reducing the impact of the GIL is to make the GIL a per-interpreter-state lock rather than truly global. This was first implemented in Python 3.12 and is available in the C API. A Python interface to it is expected in Python 3.13. 
The main limitation to it at the moment is likely to be 3rd party extension modules, since these must be written with multiple interpreters in mind in order to be usable, so many older extension modules will not be usable.\nInput and Output\u00b6\nHow do I delete a file? (And other file questions\u2026)\u00b6\nUse os.remove(filename)\nor os.unlink(filename)\n; for documentation, see\nthe os\nmodule. The two functions are identical; unlink()\nis simply\nthe name of the Unix system call for this function.\nTo remove a directory, use os.rmdir()\n; use os.mkdir()\nto create one.\nos.makedirs(path)\nwill create any intermediate directories in path\nthat\ndon\u2019t exist. os.removedirs(path)\nwill remove intermediate directories as\nlong as they\u2019re empty; if you want to delete an entire directory tree and its\ncontents, use shutil.rmtree()\n.\nTo rename a file, use os.rename(old_path, new_path)\n.\nTo truncate a file, open it using f = open(filename, \"rb+\")\n, and use\nf.truncate(offset)\n; offset defaults to the current seek position. There\u2019s\nalso os.ftruncate(fd, offset)\nfor files opened with os.open()\n, where\nfd is the file descriptor (a small integer).\nThe shutil\nmodule also contains a number of functions to work on files\nincluding copyfile()\n, copytree()\n, and\nrmtree()\n.\nHow do I copy a file?\u00b6\nThe shutil\nmodule contains a copyfile()\nfunction.\nNote that on Windows NTFS volumes, it does not copy\nalternate data streams\nnor resource forks\non macOS HFS+ volumes, though both are now rarely used.\nIt also doesn\u2019t copy file permissions and metadata, though using\nshutil.copy2()\ninstead will preserve most (though not all) of it.\nHow do I read (or write) binary data?\u00b6\nTo read or write complex binary data formats, it\u2019s best to use the struct\nmodule. 
It allows you to take a string containing binary data (usually numbers)\nand convert it to Python objects, and vice versa.\nFor example, the following code reads two 2-byte integers and one 4-byte integer in big-endian format from a file:\nimport struct\nwith open(filename, \"rb\") as f:\n    s = f.read(8)\n    x, y, z = struct.unpack(\">hhl\", s)\nThe \u2018>\u2019 in the format string forces big-endian data; the letter \u2018h\u2019 reads one \u201cshort integer\u201d (2 bytes), and \u2018l\u2019 reads one \u201clong integer\u201d (4 bytes) from the string.\nFor data that is more regular (e.g. a homogeneous list of ints or floats),\nyou can also use the array\nmodule.\nI can\u2019t seem to use os.read() on a pipe created with os.popen(); why?\u00b6\nos.read()\nis a low-level function which takes a file descriptor, a small\ninteger representing the opened file. os.popen()\ncreates a high-level\nfile object, the same type returned by the built-in open()\nfunction.\nThus, to read n bytes from a pipe p created with os.popen()\n, you need to\nuse p.read(n)\n.\nHow do I access the serial (RS232) port?\u00b6\nFor Win32, OSX, Linux, BSD, Jython, IronPython:\nFor Unix, see a Usenet post by Mitch Chapman:\nWhy doesn\u2019t closing sys.stdout (stdin, stderr) really close it?\u00b6\nPython file objects are a high-level layer of abstraction on low-level C file descriptors.\nFor most file objects you create in Python via the built-in open()\nfunction, f.close()\nmarks the Python file object as being closed from\nPython\u2019s point of view, and also arranges to close the underlying C file\ndescriptor. This also happens automatically in f\n\u2019s destructor, when\nf\nbecomes garbage.\nBut stdin, stdout and stderr are treated specially by Python, because of the\nspecial status also given to them by C. 
Running sys.stdout.close()\nmarks\nthe Python-level file object as being closed, but does not close the\nassociated C file descriptor.\nTo close the underlying C file descriptor for one of these three, you should\nfirst be sure that\u2019s what you really want to do (e.g., you may confuse\nextension modules trying to do I/O). If it is, use os.close()\n:\nos.close(stdin.fileno())\nos.close(stdout.fileno())\nos.close(stderr.fileno())\nOr you can use the numeric constants 0, 1 and 2, respectively.\nNetwork/Internet Programming\u00b6\nWhat WWW tools are there for Python?\u00b6\nSee the chapters titled Internet Protocols and Support and Internet Data Handling in the Library Reference Manual. Python has many modules that will help you build server-side and client-side web systems.\nA summary of available frameworks is maintained by Paul Boddie at https://wiki.python.org/moin/WebProgramming.\nWhat module should I use to help with generating HTML?\u00b6\nYou can find a collection of useful links on the Web Programming wiki page.\nHow do I send mail from a Python script?\u00b6\nUse the standard library module smtplib\n.\nHere\u2019s a very simple interactive mail sender that uses it. This method will work on any host that supports an SMTP listener.\nimport sys, smtplib\nfromaddr = input(\"From: \")\ntoaddrs = input(\"To: \").split(',')\nprint(\"Enter message, end with ^D:\")\nmsg = ''\nwhile True:\n    line = sys.stdin.readline()\n    if not line:\n        break\n    msg += line\n# The actual mail send\nserver = smtplib.SMTP('localhost')\nserver.sendmail(fromaddr, toaddrs, msg)\nserver.quit()\nA Unix-only alternative uses sendmail. The location of the sendmail program\nvaries between systems; sometimes it is /usr/lib/sendmail\n, sometimes\n/usr/sbin/sendmail\n. The sendmail manual page will help you out. 
Here\u2019s\nsome sample code:\nimport os\nSENDMAIL = \"/usr/sbin/sendmail\" # sendmail location\np = os.popen(\"%s -t -i\" % SENDMAIL, \"w\")\np.write(\"To: receiver@example.com\\n\")\np.write(\"Subject: test\\n\")\np.write(\"\\n\") # blank line separating headers from body\np.write(\"Some text\\n\")\np.write(\"some more text\\n\")\n# close() on a popen file returns None on success, or the exit status on failure\nsts = p.close()\nif sts is not None:\n    print(\"Sendmail exit status\", sts)\nHow do I avoid blocking in the connect() method of a socket?\u00b6\nThe select\nmodule is commonly used to help with asynchronous I/O on\nsockets.\nTo prevent the TCP connect from blocking, you can set the socket to non-blocking\nmode. Then when you do the connect()\n,\nyou will either connect immediately\n(unlikely) or get an exception that contains the error number as .errno\n.\nerrno.EINPROGRESS\nindicates that the connection is in progress, but hasn\u2019t\nfinished yet. Different OSes will return different values, so you\u2019re going to\nhave to check what\u2019s returned on your system.\nYou can use the connect_ex()\nmethod\nto avoid creating an exception.\nIt will just return the errno value.\nTo poll, you can call connect_ex()\nagain later\n\u2013 0\nor errno.EISCONN\nindicate that you\u2019re connected \u2013 or you can pass this\nsocket to select.select()\nto check if it\u2019s writable.\nDatabases\u00b6\nAre there any interfaces to database packages in Python?\u00b6\nYes.\nInterfaces to disk-based hashes such as DBM\nand GDBM\nare also included with standard Python. There is also the\nsqlite3\nmodule, which provides a lightweight disk-based relational\ndatabase.\nSupport for most relational databases is available. 
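As a concrete taste of the bundled sqlite3 module, here is a minimal sketch using an in-memory database (the table and rows are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # in-memory database; pass a filename to persist
con.execute("CREATE TABLE person (name TEXT, age INTEGER)")
# "?" placeholders let the driver quote values safely
con.execute("INSERT INTO person VALUES (?, ?)", ("Alice", 30))
for row in con.execute("SELECT name, age FROM person"):
    print(row)      # ('Alice', 30)
con.close()
```

The same code works against an on-disk database file, with no server process to install or configure.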
See the DatabaseProgramming wiki page for details.\nHow do you implement persistent objects in Python?\u00b6\nThe pickle\nlibrary module solves this in a very general way (though you\nstill can\u2019t store things like open files, sockets or windows), and the\nshelve\nlibrary module uses pickle and (g)dbm to create persistent\nmappings containing arbitrary Python objects.\nMathematics and Numerics\u00b6\nHow do I generate random numbers in Python?\u00b6\nThe standard module random\nimplements a random number generator. Usage\nis simple:\nimport random\nrandom.random()\nThis returns a random floating-point number in the range [0, 1).\nThere are also many other specialized generators in this module, such as:\nrandrange(a, b)\nchooses an integer in the range [a, b).\nuniform(a, b)\nchooses a floating-point number in the range [a, b).\nnormalvariate(mean, sdev)\nsamples the normal (Gaussian) distribution.\nSome higher-level functions operate on sequences directly, such as:\nchoice(S)\nchooses a random element from a given sequence.\nshuffle(L)\nshuffles a list in-place, i.e. 
permutes it randomly.\nThere\u2019s also a Random\nclass you can instantiate to create independent\nmultiple random number generators.", "code_snippets": ["\n", "\n", "\n", "\n", " ", " ", "\n", " ", "\n", " ", "\n ", "\n", " ", " ", " ", "\n ", "\n", " ", " ", " ", "\n ", "\n", "\n\n", " ", "\n ", " ", " ", " ", "\n ", " ", "\n\n", " ", " ", " ", "\n ", " ", " ", " ", " ", "\n ", "\n\n", " ", "\n", " ", "\n ", " ", "\n ", " ", " ", " ", "\n ", " ", "\n\n", " ", " ", " ", "\n ", " ", " ", " ", " ", "\n ", "\n\n", "\n", "\n\n", "\n", "\n", "\n", "\n ", "\n ", "\n ", " ", "\n ", "\n ", " ", " ", "\n ", " ", "\n ", " ", " ", "\n ", "\n ", "\n ", "\n ", " ", " ", "\n ", " ", "\n ", "\n\n", "\n", " ", " ", "\n\n", "\n", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", "\n ", "\n\n", "\n", " ", " ", " ", "\n ", "\n\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", " ", " ", "\n", " ", " ", " ", " ", "\n", "\n\n", " ", " ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", " ", " ", "\n", "\n", "\n", "\n", "\n\n", " ", " ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", "\n ", " ", " ", "\n\n", "\n", " ", " ", "\n", " ", " ", "\n", "\n", "\n\n", " ", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", "\n", "\n", " ", "\n", "\n", "\n", " ", " ", "\n", " ", " ", " ", "\n ", " ", "\n", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 5270} +{"url": "https://docs.python.org/3/howto/sockets.html", "title": "Socket Programming HOWTO", "content": "Socket Programming HOWTO\u00b6\n- Author:\nGordon McMillan\nSockets\u00b6\nI\u2019m only going to talk about INET (i.e. IPv4) sockets, but they account for at least 99% of the sockets in use. And I\u2019ll only talk about STREAM (i.e. 
TCP) sockets - unless you really know what you\u2019re doing (in which case this HOWTO isn\u2019t for you!), you\u2019ll get better behavior and performance from a STREAM socket than anything else. I will try to clear up the mystery of what a socket is, as well as some hints on how to work with blocking and non-blocking sockets. But I\u2019ll start by talking about blocking sockets. You\u2019ll need to know how they work before dealing with non-blocking sockets.\nPart of the trouble with understanding these things is that \u201csocket\u201d can mean a number of subtly different things, depending on context. So first, let\u2019s make a distinction between a \u201cclient\u201d socket - an endpoint of a conversation, and a \u201cserver\u201d socket, which is more like a switchboard operator. The client application (your browser, for example) uses \u201cclient\u201d sockets exclusively; the web server it\u2019s talking to uses both \u201cserver\u201d sockets and \u201cclient\u201d sockets.\nHistory\u00b6\nOf the various forms of IPC, sockets are by far the most popular. On any given platform, there are likely to be other forms of IPC that are faster, but for cross-platform communication, sockets are about the only game in town.\nThey were invented in Berkeley as part of the BSD flavor of Unix. They spread like wildfire with the internet. With good reason \u2014 the combination of sockets with INET makes talking to arbitrary machines around the world unbelievably easy (at least compared to other schemes).\nCreating a Socket\u00b6\nRoughly speaking, when you clicked on the link that brought you to this page, your browser did something like the following:\n# create an INET, STREAMing socket\ns = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n# now connect to the web server on port 80 - the normal http port\ns.connect((\"www.python.org\", 80))\nWhen the connect\ncompletes, the socket s\ncan be used to send\nin a request for the text of the page. 
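As a self-contained variant of that exchange, the sketch below stands up a throwaway server on localhost so the client side can actually run offline; the request and reply bytes are invented for illustration:

```python
import socket
import threading

# A throwaway one-shot "server" on localhost, so the client code can run
# without touching the real network. Port 0 lets the OS pick a free port.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
host, port = listener.getsockname()

def serve_once():
    conn, addr = listener.accept()
    conn.recv(1024)                      # read (and ignore) the request
    conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello")
    conn.close()

threading.Thread(target=serve_once).start()

# The client side: the same shape as the snippet above, just a different port.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((host, port))
s.sendall(b"GET / HTTP/1.0\r\n\r\n")
chunks = []
while True:                              # read until the server closes
    data = s.recv(4096)
    if not data:
        break
    chunks.append(data)
s.close()
listener.close()
reply = b"".join(chunks)
print(reply)    # b'HTTP/1.0 200 OK\r\n\r\nhello'
```

The read-until-zero-bytes loop anticipates a point made later in this HOWTO: recv may return the reply in pieces, and a return of b'' means the other side has closed.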
The same socket will read the\nreply, and then be destroyed. That\u2019s right, destroyed. Client sockets\nare normally only used for one exchange (or a small set of sequential\nexchanges).\nWhat happens in the web server is a bit more complex. First, the web server creates a \u201cserver socket\u201d:\n# create an INET, STREAMing socket\nserversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n# bind the socket to a public host, and a well-known port\nserversocket.bind((socket.gethostname(), 80))\n# become a server socket\nserversocket.listen(5)\nA couple things to notice: we used socket.gethostname()\nso that the socket\nwould be visible to the outside world. If we had used s.bind(('localhost',\n80))\nor s.bind(('127.0.0.1', 80))\nwe would still have a \u201cserver\u201d socket,\nbut one that was only visible within the same machine. s.bind(('', 80))\nspecifies that the socket is reachable by any address the machine happens to\nhave.\nA second thing to note: low number ports are usually reserved for \u201cwell known\u201d services (HTTP, SNMP etc). If you\u2019re playing around, use a nice high number (4 digits).\nFinally, the argument to listen\ntells the socket library that we want it to\nqueue up as many as 5 connect requests (the normal max) before refusing outside\nconnections. 
If the rest of the code is written properly, that should be plenty.\nNow that we have a \u201cserver\u201d socket, listening on port 80, we can enter the mainloop of the web server:\nwhile True:\n    # accept connections from outside\n    (clientsocket, address) = serversocket.accept()\n    # now do something with the clientsocket\n    # in this case, we'll pretend this is a threaded server\n    ct = make_client_thread(clientsocket)\n    ct.start()\nThere are actually three general ways in which this loop could work - dispatching a\nthread to handle clientsocket\n, creating a new process to handle\nclientsocket\n, or restructuring this app to use non-blocking sockets and\nmultiplexing between our \u201cserver\u201d socket and any active clientsocket\ns using\nselect\n. More about that later. The important thing to understand now is\nthis: this is all a \u201cserver\u201d socket does. It doesn\u2019t send any data. It doesn\u2019t\nreceive any data. It just produces \u201cclient\u201d sockets. Each clientsocket\nis\ncreated in response to some other \u201cclient\u201d socket doing a connect()\nto the\nhost and port we\u2019re bound to. As soon as we\u2019ve created that clientsocket\n, we\ngo back to listening for more connections. The two \u201cclients\u201d are free to chat it\nup - they are using some dynamically allocated port which will be recycled when\nthe conversation ends.\nIPC\u00b6\nIf you need fast IPC between two processes on one machine, you should look into\npipes or shared memory. If you do decide to use AF_INET sockets, bind the\n\u201cserver\u201d socket to 'localhost'\n. On most platforms, this will take a\nshortcut around a couple of layers of network code and be quite a bit faster.\nSee also\nThe multiprocessing\nmodule integrates cross-platform IPC into a higher-level\nAPI.\nUsing a Socket\u00b6\nThe first thing to note is that the web browser\u2019s \u201cclient\u201d socket and the web\nserver\u2019s \u201cclient\u201d socket are identical beasts. 
That is, this is a \u201cpeer to peer\u201d\nconversation. Or to put it another way, as the designer, you will have to\ndecide what the rules of etiquette are for a conversation. Normally, the\nconnect\ning socket starts the conversation, by sending in a request, or\nperhaps a signon. But that\u2019s a design decision - it\u2019s not a rule of sockets.\nNow there are two sets of verbs to use for communication. You can use send\nand recv\n, or you can transform your client socket into a file-like beast and\nuse read\nand write\n. The latter is the way Java presents its sockets.\nI\u2019m not going to talk about it here, except to warn you that you need to use\nflush\non sockets. These are buffered \u201cfiles\u201d, and a common mistake is to\nwrite\nsomething, and then read\nfor a reply. Without a flush\nin\nthere, you may wait forever for the reply, because the request may still be in\nyour output buffer.\nNow we come to the major stumbling block of sockets - send\nand recv\noperate\non the network buffers. They do not necessarily handle all the bytes you hand\nthem (or expect from them), because their major focus is handling the network\nbuffers. In general, they return when the associated network buffers have been\nfilled (send\n) or emptied (recv\n). They then tell you how many bytes they\nhandled. It is your responsibility to call them again until your message has\nbeen completely dealt with.\nWhen a recv\nreturns 0 bytes, it means the other side has closed (or is in\nthe process of closing) the connection. You will not receive any more data on\nthis connection. Ever. You may be able to send data successfully; I\u2019ll talk\nmore about this later.\nA protocol like HTTP uses a socket for only one transfer. The client sends a request, then reads a reply. That\u2019s it. The socket is discarded. 
This means that a client can detect the end of the reply by receiving 0 bytes.\nBut if you plan to reuse your socket for further transfers, you need to realize\nthat there is no EOT on a socket. I repeat: if a socket\nsend\nor recv\nreturns after handling 0 bytes, the connection has been\nbroken. If the connection has not been broken, you may wait on a recv\nforever, because the socket will not tell you that there\u2019s nothing more to\nread (for now). Now if you think about that a bit, you\u2019ll come to realize a\nfundamental truth of sockets: messages must either be fixed length (yuck), or\nbe delimited (shrug), or indicate how long they are (much better), or end by\nshutting down the connection. The choice is entirely yours, (but some ways are\nrighter than others).\nAssuming you don\u2019t want to end the connection, the simplest solution is a fixed length message:\nclass MySocket:\n    \"\"\"demonstration class only\n    - coded for clarity, not efficiency\n    \"\"\"\n\n    def __init__(self, sock=None):\n        if sock is None:\n            self.sock = socket.socket(\n                socket.AF_INET, socket.SOCK_STREAM)\n        else:\n            self.sock = sock\n\n    def connect(self, host, port):\n        self.sock.connect((host, port))\n\n    def mysend(self, msg):\n        totalsent = 0\n        while totalsent < MSGLEN:\n            sent = self.sock.send(msg[totalsent:])\n            if sent == 0:\n                raise RuntimeError(\"socket connection broken\")\n            totalsent = totalsent + sent\n\n    def myreceive(self):\n        chunks = []\n        bytes_recd = 0\n        while bytes_recd < MSGLEN:\n            chunk = self.sock.recv(min(MSGLEN - bytes_recd, 2048))\n            if chunk == b'':\n                raise RuntimeError(\"socket connection broken\")\n            chunks.append(chunk)\n            bytes_recd = bytes_recd + len(chunk)\n        return b''.join(chunks)\nThe sending code here is usable for almost any messaging scheme - in Python you\nsend strings, and you can use len()\nto determine its length (even if it has\nembedded \\0\ncharacters). It\u2019s mostly the receiving code that gets more\ncomplex. 
(And in C, it\u2019s not much worse, except you can\u2019t use strlen\nif the\nmessage has embedded \\0\ns.)\nThe easiest enhancement is to make the first character of the message an\nindicator of message type, and have the type determine the length. Now you have\ntwo recv\ns - the first to get (at least) that first character so you can\nlook up the length, and the second in a loop to get the rest. If you decide to\ngo the delimited route, you\u2019ll be receiving in some arbitrary chunk size, (4096\nor 8192 is frequently a good match for network buffer sizes), and scanning what\nyou\u2019ve received for a delimiter.\nOne complication to be aware of: if your conversational protocol allows multiple\nmessages to be sent back to back (without some kind of reply), and you pass\nrecv\nan arbitrary chunk size, you may end up reading the start of a\nfollowing message. You\u2019ll need to put that aside and hold onto it, until it\u2019s\nneeded.\nPrefixing the message with its length (say, as 5 numeric characters) gets more\ncomplex, because (believe it or not), you may not get all 5 characters in one\nrecv\n. In playing around, you\u2019ll get away with it; but in high network loads,\nyour code will very quickly break unless you use two recv\nloops - the first\nto determine the length, the second to get the data part of the message. Nasty.\nThis is also when you\u2019ll discover that send\ndoes not always manage to get\nrid of everything in one pass. And despite having read this, you will eventually\nget bit by it!\nIn the interests of space, building your character, (and preserving my competitive position), these enhancements are left as an exercise for the reader. Let\u2019s move on to cleaning up.\nBinary Data\u00b6\nIt is perfectly possible to send binary data over a socket. The major problem is\nthat not all machines use the same formats for binary data. 
For example,\nnetwork byte order\nis big-endian, with the most significant byte first,\nso a 16 bit integer with the value 1\nwould be the two hex bytes 00 01\n.\nHowever, most common processors (x86/AMD64, ARM, RISC-V), are little-endian,\nwith the least significant byte first - that same 1\nwould be 01 00\n.\nSocket libraries have calls for converting 16 and 32 bit integers - ntohl,\nhtonl, ntohs, htons\nwhere \u201cn\u201d means network and \u201ch\u201d means host, \u201cs\u201d means\nshort and \u201cl\u201d means long. Where network order is host order, these do\nnothing, but where the machine is byte-reversed, these swap the bytes around\nappropriately.\nIn these days of 64-bit machines, the ASCII representation of binary data is\nfrequently smaller than the binary representation. That\u2019s because a surprising\namount of the time, most integers have the value 0, or maybe 1.\nThe string \"0\"\nwould be two bytes, while a full 64-bit integer would be 8.\nOf course, this doesn\u2019t fit well with fixed-length messages.\nDecisions, decisions.\nDisconnecting\u00b6\nStrictly speaking, you\u2019re supposed to use shutdown\non a socket before you\nclose\nit. The shutdown\nis an advisory to the socket at the other end.\nDepending on the argument you pass it, it can mean \u201cI\u2019m not going to send\nanymore, but I\u2019ll still listen\u201d, or \u201cI\u2019m not listening, good riddance!\u201d. Most\nsocket libraries, however, are so used to programmers neglecting to use this\npiece of etiquette that normally a close\nis the same as shutdown();\nclose()\n. So in most situations, an explicit shutdown\nis not needed.\nOne way to use shutdown\neffectively is in an HTTP-like exchange. The client\nsends a request and then does a shutdown(1)\n. This tells the server \u201cThis\nclient is done sending, but can still receive.\u201d The server can detect \u201cEOF\u201d by\na receive of 0 bytes. It can assume it has the complete request. The server\nsends a reply. 
If the send\ncompletes successfully then, indeed, the client\nwas still receiving.\nPython takes the automatic shutdown a step further, and says that when a socket\nis garbage collected, it will automatically do a close\nif it\u2019s needed. But\nrelying on this is a very bad habit. If your socket just disappears without\ndoing a close\n, the socket at the other end may hang indefinitely, thinking\nyou\u2019re just being slow. Please close\nyour sockets when you\u2019re done.\nWhen Sockets Die\u00b6\nProbably the worst thing about using blocking sockets is what happens when the\nother side comes down hard (without doing a close\n). Your socket is likely to\nhang. TCP is a reliable protocol, and it will wait a long, long time\nbefore giving up on a connection. If you\u2019re using threads, the entire thread is\nessentially dead. There\u2019s not much you can do about it. As long as you aren\u2019t\ndoing something dumb, like holding a lock while doing a blocking read, the\nthread isn\u2019t really consuming much in the way of resources. Do not try to kill\nthe thread - part of the reason that threads are more efficient than processes\nis that they avoid the overhead associated with the automatic recycling of\nresources. In other words, if you do manage to kill the thread, your whole\nprocess is likely to be screwed up.\nNon-blocking Sockets\u00b6\nIf you\u2019ve understood the preceding, you already know most of what you need to know about the mechanics of using sockets. You\u2019ll still use the same calls, in much the same ways. It\u2019s just that, if you do it right, your app will be almost inside-out.\nIn Python, you use socket.setblocking(False)\nto make it non-blocking. In C, it\u2019s\nmore complex, (for one thing, you\u2019ll need to choose between the BSD flavor\nO_NONBLOCK\nand the almost indistinguishable POSIX flavor O_NDELAY\n, which\nis completely different from TCP_NODELAY\n), but it\u2019s the exact same idea. 
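In Python, that one call is all it takes (a sketch; the socket is never actually connected here):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setblocking(False)          # from now on, send/recv/connect/accept won't block
blocking = s.getblocking()    # False; setblocking(False) is settimeout(0.0)
print(blocking)               # False
s.close()
```

The flag must be set before the calls whose behavior you want to change; an already-blocked call is not interrupted by it.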
You\ndo this after creating the socket, but before using it. (Actually, if you\u2019re\nnuts, you can switch back and forth.)\nThe major mechanical difference is that send\n, recv\n, connect\nand\naccept\ncan return without having done anything. You have (of course) a\nnumber of choices. You can check return code and error codes and generally drive\nyourself crazy. If you don\u2019t believe me, try it sometime. Your app will grow\nlarge, buggy and suck CPU. So let\u2019s skip the brain-dead solutions and do it\nright.\nUse select\n.\nIn C, coding select\nis fairly complex. In Python, it\u2019s a piece of cake, but\nit\u2019s close enough to the C version that if you understand select\nin Python,\nyou\u2019ll have little trouble with it in C:\nready_to_read, ready_to_write, in_error = \\\n    select.select(\n        potential_readers,\n        potential_writers,\n        potential_errs,\n        timeout)\nYou pass select\nthree lists: the first contains all sockets that you might\nwant to try reading; the second all the sockets you might want to try writing\nto, and the last (normally left empty) those that you want to check for errors.\nYou should note that a socket can go into more than one list. The select\ncall is blocking, but you can give it a timeout. This is generally a sensible\nthing to do - give it a nice long timeout (say a minute) unless you have good\nreason to do otherwise.\nIn return, you will get three lists. They contain the sockets that are actually readable, writable and in error. Each of these lists is a subset (possibly empty) of the corresponding list you passed in.\nIf a socket is in the output readable list, you can be\nas-close-to-certain-as-we-ever-get-in-this-business that a recv\non that\nsocket will return something. Same idea for the writable list. You\u2019ll be able\nto send something. Maybe not all you want to, but something is better than\nnothing. 
(Actually, any reasonably healthy socket will return as writable - it\njust means outbound network buffer space is available.)\nIf you have a \u201cserver\u201d socket, put it in the potential_readers list. If it comes\nout in the readable list, your accept\nwill (almost certainly) work. If you\nhave created a new socket to connect\nto someone else, put it in the\npotential_writers list. If it shows up in the writable list, you have a decent\nchance that it has connected.\nActually, select\ncan be handy even with blocking sockets. It\u2019s one way of\ndetermining whether you will block - the socket returns as readable when there\u2019s\nsomething in the buffers. However, this still doesn\u2019t help with the problem of\ndetermining whether the other end is done, or just busy with something else.\nPortability alert: On Unix, select\nworks both with the sockets and\nfiles. Don\u2019t try this on Windows. On Windows, select\nworks with sockets\nonly. Also note that in C, many of the more advanced socket options are done\ndifferently on Windows. 
In fact, on Windows I usually use threads (which work\nvery, very well) with my sockets.", "code_snippets": ["\n", " ", " ", " ", "\n", "\n", " ", "\n", "\n", " ", " ", " ", "\n", "\n", " ", "\n", "\n", "\n", " ", "\n ", "\n ", " ", " ", " ", "\n ", "\n ", "\n ", " ", " ", "\n ", "\n", "\n", "\n", "\n", "\n\n ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", "\n ", " ", "\n ", "\n ", " ", " ", "\n\n ", " ", " ", "\n ", " ", "\n\n ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", "\n ", " ", " ", " ", " ", "\n\n ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", "\n ", "\n ", " ", " ", " ", " ", "\n ", " ", "\n", " ", " ", " ", " \\\n ", "\n ", "\n ", "\n ", "\n ", "\n"], "language": "Python", "source": "python.org", "token_count": 4271} +{"url": "https://docs.python.org/3/whatsnew/2.6.html", "title": "What\u2019s New in Python 2.6", "content": "What\u2019s New in Python 2.6\u00b6\n- Author:\nA.M. Kuchling (amk at amk.ca)\nThis article explains the new features in Python 2.6, released on October 1, 2008. The release schedule is described in PEP 361.\nThe major theme of Python 2.6 is preparing the migration path to\nPython 3.0, a major redesign of the language. Whenever possible,\nPython 2.6 incorporates new features and syntax from 3.0 while\nremaining compatible with existing code by not removing older features\nor syntax. When it\u2019s not possible to do that, Python 2.6 tries to do\nwhat it can, adding compatibility functions in a\nfuture_builtins\nmodule and a -3\nswitch to warn about\nusages that will become unsupported in 3.0.\nSome significant new packages have been added to the standard library,\nsuch as the multiprocessing\nand json\nmodules, but\nthere aren\u2019t many new features that aren\u2019t related to Python 3.0 in\nsome way.\nPython 2.6 also sees a number of improvements and bugfixes throughout the source. 
A search through the change logs finds there were 259 patches applied and 612 bugs fixed between Python 2.5 and 2.6. Both figures are likely to be underestimates.\nThis article doesn\u2019t attempt to provide a complete specification of the new features, but instead provides a convenient overview. For full details, you should refer to the documentation for Python 2.6. If you want to understand the rationale for the design and implementation, refer to the PEP for a particular new feature. Whenever possible, \u201cWhat\u2019s New in Python\u201d links to the bug/patch item for each change.\nPython 3.0\u00b6\nThe development cycle for Python versions 2.6 and 3.0 was synchronized, with the alpha and beta releases for both versions being made on the same days. The development of 3.0 has influenced many features in 2.6.\nPython 3.0 is a far-ranging redesign of Python that breaks compatibility with the 2.x series. This means that existing Python code will need some conversion in order to run on Python 3.0. However, not all the changes in 3.0 necessarily break compatibility. In cases where new features won\u2019t cause existing code to break, they\u2019ve been backported to 2.6 and are described in this document in the appropriate place. Some of the 3.0-derived features are:\nA\n__complex__()\nmethod for converting objects to a complex number.\nAlternate syntax for catching exceptions:\nexcept TypeError as exc\n.\nThe addition of\nfunctools.reduce()\nas a synonym for the built-in\nreduce()\nfunction.\nPython 3.0 adds several new built-in functions and changes the\nsemantics of some existing builtins. Functions that are new in 3.0\nsuch as bin()\nhave simply been added to Python 2.6, but existing\nbuiltins haven\u2019t been changed; instead, the future_builtins\nmodule has versions with the new 3.0 semantics. 
Code written to be\ncompatible with 3.0 can do from future_builtins import hex, map\nas\nnecessary.\nA new command-line switch, -3\n, enables warnings\nabout features that will be removed in Python 3.0. You can run code\nwith this switch to see how much work will be necessary to port\ncode to 3.0. The value of this switch is available\nto Python code as the boolean variable sys.py3kwarning\n,\nand to C extension code as Py_Py3kWarningFlag\n.\nChanges to the Development Process\u00b6\nWhile 2.6 was being developed, the Python development process underwent two significant changes: we switched from SourceForge\u2019s issue tracker to a customized Roundup installation, and the documentation was converted from LaTeX to reStructuredText.\nNew Issue Tracker: Roundup\u00b6\nFor a long time, the Python developers had been growing increasingly annoyed by SourceForge\u2019s bug tracker. SourceForge\u2019s hosted solution doesn\u2019t permit much customization; for example, it wasn\u2019t possible to customize the life cycle of issues.\nThe infrastructure committee of the Python Software Foundation therefore posted a call for issue trackers, asking volunteers to set up different products and import some of the bugs and patches from SourceForge. Four different trackers were examined: Jira, Launchpad, Roundup, and Trac. The committee eventually settled on Jira and Roundup as the two candidates. Jira is a commercial product that offers no-cost hosted instances to free-software projects; Roundup is an open-source project that requires volunteers to administer it and a server to host it.\nAfter posting a call for volunteers, a new Roundup installation was set up at https://bugs.python.org. One installation of Roundup can host multiple trackers, and this server now also hosts issue trackers for Jython and for the Python web site. It will surely find other uses in the future. 
Where possible, this edition of \u201cWhat\u2019s New in Python\u201d links to the bug/patch item for each change.\nHosting of the Python bug tracker is kindly provided by\nUpfront Systems\nof Stellenbosch, South Africa. Martin von L\u00f6wis put a\nlot of effort into importing existing bugs and patches from\nSourceForge; his scripts for this import operation are at\nhttps://svn.python.org/view/tracker/importer/\nand may be useful to\nother projects wishing to move from SourceForge to Roundup.\nSee also\n- https://bugs.python.org\nThe Python bug tracker.\n- https://bugs.jython.org:\nThe Jython bug tracker.\n- https://roundup.sourceforge.io/\nRoundup downloads and documentation.\n- https://svn.python.org/view/tracker/importer/\nMartin von L\u00f6wis\u2019s conversion scripts.\nNew Documentation Format: reStructuredText Using Sphinx\u00b6\nThe Python documentation was written using LaTeX since the project started around 1989. In the 1980s and early 1990s, most documentation was printed out for later study, not viewed online. LaTeX was widely used because it provided attractive printed output while remaining straightforward to write once the basic rules of the markup were learned.\nToday LaTeX is still used for writing publications destined for printing, but the landscape for programming tools has shifted. We no longer print out reams of documentation; instead, we browse through it online and HTML has become the most important format to support. Unfortunately, converting LaTeX to HTML is fairly complicated and Fred L. Drake Jr., the long-time Python documentation editor, spent a lot of time maintaining the conversion process. Occasionally people would suggest converting the documentation into SGML and later XML, but performing a good conversion is a major task and no one ever committed the time required to finish the job.\nDuring the 2.6 development cycle, Georg Brandl put a lot of effort into building a new toolchain for processing the documentation. 
The resulting package is called Sphinx, and is available from https://www.sphinx-doc.org/.\nSphinx concentrates on HTML output, producing attractively styled and modern HTML; printed output is still supported through conversion to LaTeX. The input format is reStructuredText, a markup syntax supporting custom extensions and directives that is commonly used in the Python community.\nSphinx is a standalone package that can be used for writing, and almost two dozen other projects (listed on the Sphinx web site) have adopted Sphinx as their documentation tool.\nSee also\n- Documenting Python\nDescribes how to write for Python\u2019s documentation.\n- Sphinx\nDocumentation and code for the Sphinx toolchain.\n- Docutils\nThe underlying reStructuredText parser and toolset.\nPEP 343: The \u2018with\u2019 statement\u00b6\nThe previous version, Python 2.5, added the \u2018with\n\u2019\nstatement as an optional feature, to be enabled by a from __future__\nimport with_statement\ndirective. In 2.6 the statement no longer needs to\nbe specially enabled; this means that with\nis now always a\nkeyword. The rest of this section is a copy of the corresponding\nsection from the \u201cWhat\u2019s New in Python 2.5\u201d document; if you\u2019re\nfamiliar with the \u2018with\n\u2019 statement\nfrom Python 2.5, you can skip this section.\nThe \u2018with\n\u2019 statement clarifies code that previously would use\ntry...finally\nblocks to ensure that clean-up code is executed. In this\nsection, I\u2019ll discuss the statement as it will commonly be used. 
In the next section, I'll examine the implementation details and show how to write objects for use with this statement.

The 'with' statement is a control-flow structure whose basic structure is:

    with expression [as variable]:
        with-block

The expression is evaluated, and it should result in an object that supports the context management protocol (that is, has __enter__() and __exit__() methods).

The object's __enter__() is called before with-block is executed and therefore can run set-up code. It also may return a value that is bound to the name variable, if given. (Note carefully that variable is not assigned the result of expression.)

After execution of the with-block is finished, the object's __exit__() method is called, even if the block raised an exception, and can therefore run clean-up code.

Some standard Python objects now support the context management protocol and can be used with the 'with' statement. File objects are one example:

    with open('/etc/passwd', 'r') as f:
        for line in f:
            print line
        ... more processing code ...

After this statement has executed, the file object in f will have been automatically closed, even if the for loop raised an exception part-way through the block.

Note: In this case, f is the same object created by open(), because __enter__() returns self.

The threading module's locks and condition variables also support the 'with' statement:

    lock = threading.Lock()
    with lock:
        # Critical section of code
        ...

The lock is acquired before the block is executed and always released once the block is complete.

The localcontext() function in the decimal module makes it easy to save and restore the current decimal context, which encapsulates the desired precision and rounding characteristics for computations:

    from decimal import Decimal, Context, localcontext

    # Displays with default precision of 28 digits
    v = Decimal('578')
    print v.sqrt()

    with localcontext(Context(prec=16)):
        # All code in this block uses a precision of 16 digits.
        # The original context is restored on exiting the block.
        print v.sqrt()

Writing Context Managers

Under the hood, the 'with' statement is fairly complicated. Most people will only use 'with' in company with existing objects and don't need to know these details, so you can skip the rest of this section if you like. Authors of new objects will need to understand the details of the underlying implementation and should keep reading.

A high-level explanation of the context management protocol is:

- The expression is evaluated and should result in an object called a "context manager". The context manager must have __enter__() and __exit__() methods.
- The context manager's __enter__() method is called. The value returned is assigned to VAR.
If no as VAR clause is present, the value is simply discarded.
- The code in BLOCK is executed.
- If BLOCK raises an exception, the context manager's __exit__() method is called with three arguments, the exception details (type, value, traceback, the same values returned by sys.exc_info(), which can also be None if no exception occurred). The method's return value controls whether an exception is re-raised: any false value re-raises the exception, and True will result in suppressing it. You'll only rarely want to suppress the exception, because if you do the author of the code containing the 'with' statement will never realize anything went wrong.
- If BLOCK didn't raise an exception, the __exit__() method is still called, but type, value, and traceback are all None.

Let's think through an example. I won't present detailed code but will only sketch the methods necessary for a database that supports transactions.

(For people unfamiliar with database terminology: a set of changes to the database are grouped into a transaction. Transactions can be either committed, meaning that all the changes are written into the database, or rolled back, meaning that the changes are all discarded and the database is unchanged. See any database textbook for more information.)

Let's assume there's an object representing a database connection. Our goal will be to let the user write code like this:

    db_connection = DatabaseConnection()
    with db_connection as cursor:
        cursor.execute('insert into ...')
        cursor.execute('delete from ...')
        # ... more operations ...

The transaction should be committed if the code in the block runs flawlessly or rolled back if there's an exception.
Here's the basic interface for DatabaseConnection that I'll assume:

    class DatabaseConnection:
        # Database interface
        def cursor(self):
            "Returns a cursor object and starts a new transaction"
        def commit(self):
            "Commits current transaction"
        def rollback(self):
            "Rolls back current transaction"

The __enter__() method is pretty easy, having only to start a new transaction. For this application the resulting cursor object would be a useful result, so the method will return it. The user can then add as cursor to their 'with' statement to bind the cursor to a variable name.

    class DatabaseConnection:
        ...
        def __enter__(self):
            # Code to start a new transaction
            cursor = self.cursor()
            return cursor

The __exit__() method is the most complicated because it's where most of the work has to be done. The method has to check if an exception occurred. If there was no exception, the transaction is committed. The transaction is rolled back if there was an exception.

In the code below, execution will just fall off the end of the function, returning the default value of None. None is false, so the exception will be re-raised automatically. If you wished, you could be more explicit and add a return statement at the marked location.

    class DatabaseConnection:
        ...
        def __exit__(self, type, value, tb):
            if tb is None:
                # No exception, so commit
                self.commit()
            else:
                # Exception occurred, so rollback.
                self.rollback()
                # return False

The contextlib module

The contextlib module provides some functions and a decorator that are useful when writing objects for use with the 'with' statement.

The decorator is called contextmanager(), and lets you write a single generator function instead of defining a new class. The generator should yield exactly one value.
The code up to the yield will be executed as the __enter__() method, and the value yielded will be the method's return value that will get bound to the variable in the 'with' statement's as clause, if any. The code after the yield will be executed in the __exit__() method. Any exception raised in the block will be raised by the yield statement.

Using this decorator, our database example from the previous section could be written as:

    from contextlib import contextmanager

    @contextmanager
    def db_transaction(connection):
        cursor = connection.cursor()
        try:
            yield cursor
        except:
            connection.rollback()
            raise
        else:
            connection.commit()

    db = DatabaseConnection()
    with db_transaction(db) as cursor:
        ...

The contextlib module also has a nested(mgr1, mgr2, ...) function that combines a number of context managers so you don't need to write nested 'with' statements. In this example, the single 'with' statement both starts a database transaction and acquires a thread lock:

    lock = threading.Lock()
    with nested(db_transaction(db), lock) as (cursor, locked):
        ...

Finally, the closing() function returns its argument so that it can be bound to a variable, and calls the argument's .close() method at the end of the block.

    import urllib, sys
    from contextlib import closing

    with closing(urllib.urlopen('http://www.yahoo.com')) as f:
        for line in f:
            sys.stdout.write(line)

See also

- PEP 343 - The "with" statement: PEP written by Guido van Rossum and Nick Coghlan; implemented by Mike Bland, Guido van Rossum, and Neal Norwitz.
The PEP shows the code generated for a \u2018\nwith\n\u2019 statement, which can be helpful in learning how the statement works.\nThe documentation for the contextlib\nmodule.\nPEP 366: Explicit Relative Imports From a Main Module\u00b6\nPython\u2019s -m\nswitch allows running a module as a script.\nWhen you ran a module that was located inside a package, relative\nimports didn\u2019t work correctly.\nThe fix for Python 2.6 adds a module.__package__\nattribute.\nWhen this attribute is present, relative imports will be\nrelative to the value of this attribute instead of the\n__name__\nattribute.\nPEP 302-style importers can then set __package__\nas necessary.\nThe runpy\nmodule that implements the -m\nswitch now\ndoes this, so relative imports will now work correctly in scripts\nrunning from inside a package.\nPEP 370: Per-user site-packages\nDirectory\u00b6\nWhen you run Python, the module search path sys.path\nusually\nincludes a directory whose path ends in \"site-packages\"\n. This\ndirectory is intended to hold locally installed packages available to\nall users using a machine or a particular site installation.\nPython 2.6 introduces a convention for user-specific site directories. The directory varies depending on the platform:\nUnix and Mac OS X:\n~/.local/\nWindows:\n%APPDATA%/Python\nWithin this directory, there will be version-specific subdirectories,\nsuch as lib/python2.6/site-packages\non Unix/Mac OS and\nPython26/site-packages\non Windows.\nIf you don\u2019t like the default directory, it can be overridden by an\nenvironment variable. PYTHONUSERBASE\nsets the root\ndirectory used for all Python versions supporting this feature. On\nWindows, the directory for application-specific data can be changed by\nsetting the APPDATA\nenvironment variable. 
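The per-user directories described above can also be inspected programmatically: the site module gained USER_BASE and USER_SITE constants in 2.6 that expose the computed locations. A minimal sketch (the printed paths vary by platform, and the constants can be unset when the user-site feature is disabled):

```python
import site

# Both constants were added in Python 2.6. Exact values are
# platform-dependent; the comments below show typical Unix locations.
print(site.USER_BASE)  # e.g. ~/.local (or %APPDATA%/Python on Windows)
print(site.USER_SITE)  # version-specific site-packages under USER_BASE
```

Setting PYTHONUSERBASE before starting the interpreter changes both values, which is how the override described above takes effect.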
You can also\nmodify the site.py\nfile for your Python installation.\nThe feature can be disabled entirely by running Python with the\n-s\noption or setting the PYTHONNOUSERSITE\nenvironment variable.\nSee also\n- PEP 370 - Per-user\nsite-packages\nDirectory PEP written and implemented by Christian Heimes.\nPEP 371: The multiprocessing\nPackage\u00b6\nThe new multiprocessing\npackage lets Python programs create new\nprocesses that will perform a computation and return a result to the\nparent. The parent and child processes can communicate using queues\nand pipes, synchronize their operations using locks and semaphores,\nand can share simple arrays of data.\nThe multiprocessing\nmodule started out as an exact emulation of\nthe threading\nmodule using processes instead of threads. That\ngoal was discarded along the path to Python 2.6, but the general\napproach of the module is still similar. The fundamental class\nis the Process\n, which is passed a callable object and\na collection of arguments. The start()\nmethod\nsets the callable running in a subprocess, after which you can call\nthe is_alive()\nmethod to check whether the\nsubprocess is still running and the join()\nmethod to wait for the process to exit.\nHere\u2019s a simple example where the subprocess will calculate a factorial. 
The function doing the calculation is written strangely so that it takes significantly longer when the input argument is a multiple of 4.

    import time
    from multiprocessing import Process, Queue

    def factorial(queue, N):
        "Compute a factorial."
        # If N is a multiple of 4, this function will take much longer.
        if (N % 4) == 0:
            time.sleep(.05 * N/4)
        # Calculate the result
        fact = 1L
        for i in range(1, N+1):
            fact = fact * i
        # Put the result on the queue
        queue.put(fact)

    if __name__ == '__main__':
        queue = Queue()
        N = 5
        p = Process(target=factorial, args=(queue, N))
        p.start()
        p.join()
        result = queue.get()
        print 'Factorial', N, '=', result

A Queue is used to communicate the result of the factorial. The Queue object is stored in a global variable. The child process will use the value of the variable when the child was created; because it's a Queue, parent and child can use the object to communicate. (If the parent were to change the value of the global variable, the child's value would be unaffected, and vice versa.)

Two other classes, Pool and Manager, provide higher-level interfaces. Pool will create a fixed number of worker processes, and requests can then be distributed to the workers by calling apply() or apply_async() to add a single request, and map() or map_async() to add a number of requests. The following code uses a Pool to spread requests across 5 worker processes and retrieve a list of results:

    from multiprocessing import Pool

    def factorial(N):
        "Compute a factorial."
        ...

    p = Pool(5)
    result = p.map(factorial, range(1, 1000, 10))
    for v in result:
        print v

This produces the following output:

    1
    39916800
    51090942171709440000
    8222838654177922817725562880000000
    33452526613163807108170062053440751665152000000000
    ...

The other high-level interface, the Manager class, creates a separate server process that can hold master copies of Python data structures.
Other processes can then access and modify these data structures using proxy objects. The following example creates a shared dictionary by calling the dict() method; the worker processes then insert values into the dictionary. (Locking is not done for you automatically, which doesn't matter in this example. Manager's methods also include Lock(), RLock(), and Semaphore() to create shared locks.)

    import time
    from multiprocessing import Pool, Manager

    def factorial(N, dictionary):
        "Compute a factorial."
        # Calculate the result
        fact = 1L
        for i in range(1, N+1):
            fact = fact * i
        # Store result in dictionary
        dictionary[N] = fact

    if __name__ == '__main__':
        p = Pool(5)
        mgr = Manager()
        d = mgr.dict()     # Create shared dictionary

        # Run tasks using the pool
        for N in range(1, 1000, 10):
            p.apply_async(factorial, (N, d))

        # Mark pool as closed -- no more tasks can be added.
        p.close()
        # Wait for tasks to exit
        p.join()

        # Output results
        for k, v in sorted(d.items()):
            print k, v

This will produce the output:

    1 1
    11 39916800
    21 51090942171709440000
    31 8222838654177922817725562880000000
    41 33452526613163807108170062053440751665152000000000
    51 15511187532873822802242430164693032110632597200169861120000...

See also

The documentation for the multiprocessing module.

- PEP 371 - Addition of the multiprocessing package: PEP written by Jesse Noller and Richard Oudkerk; implemented by Richard Oudkerk and Jesse Noller.

PEP 3101: Advanced String Formatting

In Python 3.0, the % operator is supplemented by a more powerful string formatting method, format().
Support for the str.format()\nmethod\nhas been backported to Python 2.6.\nIn 2.6, both 8-bit and Unicode strings have a .format()\nmethod that\ntreats the string as a template and takes the arguments to be formatted.\nThe formatting template uses curly brackets ({\n, }\n) as special characters:\n>>> # Substitute positional argument 0 into the string.\n>>> \"User ID: {0}\".format(\"root\")\n'User ID: root'\n>>> # Use the named keyword arguments\n>>> \"User ID: {uid} Last seen: {last_login}\".format(\n... uid=\"root\",\n... last_login = \"5 Mar 2008 07:20\")\n'User ID: root Last seen: 5 Mar 2008 07:20'\nCurly brackets can be escaped by doubling them:\n>>> \"Empty dict: {{}}\".format()\n\"Empty dict: {}\"\nField names can be integers indicating positional arguments, such as\n{0}\n, {1}\n, etc. or names of keyword arguments. You can also\nsupply compound field names that read attributes or access dictionary keys:\n>>> import sys\n>>> print 'Platform: {0.platform}\\nPython version: {0.version}'.format(sys)\nPlatform: darwin\nPython version: 2.6a1+ (trunk:61261M, Mar 5 2008, 20:29:41)\n[GCC 4.0.1 (Apple Computer, Inc. build 5367)]'\n>>> import mimetypes\n>>> 'Content-type: {0[.mp4]}'.format(mimetypes.types_map)\n'Content-type: video/mp4'\nNote that when using dictionary-style notation such as [.mp4]\n, you\ndon\u2019t need to put any quotation marks around the string; it will look\nup the value using .mp4\nas the key. Strings beginning with a\nnumber will be converted to an integer. You can\u2019t write more\ncomplicated expressions inside a format string.\nSo far we\u2019ve shown how to specify which field to substitute into the resulting string. The precise formatting used is also controllable by adding a colon followed by a format specifier. 
For example:

    >>> # Field 0: left justify, pad to 15 characters
    >>> # Field 1: right justify, pad to 6 characters
    >>> fmt = '{0:15} ${1:>6}'
    >>> fmt.format('Registration', 35)
    'Registration    $    35'
    >>> fmt.format('Tutorial', 50)
    'Tutorial        $    50'
    >>> fmt.format('Banquet', 125)
    'Banquet         $   125'

Format specifiers can reference other fields through nesting:

    >>> fmt = '{0:{1}}'
    >>> width = 15
    >>> fmt.format('Invoice #1234', width)
    'Invoice #1234  '
    >>> width = 35
    >>> fmt.format('Invoice #1234', width)
    'Invoice #1234                      '

The alignment of a field within the desired width can be specified:

    Character     Effect
    -----------   ------
    < (default)   Left-align
    >             Right-align
    ^             Center
    =             (For numeric types only) Pad after the sign.

Format specifiers can also include a presentation type, which controls how the value is formatted. For example, floating-point numbers can be formatted as a general number or in exponential notation:

    >>> '{0:g}'.format(3.75)
    '3.75'
    >>> '{0:e}'.format(3.75)
    '3.750000e+00'

A variety of presentation types are available. Consult the 2.6 documentation for a complete list; here's a sample:

- 'b': Binary. Outputs the number in base 2.
- 'c': Character. Converts the integer to the corresponding Unicode character before printing.
- 'd': Decimal Integer. Outputs the number in base 10.
- 'o': Octal format. Outputs the number in base 8.
- 'x': Hex format. Outputs the number in base 16, using lower-case letters for the digits above 9.
- 'e': Exponent notation. Prints the number in scientific notation using the letter 'e' to indicate the exponent.
- 'g': General format. This prints the number as a fixed-point number, unless the number is too large, in which case it switches to 'e' exponent notation.
- 'n': Number. This is the same as 'g' (for floats) or 'd' (for integers), except that it uses the current locale setting to insert the appropriate number separator characters.
- '%': Percentage. Multiplies the number by 100 and displays in fixed ('f') format, followed by a percent sign.

Classes and types can define a __format__() method to control how they're formatted. It receives a single argument, the format specifier:

    def __format__(self, format_spec):
        if isinstance(format_spec, unicode):
            return unicode(str(self))
        else:
            return str(self)

There's also a format() builtin that will format a single value. It calls the type's __format__() method with the provided specifier:

    >>> format(75.6564, '.2f')
    '75.66'

See also

- Format String Syntax: The reference documentation for format fields.
- PEP 3101 - Advanced String Formatting: PEP written by Talin. Implemented by Eric Smith.

PEP 3105: print As a Function

The print statement becomes the print() function in Python 3.0. Making print() a function makes it possible to replace the function by doing def print(...) or importing a new function from somewhere else. Python 2.6 has a __future__ import that removes print as language syntax, letting you use the functional form instead.
For example:

    >>> from __future__ import print_function
    >>> print('# of entries', len(dictionary), file=sys.stderr)

The signature of the new function is:

    def print(*args, sep=' ', end='\n', file=None)

The parameters are:

- args: positional arguments whose values will be printed out.
- sep: the separator, which will be printed between arguments.
- end: the ending text, which will be printed after all of the arguments have been output.
- file: the file object to which the output will be sent.

See also

- PEP 3105 - Make print a function: PEP written by Georg Brandl.

PEP 3110: Exception-Handling Changes

One error that Python programmers occasionally make is writing the following code:

    try:
        ...
    except TypeError, ValueError:  # Wrong!
        ...

The author is probably trying to catch both TypeError and ValueError exceptions, but this code actually does something different: it will catch TypeError and bind the resulting exception object to the local name "ValueError". The ValueError exception will not be caught at all. The correct code specifies a tuple of exceptions:

    try:
        ...
    except (TypeError, ValueError):
        ...

This error happens because the use of the comma here is ambiguous: does it indicate two different nodes in the parse tree, or a single node that's a tuple?

Python 3.0 makes this unambiguous by replacing the comma with the word "as". To catch an exception and store the exception object in the variable exc, you must write:

    try:
        ...
    except TypeError as exc:
        ...

Python 3.0 will only support the use of "as", and therefore interprets the first example as catching two different exceptions. Python 2.6 supports both the comma and "as", so existing code will continue to work.
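Putting the two fixes together, a tuple of exception types can be combined with the 'as' binding; this spelling is valid in 2.6 and in 3.0. A small runnable sketch (the to_int helper is hypothetical, used only for illustration):

```python
def to_int(value):
    try:
        return int(value)
    except (TypeError, ValueError) as exc:
        # Both exception types are caught, and the instance is bound to exc.
        return 'failed: %s' % type(exc).__name__

print(to_int('42'))      # 42
print(to_int('forty'))   # failed: ValueError
print(to_int(None))      # failed: TypeError
```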
We therefore suggest using \u201cas\u201d when writing new Python code that will only be executed with 2.6.\nSee also\n- PEP 3110 - Catching Exceptions in Python 3000\nPEP written and implemented by Collin Winter.\nPEP 3112: Byte Literals\u00b6\nPython 3.0 adopts Unicode as the language\u2019s fundamental string type and\ndenotes 8-bit literals differently, either as b'string'\nor using a bytes\nconstructor. For future compatibility,\nPython 2.6 adds bytes\nas a synonym for the str\ntype,\nand it also supports the b''\nnotation.\nThe 2.6 str\ndiffers from 3.0\u2019s bytes\ntype in various\nways; most notably, the constructor is completely different. In 3.0,\nbytes([65, 66, 67])\nis 3 elements long, containing the bytes\nrepresenting ABC\n; in 2.6, bytes([65, 66, 67])\nreturns the\n12-byte string representing the str()\nof the list.\nThe primary use of bytes\nin 2.6 will be to write tests of\nobject type such as isinstance(x, bytes)\n. This will help the 2to3\nconverter, which can\u2019t tell whether 2.x code intends strings to\ncontain either characters or 8-bit bytes; you can now\nuse either bytes\nor str\nto represent your intention\nexactly, and the resulting code will also be correct in Python 3.0.\nThere\u2019s also a __future__\nimport that causes all string literals\nto become Unicode strings. This means that \\u\nescape sequences\ncan be used to include Unicode characters:\nfrom __future__ import unicode_literals\ns = ('\\u751f\\u3080\\u304e\\u3000\\u751f\\u3054'\n'\\u3081\\u3000\\u751f\\u305f\\u307e\\u3054')\nprint len(s) # 12 Unicode characters\nAt the C level, Python 3.0 will rename the existing 8-bit\nstring type, called PyStringObject\nin Python 2.x,\nto PyBytesObject\n. Python 2.6 uses #define\nto support using the names PyBytesObject()\n,\nPyBytes_Check()\n, PyBytes_FromStringAndSize()\n,\nand all the other functions and macros used with strings.\nInstances of the bytes\ntype are immutable just\nas strings are. 
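A tiny sketch of the forward-compatible spelling (the data variable is illustrative); because 2.6's bytes is just str, the same check passes unchanged on both 2.6 and 3.x:

```python
# The b'' literal and the bytes name exist in both Python 2.6 and 3.x,
# so a type test written this way carries over without modification.
data = b'ABC'
print(isinstance(data, bytes))  # True on 2.6 (where bytes is str) and on 3.x
print(len(data))                # 3
```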
A new bytearray\ntype stores a mutable\nsequence of bytes:\n>>> bytearray([65, 66, 67])\nbytearray(b'ABC')\n>>> b = bytearray(u'\\u21ef\\u3244', 'utf-8')\n>>> b\nbytearray(b'\\xe2\\x87\\xaf\\xe3\\x89\\x84')\n>>> b[0] = '\\xe3'\n>>> b\nbytearray(b'\\xe3\\x87\\xaf\\xe3\\x89\\x84')\n>>> unicode(str(b), 'utf-8')\nu'\\u31ef \\u3244'\nByte arrays support most of the methods of string types, such as\nstartswith()\n/endswith()\n,\nfind()\n/rfind()\n,\nand some of the methods of lists, such as append()\n,\npop()\n, and reverse()\n.\n>>> b = bytearray('ABC')\n>>> b.append('d')\n>>> b.append(ord('e'))\n>>> b\nbytearray(b'ABCde')\nThere\u2019s also a corresponding C API, with\nPyByteArray_FromObject()\n,\nPyByteArray_FromStringAndSize()\n,\nand various other functions.\nSee also\n- PEP 3112 - Bytes literals in Python 3000\nPEP written by Jason Orendorff; backported to 2.6 by Christian Heimes.\nPEP 3116: New I/O Library\u00b6\nPython\u2019s built-in file objects support a number of methods, but\nfile-like objects don\u2019t necessarily support all of them. Objects that\nimitate files usually support read()\nand\nwrite()\n, but they may not support readline()\n,\nfor example. Python 3.0 introduces a layered I/O library in the io\nmodule that separates buffering and text-handling features from the\nfundamental read and write operations.\nThere are three levels of abstract base classes provided by\nthe io\nmodule:\nRawIOBase\ndefines raw I/O operations:read()\n,readinto()\n,write()\n,seek()\n,tell()\n,truncate()\n, andclose()\n. Most of the methods of this class will often map to a single system call. 
There are also readable(), writable(), and seekable() methods for determining what operations a given object will allow. Python 3.0 has concrete implementations of this class for files and sockets, but Python 2.6 hasn't restructured its file and socket objects in this way.

- BufferedIOBase is an abstract base class that buffers data in memory to reduce the number of system calls used, making I/O processing more efficient. It supports all of the methods of RawIOBase, and adds a raw attribute holding the underlying raw object. There are five concrete classes implementing this ABC. BufferedWriter and BufferedReader are for objects that support write-only or read-only usage and that have a seek() method for random access. BufferedRandom objects support read and write access upon the same underlying stream, and BufferedRWPair is for objects such as TTYs that have both read and write operations acting upon unconnected streams of data. The BytesIO class supports reading, writing, and seeking over an in-memory buffer.
- TextIOBase provides functions for reading and writing strings (remember, strings will be Unicode in Python 3.0), and supports universal newlines. TextIOBase defines the readline() method and supports iteration upon objects. There are two concrete implementations. TextIOWrapper wraps a buffered I/O object, supporting all of the methods for text I/O and adding a buffer attribute for access to the underlying object. StringIO simply buffers everything in memory without ever writing anything to disk. (In Python 2.6, io.StringIO is implemented in pure Python, so it's pretty slow. You should therefore stick with the existing StringIO module or cStringIO for now. At some point Python 3.0's io module will be rewritten into C for speed, and perhaps the C implementation will be backported to the 2.x releases.)

In Python 2.6, the underlying implementations haven't been restructured to build on top of the io module's classes.
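The layering can be seen by stacking the classes by hand: BytesIO plays the role of the buffered binary layer, and TextIOWrapper adds the text layer on top. A minimal sketch using the io module as shipped (the sample bytes are illustrative):

```python
import io

raw = io.BytesIO(b'hello\nworld\n')              # in-memory buffered byte stream
text = io.TextIOWrapper(raw, encoding='utf-8')   # text layer stacked on top

print(text.readline())     # decodes bytes and applies newline handling
print(text.buffer is raw)  # True: the buffer attribute exposes the lower layer
```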
The\nmodule is being provided to make it easier to write code that\u2019s\nforward-compatible with 3.0, and to save developers the effort of writing\ntheir own implementations of buffering and text I/O.\nSee also\n- PEP 3116 - New I/O\nPEP written by Daniel Stutzbach, Mike Verdone, and Guido van Rossum. Code by Guido van Rossum, Georg Brandl, Walter Doerwald, Jeremy Hylton, Martin von L\u00f6wis, Tony Lownds, and others.\nPEP 3118: Revised Buffer Protocol\u00b6\nThe buffer protocol is a C-level API that lets Python types\nexchange pointers into their internal representations. A\nmemory-mapped file can be viewed as a buffer of characters, for\nexample, and this lets another module such as re\ntreat memory-mapped files as a string of characters to be searched.\nThe primary users of the buffer protocol are numeric-processing packages such as NumPy, which expose the internal representation of arrays so that callers can write data directly into an array instead of going through a slower API. This PEP updates the buffer protocol in light of experience from NumPy development, adding a number of new features such as indicating the shape of an array or locking a memory region.\nThe most important new C API function is\nPyObject_GetBuffer(PyObject *obj, Py_buffer *view, int flags)\n, which\ntakes an object and a set of flags, and fills in the\nPy_buffer\nstructure with information\nabout the object\u2019s memory representation. Objects\ncan use this operation to lock memory in place\nwhile an external caller could be modifying the contents,\nso there\u2019s a corresponding PyBuffer_Release(Py_buffer *view)\nto\nindicate that the external caller is done.\nThe flags argument to PyObject_GetBuffer()\nspecifies\nconstraints upon the memory returned. 
Some examples are:\nPyBUF_WRITABLE\nindicates that the memory must be writable.PyBUF_LOCK\nrequests a read-only or exclusive lock on the memory.PyBUF_C_CONTIGUOUS\nandPyBUF_F_CONTIGUOUS\nrequest a C-contiguous (last dimension varies the fastest) or Fortran-contiguous (first dimension varies the fastest) array layout.\nTwo new argument codes for PyArg_ParseTuple()\n,\ns*\nand z*\n, return locked buffer objects for a parameter.\nSee also\n- PEP 3118 - Revising the buffer protocol\nPEP written by Travis Oliphant and Carl Banks; implemented by Travis Oliphant.\nPEP 3119: Abstract Base Classes\u00b6\nSome object-oriented languages such as Java support interfaces,\ndeclaring that a class has a given set of methods or supports a given\naccess protocol. Abstract Base Classes (or ABCs) are an equivalent\nfeature for Python. The ABC support consists of an abc\nmodule\ncontaining a metaclass called ABCMeta\n, special handling of\nthis metaclass by the isinstance()\nand issubclass()\nbuiltins, and a collection of basic ABCs that the Python developers\nthink will be widely useful. Future versions of Python will probably\nadd more ABCs.\nLet\u2019s say you have a particular class and wish to know whether it supports\ndictionary-style access. The phrase \u201cdictionary-style\u201d is vague, however.\nIt probably means that accessing items with obj[1]\nworks.\nDoes it imply that setting items with obj[2] = value\nworks?\nOr that the object will have keys()\n, values()\n, and items()\nmethods? What about the iterative variants such as iterkeys()\n?\ncopy()\nand update()\n? Iterating over the object with iter()\n?\nThe Python 2.6 collections\nmodule includes a number of\ndifferent ABCs that represent these distinctions. Iterable\nindicates that a class defines __iter__()\n, and\nContainer\nmeans the class defines a __contains__()\nmethod and therefore supports x in y\nexpressions. 
The basic\ndictionary interface of getting items, setting items, and\nkeys()\n, values()\n, and items()\n, is defined by the\nMutableMapping\nABC.\nYou can derive your own classes from a particular ABC to indicate they support that ABC\u2019s interface:\nimport collections\nclass Storage(collections.MutableMapping):\n...\nAlternatively, you could write the class without deriving from\nthe desired ABC and instead register the class by\ncalling the ABC\u2019s register()\nmethod:\nimport collections\nclass Storage:\n...\ncollections.MutableMapping.register(Storage)\nFor classes that you write, deriving from the ABC is probably clearer.\nThe register()\nmethod is useful when you\u2019ve written a new\nABC that can describe an existing type or class, or if you want\nto declare that some third-party class implements an ABC.\nFor example, if you defined a PrintableType\nABC,\nit\u2019s legal to do:\n# Register Python's types\nPrintableType.register(int)\nPrintableType.register(float)\nPrintableType.register(str)\nClasses should obey the semantics specified by an ABC, but Python can\u2019t check this; it\u2019s up to the class author to understand the ABC\u2019s requirements and to implement the code accordingly.\nTo check whether an object supports a particular interface, you can now write:\ndef func(d):\nif not isinstance(d, collections.MutableMapping):\nraise ValueError(\"Mapping object expected, not %r\" % d)\nDon\u2019t feel that you must now begin writing lots of checks as in the above example. Python has a strong tradition of duck-typing, where explicit type-checking is never done and code simply calls methods on an object, trusting that those methods will be there and raising an exception if they aren\u2019t. 
Be judicious in checking for ABCs and only do it where it\u2019s absolutely necessary.\nYou can write your own ABCs by using abc.ABCMeta\nas the\nmetaclass in a class definition:\nfrom abc import ABCMeta, abstractmethod\nclass Drawable():\n__metaclass__ = ABCMeta\n@abstractmethod\ndef draw(self, x, y, scale=1.0):\npass\ndef draw_doubled(self, x, y):\nself.draw(x, y, scale=2.0)\nclass Square(Drawable):\ndef draw(self, x, y, scale):\n...\nIn the Drawable\nABC above, the draw_doubled()\nmethod\nrenders the object at twice its size and can be implemented in terms\nof other methods described in Drawable\n. Classes implementing\nthis ABC therefore don\u2019t need to provide their own implementation\nof draw_doubled()\n, though they can do so. An implementation\nof draw()\nis necessary, though; the ABC can\u2019t provide\na useful generic implementation.\nYou can apply the @abc.abstractmethod\ndecorator to methods such as\ndraw()\nthat must be implemented; Python will then raise an\nexception for classes that don\u2019t define the method.\nNote that the exception is only raised when you actually\ntry to create an instance of a subclass lacking the method:\n>>> class Circle(Drawable):\n... pass\n...\n>>> c = Circle()\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nTypeError: Can't instantiate abstract class Circle with abstract methods draw\n>>>\nAbstract data attributes can be declared using the\n@abstractproperty\ndecorator:\nfrom abc import abstractproperty\n...\n@abstractproperty\ndef readonly(self):\nreturn self._x\nSubclasses must then define a readonly\nproperty.\nSee also\n- PEP 3119 - Introducing Abstract Base Classes\nPEP written by Guido van Rossum and Talin. Implemented by Guido van Rossum. 
Backported to 2.6 by Benjamin Aranguren, with Alex Martelli.\nPEP 3127: Integer Literal Support and Syntax\u00b6\nPython 3.0 changes the syntax for octal (base-8) integer literals, prefixing them with \u201c0o\u201d or \u201c0O\u201d instead of a leading zero, and adds support for binary (base-2) integer literals, signalled by a \u201c0b\u201d or \u201c0B\u201d prefix.\nPython 2.6 doesn\u2019t drop support for a leading 0 signalling an octal number, but it does add support for \u201c0o\u201d and \u201c0b\u201d:\n>>> 0o21, 2*8 + 1\n(17, 17)\n>>> 0b101111\n47\nThe oct()\nbuiltin still returns numbers\nprefixed with a leading zero, and a new bin()\nbuiltin returns the binary representation for a number:\n>>> oct(42)\n'052'\n>>> future_builtins.oct(42)\n'0o52'\n>>> bin(173)\n'0b10101101'\nThe int()\nand long()\nbuiltins will now accept the \u201c0o\u201d\nand \u201c0b\u201d prefixes when base-8 or base-2 are requested, or when the\nbase argument is zero (signalling that the base used should be\ndetermined from the string):\n>>> int ('0o52', 0)\n42\n>>> int('1101', 2)\n13\n>>> int('0b1101', 2)\n13\n>>> int('0b1101', 0)\n13\nSee also\n- PEP 3127 - Integer Literal Support and Syntax\nPEP written by Patrick Maupin; backported to 2.6 by Eric Smith.\nPEP 3129: Class Decorators\u00b6\nDecorators have been extended from functions to classes. It\u2019s now legal to write:\n@foo\n@bar\nclass A:\npass\nThis is equivalent to:\nclass A:\npass\nA = foo(bar(A))\nSee also\n- PEP 3129 - Class Decorators\nPEP written by Collin Winter.\nPEP 3141: A Type Hierarchy for Numbers\u00b6\nPython 3.0 adds several abstract base classes for numeric types\ninspired by Scheme\u2019s numeric tower. These classes were backported to\n2.6 as the numbers\nmodule.\nThe most general ABC is Number\n. It defines no operations at\nall, and only exists to allow checking if an object is a number by\ndoing isinstance(obj, Number)\n.\nComplex\nis a subclass of Number\n. 
Complex numbers\ncan undergo the basic operations of addition, subtraction,\nmultiplication, division, and exponentiation, and you can retrieve the\nreal and imaginary parts and obtain a number\u2019s conjugate. Python\u2019s built-in\ncomplex type is an implementation of Complex\n.\nReal\nfurther derives from Complex\n, and adds\noperations that only work on real numbers: floor()\n, trunc()\n,\nrounding, taking the remainder mod N, floor division,\nand comparisons.\nRational\nnumbers derive from Real\n, have\nnumerator\nand denominator\nproperties, and can be\nconverted to floats. Python 2.6 adds a simple rational-number class,\nFraction\n, in the fractions\nmodule. (It\u2019s called\nFraction\ninstead of Rational\nto avoid\na name clash with numbers.Rational\n.)\nIntegral\nnumbers derive from Rational\n, and\ncan be shifted left and right with <<\nand >>\n,\ncombined using bitwise operations such as &\nand |\n,\nand can be used as array indexes and slice boundaries.\nIn Python 3.0, the PEP slightly redefines the existing builtins\nround()\n, math.floor()\n, math.ceil()\n, and adds a new\none, math.trunc()\n, that\u2019s been backported to Python 2.6.\nmath.trunc()\nrounds toward zero, returning the closest\nIntegral\nthat\u2019s between the function\u2019s argument and zero.\nSee also\n- PEP 3141 - A Type Hierarchy for Numbers\nPEP written by Jeffrey Yasskin.\nScheme\u2019s numerical tower, from the Guile manual.\nScheme\u2019s number datatypes from the R5RS Scheme specification.\nThe fractions\nModule\u00b6\nTo fill out the hierarchy of numeric types, the fractions\nmodule provides a rational-number class. 
Rational numbers store their\nvalues as a numerator and denominator forming a fraction, and can\nexactly represent numbers such as 2/3\nthat floating-point numbers\ncan only approximate.\nThe Fraction\nconstructor takes two Integral\nvalues\nthat will be the numerator and denominator of the resulting fraction.\n>>> from fractions import Fraction\n>>> a = Fraction(2, 3)\n>>> b = Fraction(2, 5)\n>>> float(a), float(b)\n(0.66666666666666663, 0.40000000000000002)\n>>> a+b\nFraction(16, 15)\n>>> a/b\nFraction(5, 3)\nFor converting floating-point numbers to rationals,\nthe float type now has an as_integer_ratio()\nmethod that returns\nthe numerator and denominator for a fraction that evaluates to the same\nfloating-point value:\n>>> (2.5) .as_integer_ratio()\n(5, 2)\n>>> (3.1415) .as_integer_ratio()\n(7074029114692207L, 2251799813685248L)\n>>> (1./3) .as_integer_ratio()\n(6004799503160661L, 18014398509481984L)\nNote that values that can only be approximated by floating-point numbers, such as 1./3, are not simplified to the number being approximated; the fraction attempts to match the floating-point value exactly.\nThe fractions\nmodule is based upon an implementation by Sjoerd\nMullender that was in Python\u2019s Demo/classes/\ndirectory for a\nlong time. This implementation was significantly updated by Jeffrey\nYasskin.\nOther Language Changes\u00b6\nSome smaller changes made to the core Python language are:\nDirectories and zip archives containing a\n__main__.py\nfile can now be executed directly by passing their name to the interpreter. The directory or zip archive is automatically inserted as the first entry in sys.path. (Suggestion and initial patch by Andy Chu, subsequently revised by Phillip J. Eby and Nick Coghlan; bpo-1739468.)The\nhasattr()\nfunction was catching and ignoring all errors, under the assumption that they meant a__getattr__()\nmethod was failing somehow and the return value ofhasattr()\nwould therefore beFalse\n. 
This logic shouldn\u2019t be applied toKeyboardInterrupt\nandSystemExit\n, however; Python 2.6 will no longer discard such exceptions whenhasattr()\nencounters them. (Fixed by Benjamin Peterson; bpo-2196.)When calling a function using the\n**\nsyntax to provide keyword arguments, you are no longer required to use a Python dictionary; any mapping will now work:>>> def f(**kw): ... print sorted(kw) ... >>> ud=UserDict.UserDict() >>> ud['a'] = 1 >>> ud['b'] = 'string' >>> f(**ud) ['a', 'b']\n(Contributed by Alexander Belopolsky; bpo-1686487.)\nIt\u2019s also become legal to provide keyword arguments after a\n*args\nargument to a function call.>>> def f(*args, **kw): ... print args, kw ... >>> f(1,2,3, *(4,5,6), keyword=13) (1, 2, 3, 4, 5, 6) {'keyword': 13}\nPreviously this would have been a syntax error. (Contributed by Amaury Forgeot d\u2019Arc; bpo-3473.)\nA new builtin,\nnext(iterator, [default])\nreturns the next item from the specified iterator. If the default argument is supplied, it will be returned if iterator has been exhausted; otherwise, theStopIteration\nexception will be raised. (Backported in bpo-2719.)Tuples now have\nindex()\nandcount()\nmethods matching the list type\u2019sindex()\nandcount()\nmethods:>>> t = (0,1,2,3,4,0,1,2) >>> t.index(3) 3 >>> t.count(0) 2\n(Contributed by Raymond Hettinger)\nThe built-in types now have improved support for extended slicing syntax, accepting various combinations of\n(start, stop, step)\n. Previously, the support was partial and certain corner cases wouldn\u2019t work. (Implemented by Thomas Wouters.)Properties now have three attributes,\ngetter\n,setter\nanddeleter\n, that are decorators providing useful shortcuts for adding a getter, setter or deleter function to an existing property. 
You would use them like this:class C(object): @property def x(self): return self._x @x.setter def x(self, value): self._x = value @x.deleter def x(self): del self._x class D(C): @C.x.getter def x(self): return self._x * 2 @x.setter def x(self, value): self._x = value / 2\nSeveral methods of the built-in set types now accept multiple iterables:\nintersection()\n,intersection_update()\n,union()\n,update()\n,difference()\nanddifference_update()\n.>>> s=set('1234567890') >>> s.intersection('abc123', 'cdf246') # Intersection between all inputs set(['2']) >>> s.difference('246', '789') set(['1', '0', '3', '5'])\n(Contributed by Raymond Hettinger.)\nMany floating-point features were added. The\nfloat()\nfunction will now turn the stringnan\ninto an IEEE 754 Not A Number value, and+inf\nand-inf\ninto positive or negative infinity. This works on any platform with IEEE 754 semantics. (Contributed by Christian Heimes; bpo-1635.)Other functions in the\nmath\nmodule,isinf()\nandisnan()\n, return true if their floating-point argument is infinite or Not A Number. (bpo-1640)Conversion functions were added to convert floating-point numbers into hexadecimal strings (bpo-3008). These functions convert floats to and from a string representation without introducing rounding errors from the conversion between decimal and binary. Floats have a\nhex()\nmethod that returns a string representation, and thefloat.fromhex()\nmethod converts a string back into a number:>>> a = 3.75 >>> a.hex() '0x1.e000000000000p+1' >>> float.fromhex('0x1.e000000000000p+1') 3.75 >>> b=1./3 >>> b.hex() '0x1.5555555555555p-2'\nA numerical nicety: when creating a complex number from two floats on systems that support signed zeros (-0 and +0), the\ncomplex()\nconstructor will now preserve the sign of the zero. (Fixed by Mark T. Dickinson; bpo-1507.)Classes that inherit a\n__hash__()\nmethod from a parent class can set__hash__ = None\nto indicate that the class isn\u2019t hashable. 
This will makehash(obj)\nraise aTypeError\nand the class will not be indicated as implementing theHashable\nABC.You should do this when you\u2019ve defined a\n__cmp__()\nor__eq__()\nmethod that compares objects by their value rather than by identity. All objects have a default hash method that usesid(obj)\nas the hash value. There\u2019s no tidy way to remove the__hash__()\nmethod inherited from a parent class, so assigningNone\nwas implemented as an override. At the C level, extensions can settp_hash\ntoPyObject_HashNotImplemented()\n. (Fixed by Nick Coghlan and Amaury Forgeot d\u2019Arc; bpo-2235.)The\nGeneratorExit\nexception now subclassesBaseException\ninstead ofException\n. This means that an exception handler that doesexcept Exception:\nwill not inadvertently catchGeneratorExit\n. (Contributed by Chad Austin; bpo-1537.)Generator objects now have a\ngi_code\nattribute that refers to the original code object backing the generator. (Contributed by Collin Winter; bpo-1473257.)The\ncompile()\nbuilt-in function now accepts keyword arguments as well as positional parameters. (Contributed by Thomas Wouters; bpo-1444529.)The\ncomplex()\nconstructor now accepts strings containing parenthesized complex numbers, meaning thatcomplex(repr(cplx))\nwill now round-trip values. For example,complex('(3+4j)')\nnow returns the value (3+4j). (bpo-1491866)The string\ntranslate()\nmethod now acceptsNone\nas the translation table parameter, which is treated as the identity transformation. This makes it easier to carry out operations that only delete characters. (Contributed by Bengt Richter and implemented by Raymond Hettinger; bpo-1193128.)The built-in\ndir()\nfunction now checks for a__dir__()\nmethod on the objects it receives. This method must return a list of strings containing the names of valid attributes for the object, and lets the object control the value thatdir()\nproduces. 
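A minimal sketch of the hook (the class and its "fields" pseudo-attribute are invented here purely for illustration):

```python
class Recorder(object):
    """Exposes a 'fields' pseudo-attribute via __getattr__."""

    def __getattr__(self, name):
        if name == "fields":
            return ["a", "b"]
        raise AttributeError(name)

    def __dir__(self):
        # Advertise the pseudo-attribute alongside the normal names
        return sorted(set(dir(type(self)) + ["fields"]))

r = Recorder()
assert "fields" in dir(r)      # dir() consults __dir__()
assert r.fields == ["a", "b"]  # and __getattr__ actually honors it
```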
Objects that have__getattr__()\nor__getattribute__()\nmethods can use this to advertise pseudo-attributes they will honor. (bpo-1591665)Instance method objects have new attributes for the object and function comprising the method; the new synonym for\nim_self\nis__self__\n, andim_func\nis also available as__func__\n. The old names are still supported in Python 2.6, but are gone in 3.0.An obscure change: when you use the\nlocals()\nfunction inside aclass\nstatement, the resulting dictionary no longer returns free variables. (Free variables, in this case, are variables referenced in theclass\nstatement that aren\u2019t attributes of the class.)\nOptimizations\u00b6\nThe\nwarnings\nmodule has been rewritten in C. This makes it possible to invoke warnings from the parser, and may also make the interpreter\u2019s startup faster. (Contributed by Neal Norwitz and Brett Cannon; bpo-1631171.)Type objects now have a cache of methods that can reduce the work required to find the correct method implementation for a particular class; once cached, the interpreter doesn\u2019t need to traverse base classes to figure out the right method to call. The cache is cleared if a base class or the class itself is modified, so the cache should remain correct even in the face of Python\u2019s dynamic nature. (Original optimization implemented by Armin Rigo, updated for Python 2.6 by Kevin Jacobs; bpo-1700288.)\nBy default, this change is only applied to types that are included with the Python core. Extension modules may not necessarily be compatible with this cache, so they must explicitly add\nPy_TPFLAGS_HAVE_VERSION_TAG\nto the module\u2019stp_flags\nfield to enable the method cache. (To be compatible with the method cache, the extension module\u2019s code must not directly access and modify thetp_dict\nmember of any of the types it implements. Most modules don\u2019t do this, but it\u2019s impossible for the Python interpreter to determine that. 
See bpo-1878 for some discussion.)Function calls that use keyword arguments are significantly faster by doing a quick pointer comparison, usually saving the time of a full string comparison. (Contributed by Raymond Hettinger, after an initial implementation by Antoine Pitrou; bpo-1819.)\nAll of the functions in the\nstruct\nmodule have been rewritten in C, thanks to work at the Need For Speed sprint. (Contributed by Raymond Hettinger.)Some of the standard built-in types now set a bit in their type objects. This speeds up checking whether an object is a subclass of one of these types. (Contributed by Neal Norwitz.)\nUnicode strings now use faster code for detecting whitespace and line breaks; this speeds up the\nsplit()\nmethod by about 25% andsplitlines()\nby 35%. (Contributed by Antoine Pitrou.) Memory usage is reduced by using pymalloc for the Unicode string\u2019s data.The\nwith\nstatement now stores the__exit__()\nmethod on the stack, producing a small speedup. (Implemented by Jeffrey Yasskin.)To reduce memory usage, the garbage collector will now clear internal free lists when garbage-collecting the highest generation of objects. This may return memory to the operating system sooner.\nInterpreter Changes\u00b6\nTwo command-line options have been reserved for use by other Python\nimplementations. The -J\nswitch has been reserved for use by\nJython for Jython-specific options, such as switches that are passed to\nthe underlying JVM. -X\nhas been reserved for options\nspecific to a particular implementation of Python such as CPython,\nJython, or IronPython. If either option is used with Python 2.6, the\ninterpreter will report that the option isn\u2019t currently used.\nPython can now be prevented from writing .pyc\nor .pyo\nfiles by supplying the -B\nswitch to the Python interpreter,\nor by setting the PYTHONDONTWRITEBYTECODE\nenvironment\nvariable before running the interpreter. 
This setting is available to\nPython programs as the sys.dont_write_bytecode\nvariable, and\nPython code can change the value to modify the interpreter\u2019s\nbehaviour. (Contributed by Neal Norwitz and Georg Brandl.)\nThe encoding used for standard input, output, and standard error can\nbe specified by setting the PYTHONIOENCODING\nenvironment\nvariable before running the interpreter. The value should be a string\nin the form <encoding>\nor <encoding>:<errorhandler>\n.\nThe encoding part specifies the encoding\u2019s name, e.g. utf-8\nor\nlatin-1\n; the optional errorhandler part specifies\nwhat to do with characters that can\u2019t be handled by the encoding,\nand should be one of \u201cerror\u201d, \u201cignore\u201d, or \u201creplace\u201d. (Contributed\nby Martin von L\u00f6wis.)\nNew and Improved Modules\u00b6\nAs in every release, Python\u2019s standard library received a number of\nenhancements and bug fixes. Here\u2019s a partial list of the most notable\nchanges, sorted alphabetically by module name. Consult the\nMisc/NEWS\nfile in the source tree for a more complete list of\nchanges, or look through the Subversion logs for all the details.\nThe\nasyncore\nandasynchat\nmodules are being actively maintained again, and a number of patches and bugfixes were applied. (Maintained by Josiah Carlson; see bpo-1736190 for one patch.)The\nbsddb\nmodule also has a new maintainer, Jes\u00fas Cea Avi\u00f3n, and the package is now available as a standalone package. The web page for the package is www.jcea.es/programacion/pybsddb.htm. The plan is to remove the package from the standard library in Python 3.0, because its pace of releases is much more frequent than Python\u2019s.The\nbsddb.dbshelve\nmodule now uses the highest pickling protocol available, instead of restricting itself to protocol 1. (Contributed by W. Barnes.)The\ncgi\nmodule will now read variables from the query string of an HTTP POST request. 
This makes it possible to use form actions with URLs that include query strings such as \u201c/cgi-bin/add.py?category=1\u201d. (Contributed by Alexandre Fiori and Nubis; bpo-1817.)The\nparse_qs()\nandparse_qsl()\nfunctions have been relocated from thecgi\nmodule to theurlparse\nmodule. The versions still available in thecgi\nmodule will triggerPendingDeprecationWarning\nmessages in 2.6 (bpo-600362).The\ncmath\nmodule underwent extensive revision, contributed by Mark Dickinson and Christian Heimes. Five new functions were added:polar()\nconverts a complex number to polar form, returning the modulus and argument of the complex number.rect()\ndoes the opposite, turning a modulus, argument pair back into the corresponding complex number.phase()\nreturns the argument (also called the angle) of a complex number.isnan()\nreturns True if either the real or imaginary part of its argument is a NaN.isinf()\nreturns True if either the real or imaginary part of its argument is infinite.\nThe revisions also improved the numerical soundness of the\ncmath\nmodule. For all functions, the real and imaginary parts of the results are accurate to within a few units of least precision (ulps) whenever possible. See bpo-1381 for the details. The branch cuts forasinh()\n,atanh()\n, andatan()\nhave also been corrected.The tests for the module have been greatly expanded; nearly 2000 new test cases exercise the algebraic functions.\nOn IEEE 754 platforms, the\ncmath\nmodule now handles IEEE 754 special values and floating-point exceptions in a manner consistent with Annex \u2018G\u2019 of the C99 standard.A new data type in the\ncollections\nmodule:namedtuple(typename, fieldnames)\nis a factory function that creates subclasses of the standard tuple whose fields are accessible by name as well as index. For example:>>> var_type = collections.namedtuple('variable', ... 'id name type size') >>> # Names are separated by spaces or commas. >>> # 'id, name, type, size' would also work. 
>>> var_type._fields ('id', 'name', 'type', 'size') >>> var = var_type(1, 'frequency', 'int', 4) >>> print var[0], var.id # Equivalent 1 1 >>> print var[2], var.type # Equivalent int int >>> var._asdict() {'size': 4, 'type': 'int', 'id': 1, 'name': 'frequency'} >>> v2 = var._replace(name='amplitude') >>> v2 variable(id=1, name='amplitude', type='int', size=4)\nSeveral places in the standard library that returned tuples have been modified to return\nnamedtuple()\ninstances. For example, theDecimal.as_tuple()\nmethod now returns a named tuple withsign\n,digits\n, andexponent\nfields.(Contributed by Raymond Hettinger.)\nAnother change to the\ncollections\nmodule is that thedeque\ntype now supports an optional maxlen parameter; if supplied, the deque\u2019s size will be restricted to no more than maxlen items. Adding more items to a full deque causes old items to be discarded.>>> from collections import deque >>> dq=deque(maxlen=3) >>> dq deque([], maxlen=3) >>> dq.append(1); dq.append(2); dq.append(3) >>> dq deque([1, 2, 3], maxlen=3) >>> dq.append(4) >>> dq deque([2, 3, 4], maxlen=3)\n(Contributed by Raymond Hettinger.)\nThe\nCookie\nmodule\u2019sMorsel\nobjects now support anhttponly\nattribute. In some browsers, cookies with this attribute set cannot be accessed or manipulated by JavaScript code. (Contributed by Arvin Schnell; bpo-1638033.)A new window method in the\ncurses\nmodule,chgat()\n, changes the display attributes for a certain number of characters on a single line. (Contributed by Fabian Kreutz.)# Boldface text starting at y=0,x=21 # and affecting the rest of the line. stdscr.chgat(0, 21, curses.A_BOLD)\nThe\nTextbox\nclass in thecurses.textpad\nmodule now supports editing in insert mode as well as overwrite mode. 
Insert mode is enabled by supplying a true value for the insert_mode parameter when creating theTextbox\ninstance.The\ndatetime\nmodule\u2019sstrftime()\nmethods now support a%f\nformat code that expands to the number of microseconds in the object, zero-padded on the left to six places. (Contributed by Skip Montanaro; bpo-1158.)The\ndecimal\nmodule was updated to version 1.66 of the General Decimal Specification. New features include some methods for some basic mathematical functions such asexp()\nandlog10()\n:>>> Decimal(1).exp() Decimal(\"2.718281828459045235360287471\") >>> Decimal(\"2.7182818\").ln() Decimal(\"0.9999999895305022877376682436\") >>> Decimal(1000).log10() Decimal(\"3\")\nThe\nas_tuple()\nmethod ofDecimal\nobjects now returns a named tuple withsign\n,digits\n, andexponent\nfields.(Implemented by Facundo Batista and Mark Dickinson. Named tuple support added by Raymond Hettinger.)\nThe\ndifflib\nmodule\u2019sSequenceMatcher\nclass now returns named tuples representing matches, witha\n,b\n, andsize\nattributes. (Contributed by Raymond Hettinger.)An optional\ntimeout\nparameter, specifying a timeout measured in seconds, was added to theftplib.FTP\nclass constructor as well as theconnect()\nmethod. (Added by Facundo Batista.) Also, theFTP\nclass\u2019sstorbinary()\nandstorlines()\nnow take an optional callback parameter that will be called with each block of data after the data has been sent. (Contributed by Phil Schwartz; bpo-1221598.)The\nreduce()\nbuilt-in function is also available in thefunctools\nmodule. In Python 3.0, the builtin has been dropped andreduce()\nis only available fromfunctools\n; currently there are no plans to drop the builtin in the 2.x series. (Patched by Christian Heimes; bpo-1739906.)When possible, the\ngetpass\nmodule will now use/dev/tty\nto print a prompt message and read the password, falling back to standard error and standard input. 
If the password may be echoed to the terminal, a warning is printed before the prompt is displayed. (Contributed by Gregory P. Smith.)The\nglob.glob()\nfunction can now return Unicode filenames if a Unicode path was used and Unicode filenames are matched within the directory. (bpo-1001604)A new function in the\nheapq\nmodule,merge(iter1, iter2, ...)\n, takes any number of iterables returning data in sorted order, and returns a new generator that returns the contents of all the iterators, also in sorted order. For example:>>> list(heapq.merge([1, 3, 5, 9], [2, 8, 16])) [1, 2, 3, 5, 8, 9, 16]\nAnother new function,\nheappushpop(heap, item)\n, pushes item onto heap, then pops off and returns the smallest item. This is more efficient than making a call toheappush()\nand thenheappop()\n.heapq\nis now implemented to only use less-than comparison, instead of the less-than-or-equal comparison it previously used. This makesheapq\n\u2019s usage of a type match thelist.sort()\nmethod. (Contributed by Raymond Hettinger.)An optional\ntimeout\nparameter, specifying a timeout measured in seconds, was added to thehttplib.HTTPConnection\nandHTTPSConnection\nclass constructors. (Added by Facundo Batista.)Most of the\ninspect\nmodule\u2019s functions, such asgetmoduleinfo()\nandgetargs()\n, now return named tuples. In addition to behaving like tuples, the elements of the return value can also be accessed as attributes. (Contributed by Raymond Hettinger.)Some new functions in the module include\nisgenerator()\n,isgeneratorfunction()\n, andisabstract()\n.The\nitertools\nmodule gained several new functions.izip_longest(iter1, iter2, ...[, fillvalue])\nmakes tuples from each of the elements; if some of the iterables are shorter than others, the missing values are set to fillvalue. 
For example:>>> tuple(itertools.izip_longest([1,2,3], [1,2,3,4,5])) ((1, 1), (2, 2), (3, 3), (None, 4), (None, 5))\nproduct(iter1, iter2, ..., [repeat=N])\nreturns the Cartesian product of the supplied iterables, a set of tuples containing every possible combination of the elements returned from each iterable.>>> list(itertools.product([1,2,3], [4,5,6])) [(1, 4), (1, 5), (1, 6), (2, 4), (2, 5), (2, 6), (3, 4), (3, 5), (3, 6)]\nThe optional repeat keyword argument is used for taking the product of an iterable or a set of iterables with themselves, repeated N times. With a single iterable argument, N-tuples are returned:\n>>> list(itertools.product([1,2], repeat=3)) [(1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 2, 2), (2, 1, 1), (2, 1, 2), (2, 2, 1), (2, 2, 2)]\nWith two iterables, 2N-tuples are returned.\n>>> list(itertools.product([1,2], [3,4], repeat=2)) [(1, 3, 1, 3), (1, 3, 1, 4), (1, 3, 2, 3), (1, 3, 2, 4), (1, 4, 1, 3), (1, 4, 1, 4), (1, 4, 2, 3), (1, 4, 2, 4), (2, 3, 1, 3), (2, 3, 1, 4), (2, 3, 2, 3), (2, 3, 2, 4), (2, 4, 1, 3), (2, 4, 1, 4), (2, 4, 2, 3), (2, 4, 2, 4)]\ncombinations(iterable, r)\nreturns sub-sequences of length r from the elements of iterable.>>> list(itertools.combinations('123', 2)) [('1', '2'), ('1', '3'), ('2', '3')] >>> list(itertools.combinations('123', 3)) [('1', '2', '3')] >>> list(itertools.combinations('1234', 3)) [('1', '2', '3'), ('1', '2', '4'), ('1', '3', '4'), ('2', '3', '4')]\npermutations(iter[, r])\nreturns all the permutations of length r of the iterable\u2019s elements. 
If r is not specified, it will default to the number of elements produced by the iterable.

>>> list(itertools.permutations([1,2,3,4], 2))
[(1, 2), (1, 3), (1, 4), (2, 1), (2, 3), (2, 4),
 (3, 1), (3, 2), (3, 4), (4, 1), (4, 2), (4, 3)]

itertools.chain(*iterables) is an existing function in itertools that gained a new constructor in Python 2.6. itertools.chain.from_iterable(iterable) takes a single iterable that should return other iterables. chain() will then return all the elements of the first iterable, then all the elements of the second, and so on.

>>> list(itertools.chain.from_iterable([[1,2,3], [4,5,6]]))
[1, 2, 3, 4, 5, 6]

(All contributed by Raymond Hettinger.)

The logging module's FileHandler class and its subclasses WatchedFileHandler, RotatingFileHandler, and TimedRotatingFileHandler now have an optional delay parameter to their constructors. If delay is true, opening of the log file is deferred until the first emit() call is made. (Contributed by Vinay Sajip.)

TimedRotatingFileHandler also has a utc constructor parameter. If the argument is true, UTC time will be used in determining when midnight occurs and in generating filenames; otherwise local time will be used.

Several new functions were added to the math module:

isinf() and isnan() determine whether a given float is a (positive or negative) infinity or a NaN (Not a Number), respectively.

copysign() copies the sign bit of an IEEE 754 number, returning the absolute value of x combined with the sign bit of y. For example, math.copysign(1, -0.0) returns -1.0. (Contributed by Christian Heimes.)

factorial() computes the factorial of a number. (Contributed by Raymond Hettinger; bpo-2138.)

fsum() adds up the stream of numbers from an iterable, and is careful to avoid loss of precision through using partial sums.
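The precision advantage of fsum() over a naive running sum is easy to see directly; this short sketch uses the same function, which is unchanged in modern Python:

```python
import math

# Summing 0.1 ten times: naive left-to-right addition accumulates
# rounding error, while math.fsum() tracks exact partial sums.
naive = sum([0.1] * 10)
exact = math.fsum([0.1] * 10)

print(naive == 1.0)  # False - the naive sum drifts slightly
print(exact == 1.0)  # True  - fsum() returns exactly 1.0
```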
(Contributed by Jean Brouwers, Raymond Hettinger, and Mark Dickinson; bpo-2819.)

acosh(), asinh() and atanh() compute the inverse hyperbolic functions.

log1p() returns the natural logarithm of 1+x (base e).

trunc() rounds a number toward zero, returning the closest Integral that's between the function's argument and zero. Added as part of the backport of PEP 3141's type hierarchy for numbers.

The math module has been improved to give more consistent behaviour across platforms, especially with respect to handling of floating-point exceptions and IEEE 754 special values.

Whenever possible, the module follows the recommendations of the C99 standard about 754's special values. For example, sqrt(-1.) should now give a ValueError across almost all platforms, while sqrt(float('NaN')) should return a NaN on all IEEE 754 platforms. Where Annex 'F' of the C99 standard recommends signaling 'divide-by-zero' or 'invalid', Python will raise ValueError. Where Annex 'F' of the C99 standard recommends signaling 'overflow', Python will raise OverflowError. (See bpo-711019 and bpo-1640.)

(Contributed by Christian Heimes and Mark Dickinson.)

mmap objects now have an rfind() method that searches for a substring beginning at the end of the string and searching backwards. The find() method also gained an end parameter giving an index at which to stop searching. (Contributed by John Lenton.)

The operator module gained a methodcaller() function that takes a name and an optional set of arguments, returning a callable that will call the named function on any arguments passed to it.
For example:

>>> # Equivalent to lambda s: s.replace('old', 'new')
>>> replacer = operator.methodcaller('replace', 'old', 'new')
>>> replacer('old wine in old bottles')
'new wine in new bottles'

(Contributed by Georg Brandl, after a suggestion by Gregory Petrosyan.)

The attrgetter() function now accepts dotted names and performs the corresponding attribute lookups:

>>> inst_name = operator.attrgetter('__class__.__name__')
>>> inst_name('')
'str'
>>> inst_name(help)
'_Helper'

(Contributed by Georg Brandl, after a suggestion by Barry Warsaw.)

The os module now wraps several new system calls. fchmod(fd, mode) and fchown(fd, uid, gid) change the mode and ownership of an opened file, and lchmod(path, mode) changes the mode of a symlink. (Contributed by Georg Brandl and Christian Heimes.)

chflags() and lchflags() are wrappers for the corresponding system calls (where they're available), changing the flags set on a file. Constants for the flag values are defined in the stat module; some possible values include UF_IMMUTABLE to signal the file may not be changed and UF_APPEND to indicate that data can only be appended to the file. (Contributed by M. Levinson.)

os.closerange(low, high) efficiently closes all file descriptors from low to high, ignoring any errors and not including high itself. This function is now used by the subprocess module to make starting processes faster. (Contributed by Georg Brandl; bpo-1663329.)

The os.environ object's clear() method will now unset the environment variables using os.unsetenv() in addition to clearing the object's keys. (Contributed by Martin Horcicka; bpo-1181.)

The os.walk() function now has a followlinks parameter. If set to True, it will follow symlinks pointing to directories and visit the directory's contents. For backward compatibility, the parameter's default value is false.
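A minimal sketch of the difference the followlinks parameter makes, assuming a POSIX system (it creates a symlink); the parameter is unchanged in modern Python:

```python
import os
import shutil
import tempfile

# Build tmp/real/data.txt and a symlink tmp/link -> tmp/real,
# then walk the tree with and without followlinks.
tmp = tempfile.mkdtemp()
real = os.path.join(tmp, 'real')
os.mkdir(real)
open(os.path.join(real, 'data.txt'), 'w').close()
os.symlink(real, os.path.join(tmp, 'link'))

def count_files(followlinks):
    total = 0
    for dirpath, dirnames, filenames in os.walk(tmp, followlinks=followlinks):
        total += len(filenames)
    return total

# Without followlinks the walk does not descend into the symlink;
# with it, data.txt is reported a second time under 'link'.
n_without = count_files(False)
n_with = count_files(True)
print(n_without, n_with)  # 1 2

shutil.rmtree(tmp)  # clean up the scratch tree
```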
Note that the function can fall into an infinite recursion if there's a symlink that points to a parent directory. (bpo-1273829)

In the os.path module, the splitext() function has been changed to not split on leading period characters. This produces better results when operating on Unix's dot-files. For example, os.path.splitext('.ipython') now returns ('.ipython', '') instead of ('', '.ipython'). (bpo-1115886)

A new function, os.path.relpath(path, start='.'), returns a relative path from the start path, if it's supplied, or from the current working directory to the destination path. (Contributed by Richard Barran; bpo-1339796.)

On Windows, os.path.expandvars() will now expand environment variables given in the form “%var%”, and “~user” will be expanded into the user's home directory path. (Contributed by Josiah Carlson; bpo-957650.)

The Python debugger provided by the pdb module gained a new command: “run” restarts the Python program being debugged and can optionally take new command-line arguments for the program. (Contributed by Rocky Bernstein; bpo-1393667.)

The pdb.post_mortem() function, used to begin debugging a traceback, will now use the traceback returned by sys.exc_info() if no traceback is supplied. (Contributed by Facundo Batista; bpo-1106316.)

The pickletools module now has an optimize() function that takes a string containing a pickle and removes some unused opcodes, returning a shorter pickle that contains the same data structure. (Contributed by Raymond Hettinger.)

A get_data() function was added to the pkgutil module that returns the contents of resource files included with an installed Python package.
For example:

>>> import pkgutil
>>> print pkgutil.get_data('test', 'exception_hierarchy.txt')
BaseException
 +-- SystemExit
 +-- KeyboardInterrupt
 +-- GeneratorExit
 +-- Exception
      +-- StopIteration
      +-- StandardError
...

(Contributed by Paul Moore; bpo-2439.)

The pyexpat module's Parser objects now allow setting their buffer_size attribute to change the size of the buffer used to hold character data. (Contributed by Achim Gaedke; bpo-1137.)

The Queue module now provides queue variants that retrieve entries in different orders. The PriorityQueue class stores queued items in a heap and retrieves them in priority order, and LifoQueue retrieves the most recently added entries first, meaning that it behaves like a stack. (Contributed by Raymond Hettinger.)

The random module's Random objects can now be pickled on a 32-bit system and unpickled on a 64-bit system, and vice versa. Unfortunately, this change also means that Python 2.6's Random objects can't be unpickled correctly on earlier versions of Python. (Contributed by Shawn Ligocki; bpo-1727780.)

The new triangular(low, high, mode) function returns random numbers following a triangular distribution. The returned values are between low and high, not including high itself, and with mode as the most frequently occurring value in the distribution. (Contributed by Wladmir van der Laan and Raymond Hettinger; bpo-1681432.)

Long regular expression searches carried out by the re module will check for signals being delivered, so time-consuming searches can now be interrupted. (Contributed by Josh Hoyt and Ralf Schmitt; bpo-846388.)

The regular expression module is implemented by compiling bytecodes for a tiny regex-specific virtual machine. Untrusted code could create malicious strings of bytecode directly and cause crashes, so Python 2.6 includes a verifier for the regex bytecode.
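The two queue-ordering variants described above can be sketched as follows (in Python 3 the module was renamed from Queue to queue, but the classes behave the same):

```python
import queue  # named Queue in Python 2.6

# PriorityQueue retrieves the smallest (highest-priority) entry first.
pq = queue.PriorityQueue()
for item in [(3, 'low'), (1, 'high'), (2, 'medium')]:
    pq.put(item)
print(pq.get())  # (1, 'high')

# LifoQueue behaves like a stack: last in, first out.
lq = queue.LifoQueue()
for n in [1, 2, 3]:
    lq.put(n)
print(lq.get())  # 3
```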
(Contributed by Guido van Rossum from work for Google App Engine; bpo-3487.)

The rlcompleter module's Completer.complete() method will now ignore exceptions triggered while evaluating a name. (Fixed by Lorenz Quack; bpo-2250.)

The sched module's scheduler instances now have a read-only queue attribute that returns the contents of the scheduler's queue, represented as a list of named tuples with the fields (time, priority, action, argument). (Contributed by Raymond Hettinger; bpo-1861.)

The select module now has wrapper functions for the Linux epoll() and BSD kqueue() system calls. A modify() method was added to the existing poll objects; pollobj.modify(fd, eventmask) takes a file descriptor or file object and an event mask, modifying the recorded event mask for that file. (Contributed by Christian Heimes; bpo-1657.)

The shutil.copytree() function now has an optional ignore argument that takes a callable object. This callable will receive each directory path and a list of the directory's contents, and returns a list of names that will be ignored, not copied.

The shutil module also provides an ignore_patterns() function for use with this new parameter. ignore_patterns() takes an arbitrary number of glob-style patterns and returns a callable that will ignore any files and directories that match any of these patterns. The following example copies a directory tree, but skips both .svn directories and Emacs backup files, which have names ending with '~':

shutil.copytree('Doc/library', '/tmp/library',
                ignore=shutil.ignore_patterns('*~', '.svn'))

(Contributed by Tarek Ziadé; bpo-2663.)

Integrating signal handling with GUI handling event loops like those used by Tkinter or GTk+ has long been a problem; most software ends up polling, waking up every fraction of a second to check if any GUI events have occurred. The signal module can now make this more efficient.
Calling signal.set_wakeup_fd(fd) sets a file descriptor to be used; when a signal is received, a byte is written to that file descriptor. There's also a C-level function, PySignal_SetWakeupFd(), for setting the descriptor.

Event loops will use this by opening a pipe to create two descriptors, one for reading and one for writing. The writable descriptor will be passed to set_wakeup_fd(), and the readable descriptor will be added to the list of descriptors monitored by the event loop via select() or poll(). On receiving a signal, a byte will be written and the main event loop will be woken up, avoiding the need to poll.

(Contributed by Adam Olsen; bpo-1583.)

The siginterrupt() function is now available from Python code, and allows changing whether signals can interrupt system calls or not. (Contributed by Ralf Schmitt.)

The setitimer() and getitimer() functions have also been added (where they're available). setitimer() allows setting interval timers that will cause a signal to be delivered to the process after a specified time, measured in wall-clock time, consumed process time, or combined process+system time. (Contributed by Guilherme Polo; bpo-2240.)

The smtplib module now supports SMTP over SSL thanks to the addition of the SMTP_SSL class. This class supports an interface identical to the existing SMTP class. (Contributed by Monty Taylor.) Both class constructors also have an optional timeout parameter that specifies a timeout for the initial connection attempt, measured in seconds. (Contributed by Facundo Batista.)

An implementation of the LMTP protocol (RFC 2033) was also added to the module. LMTP is used in place of SMTP when transferring e-mail between agents that don't manage a mail queue. (LMTP implemented by Leif Hedstrom; bpo-957003.)

SMTP.starttls() now complies with RFC 3207 and forgets any knowledge obtained from the server not obtained from the TLS negotiation itself.
(Patch contributed by Bill Fenner; bpo-829951.)

The socket module now supports TIPC (https://tipc.sourceforge.net/), a high-performance non-IP-based protocol designed for use in clustered environments. TIPC addresses are 4- or 5-tuples. (Contributed by Alberto Bertogli; bpo-1646.)

A new function, create_connection(), takes an address and connects to it using an optional timeout value, returning the connected socket object. This function also looks up the address's type and connects to it using IPv4 or IPv6 as appropriate. Changing your code to use create_connection() instead of socket(socket.AF_INET, ...) may be all that's required to make your code work with IPv6.

The base classes in the SocketServer module now support calling a handle_timeout() method after a span of inactivity specified by the server's timeout attribute. (Contributed by Michael Pomraning.) The serve_forever() method now takes an optional poll interval measured in seconds, controlling how often the server will check for a shutdown request. (Contributed by Pedro Werneck and Jeffrey Yasskin; bpo-742598, bpo-1193577.)

The sqlite3 module, maintained by Gerhard Häring, has been updated from version 2.3.2 in Python 2.5 to version 2.4.1.

The struct module now supports the C99 _Bool type, using the format character '?'. (Contributed by David Remahl.)

The Popen objects provided by the subprocess module now have terminate(), kill(), and send_signal() methods. On Windows, send_signal() only supports the SIGTERM signal, and all these methods are aliases for the Win32 API function TerminateProcess(). (Contributed by Christian Heimes.)

A new variable in the sys module, float_info, is an object containing information derived from the float.h file about the platform's floating-point support. Attributes of this object include mant_dig (number of digits in the mantissa), epsilon (smallest difference between 1.0 and the next largest value representable), and several others.
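A quick sketch of the float_info attributes just described, assuming an IEEE 754 double platform (virtually every platform Python runs on); the attribute names are unchanged in modern Python:

```python
import sys

# On IEEE 754 doubles the mantissa carries 53 bits and
# epsilon (DBL_EPSILON) is 2**-52.
print(sys.float_info.mant_dig)             # 53
print(sys.float_info.epsilon == 2 ** -52)  # True

# epsilon really is the gap between 1.0 and the next representable float:
print(1.0 + sys.float_info.epsilon > 1.0)  # True
```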
(Contributed by Christian Heimes; bpo-1534.)

Another new variable, dont_write_bytecode, controls whether Python writes any .pyc or .pyo files on importing a module. If this variable is true, the compiled files are not written. The variable is initially set on start-up by supplying the -B switch to the Python interpreter, or by setting the PYTHONDONTWRITEBYTECODE environment variable before running the interpreter. Python code can subsequently change the value of this variable to control whether bytecode files are written or not. (Contributed by Neal Norwitz and Georg Brandl.)

Information about the command-line arguments supplied to the Python interpreter is available by reading attributes of a named tuple available as sys.flags. For example, the verbose attribute is true if Python was executed in verbose mode, debug is true in debugging mode, etc. These attributes are all read-only. (Contributed by Christian Heimes.)

A new function, getsizeof(), takes a Python object and returns the amount of memory used by the object, measured in bytes. Built-in objects return correct results; third-party extensions may not, but can define a __sizeof__() method to return the object's size. (Contributed by Robert Schuppenies; bpo-2898.)

It's now possible to determine the current profiler and tracer functions by calling sys.getprofile() and sys.gettrace(). (Contributed by Georg Brandl; bpo-1648.)

The tarfile module now supports POSIX.1-2001 (pax) tarfiles in addition to the POSIX.1-1988 (ustar) and GNU tar formats that were already supported.
The default format is GNU tar; specify the format parameter to open a file using a different format:

tar = tarfile.open("output.tar", "w", format=tarfile.PAX_FORMAT)

The new encoding and errors parameters specify an encoding and an error handling scheme for character conversions. 'strict', 'ignore', and 'replace' are the three standard ways Python can handle errors; 'utf-8' is a special value that replaces bad characters with their UTF-8 representation. (Character conversions occur because the PAX format supports Unicode filenames, defaulting to UTF-8 encoding.)

The TarFile.add() method now accepts an exclude argument that's a function that can be used to exclude certain filenames from an archive. The function must take a filename and return true if the file should be excluded or false if it should be archived. The function is applied to both the name initially passed to add() and to the names of files in recursively added directories.

(All changes contributed by Lars Gustäbel.)

An optional timeout parameter was added to the telnetlib.Telnet class constructor, specifying a timeout measured in seconds. (Added by Facundo Batista.)

The tempfile.NamedTemporaryFile class usually deletes the temporary file it created when the file is closed. This behaviour can now be changed by passing delete=False to the constructor. (Contributed by Damien Miller; bpo-1537850.)

A new class, SpooledTemporaryFile, behaves like a temporary file but stores its data in memory until a maximum size is exceeded. On reaching that limit, the contents will be written to an on-disk temporary file. (Contributed by Dustin J. Mitchell.)

The NamedTemporaryFile and SpooledTemporaryFile classes both work as context managers, so you can write with tempfile.NamedTemporaryFile() as tmp: ....
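A small sketch of the delete=False behaviour described above; the parameter is unchanged in modern Python:

```python
import os
import tempfile

# With delete=False, the file survives close() and must be removed manually.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b'scratch data')
tmp.close()

print(os.path.exists(tmp.name))  # True - still on disk after closing
os.unlink(tmp.name)              # clean up ourselves
print(os.path.exists(tmp.name))  # False
```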
(Contributed by Alexander Belopolsky; bpo-2021.)

The test.test_support module gained a number of context managers useful for writing tests. EnvironmentVarGuard() is a context manager that temporarily changes environment variables and automatically restores them to their old values.

Another context manager, TransientResource, can surround calls to resources that may or may not be available; it will catch and ignore a specified list of exceptions. For example, a network test may ignore certain failures when connecting to an external web site:

with test_support.TransientResource(IOError, errno=errno.ETIMEDOUT):
    f = urllib.urlopen('https://sf.net')
    ...

Finally, check_warnings() resets the warnings module's warning filters and returns an object that will record all warning messages triggered (bpo-3781):

with test_support.check_warnings() as wrec:
    warnings.simplefilter("always")
    # ... code that triggers a warning ...
    assert str(wrec.message) == "function is outdated"
    assert len(wrec.warnings) == 1, "Multiple warnings raised"

(Contributed by Brett Cannon.)

The textwrap module can now preserve existing whitespace at the beginnings and ends of the newly created lines by specifying drop_whitespace=False as an argument:

>>> S = """This sentence has a bunch of
...   extra whitespace."""
>>> print textwrap.fill(S, width=15)
This sentence
has a bunch of
extra
whitespace.
>>> print textwrap.fill(S, drop_whitespace=False, width=15)
This sentence
has a bunch of
   extra
whitespace.
>>>

(Contributed by Dwayne Bailey; bpo-1581073.)

The threading module API is being changed to use properties such as daemon instead of setDaemon() and isDaemon() methods, and some methods have been renamed to use underscores instead of camel-case; for example, the activeCount() method is renamed to active_count(). Both the 2.6 and 3.0 versions of the module support the same properties and renamed methods, but don't remove the old methods.
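A sketch of the new threading spellings, which are the ones that survive into Python 3:

```python
import threading

# daemon is now a property rather than setDaemon()/isDaemon() calls,
# and activeCount() gained the underscore spelling active_count().
t = threading.Thread(target=lambda: None)
t.daemon = True
print(t.daemon)  # True

t.start()
t.join()
print(threading.active_count() >= 1)  # True - at least the main thread
```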
No date has been set for the deprecation of the old APIs in Python 3.x; the old APIs won't be removed in any 2.x version. (Carried out by several people, most notably Benjamin Peterson.)

The threading module's Thread objects gained an ident property that returns the thread's identifier, a nonzero integer. (Contributed by Gregory P. Smith; bpo-2871.)

The timeit module now accepts callables as well as strings for the statement being timed and for the setup code. Two convenience functions were added for creating Timer instances: repeat(stmt, setup, time, repeat, number) and timeit(stmt, setup, time, number) create an instance and call the corresponding method. (Contributed by Erik Demaine; bpo-1533909.)

The Tkinter module now accepts lists and tuples for options, separating the elements by spaces before passing the resulting value to Tcl/Tk. (Contributed by Guilherme Polo; bpo-2906.)

The turtle module for turtle graphics was greatly enhanced by Gregor Lingl. New features in the module include:

Better animation of turtle movement and rotation.

Control over turtle movement using the new delay(), tracer(), and speed() methods.

The ability to set new shapes for the turtle, and to define a new coordinate system.

Turtles now have an undo() method that can roll back actions.

Simple support for reacting to input events such as mouse and keyboard activity, making it possible to write simple games.

A turtle.cfg file can be used to customize the starting appearance of the turtle's screen.

The module's docstrings can be replaced by new docstrings that have been translated into another language.

An optional timeout parameter was added to the urllib.urlopen function and the urllib.ftpwrapper class constructor, as well as the urllib2.urlopen function. The parameter specifies a timeout measured in seconds. For example:

>>> u = urllib2.urlopen("http://slow.example.com", timeout=3)
Traceback (most recent call last):
...
urllib2.URLError: <urlopen error timed out>
>>>

(Added by Facundo Batista.)

The Unicode database provided by the unicodedata module has been updated to version 5.1.0. (Updated by Martin von Löwis; bpo-3811.)

The warnings module's formatwarning() and showwarning() gained an optional line argument that can be used to supply the line of source code. (Added as part of bpo-1631171, which re-implemented part of the warnings module in C code.)

A new function, catch_warnings(), is a context manager intended for testing purposes that lets you temporarily modify the warning filters and then restore their original values (bpo-3781).

The XML-RPC SimpleXMLRPCServer and DocXMLRPCServer classes can now be prevented from immediately opening and binding to their socket by passing False as the bind_and_activate constructor parameter. This can be used to modify the instance's allow_reuse_address attribute before calling the server_bind() and server_activate() methods to open the socket and begin listening for connections. (Contributed by Peter Parente; bpo-1599845.)

SimpleXMLRPCServer also has a _send_traceback_header attribute; if true, the exception and formatted traceback are returned as HTTP headers "X-Exception" and "X-Traceback". This feature is for debugging purposes only and should not be used on production servers because the tracebacks might reveal passwords or other sensitive information. (Contributed by Alan McIntyre as part of his project for Google's Summer of Code 2007.)

The xmlrpclib module no longer automatically converts datetime.date and datetime.time to the xmlrpclib.DateTime type; the conversion semantics were not necessarily correct for all applications. Code using xmlrpclib should convert date and time instances.
(bpo-1330538) The code can also handle dates before 1900 (contributed by Ralf Schmitt; bpo-2014) and 64-bit integers represented by using <i8> in XML-RPC responses (contributed by Riku Lindblad; bpo-2985).

The zipfile module's ZipFile class now has extract() and extractall() methods that will unpack a single file or all the files in the archive to the current directory, or to a specified directory:

z = zipfile.ZipFile('python-251.zip')

# Unpack a single file, writing it relative
# to the /tmp directory.
z.extract('Python/sysmodule.c', '/tmp')

# Unpack all the files in the archive.
z.extractall()

(Contributed by Alan McIntyre; bpo-467924.)

The open(), read() and extract() methods can now take either a filename or a ZipInfo object. This is useful when an archive accidentally contains a duplicated filename. (Contributed by Graham Horler; bpo-1775025.)

Finally, zipfile now supports using Unicode filenames for archived files. (Contributed by Alexey Borzenkov; bpo-1734346.)

The ast module

The ast module provides an Abstract Syntax Tree representation of Python code, and Armin Ronacher contributed a set of helper functions that perform a variety of common tasks.
These will be useful for HTML templating packages, code analyzers, and similar tools that process Python code.

The parse() function takes an expression and returns an AST. The dump() function outputs a representation of a tree, suitable for debugging:

import ast

t = ast.parse("""
d = {}
for i in 'abcdefghijklm':
    d[i + i] = ord(i) - ord('a') + 1
print d
""")
print ast.dump(t)

This outputs a deeply nested tree:

Module(body=[
  Assign(targets=[
    Name(id='d', ctx=Store())
  ], value=Dict(keys=[], values=[])),
  For(target=Name(id='i', ctx=Store()),
      iter=Str(s='abcdefghijklm'), body=[
    Assign(targets=[
      Subscript(value=
        Name(id='d', ctx=Load()),
        slice=
        Index(value=
          BinOp(left=Name(id='i', ctx=Load()), op=Add(),
                right=Name(id='i', ctx=Load()))), ctx=Store())
    ], value=
      BinOp(left=
        BinOp(left=
          Call(func=
            Name(id='ord', ctx=Load()), args=[
            Name(id='i', ctx=Load())
          ], keywords=[], starargs=None, kwargs=None),
          op=Sub(), right=Call(func=
            Name(id='ord', ctx=Load()), args=[
            Str(s='a')
          ], keywords=[], starargs=None, kwargs=None)),
        op=Add(), right=Num(n=1)))
  ], orelse=[]),
  Print(dest=None, values=[
    Name(id='d', ctx=Load())
  ], nl=True)
])

The literal_eval() function takes a string or an AST representing a literal expression, parses and evaluates it, and returns the resulting value. A literal expression is a Python expression containing only strings, numbers, dictionaries, etc. but no statements or function calls.
If you need to evaluate an expression but cannot accept the security risk of using an eval() call, literal_eval() will handle it safely:

>>> literal = '("a", "b", {2:4, 3:8, 1:2})'
>>> print ast.literal_eval(literal)
('a', 'b', {1: 2, 2: 4, 3: 8})
>>> print ast.literal_eval('"a" + "b"')
Traceback (most recent call last):
...
ValueError: malformed string

The module also includes NodeVisitor and NodeTransformer classes for traversing and modifying an AST, and functions for common transformations such as changing line numbers.

The future_builtins module

Python 3.0 makes many changes to the repertoire of built-in functions, and most of the changes can't be introduced in the Python 2.x series because they would break compatibility. The future_builtins module provides versions of these built-in functions that can be imported when writing 3.0-compatible code.

The functions in this module currently include:

ascii(obj): equivalent to repr(). In Python 3.0, repr() will return a Unicode string, while ascii() will return a pure ASCII bytestring.

filter(predicate, iterable), map(func, iterable1, ...): the 3.0 versions return iterators, unlike the 2.x builtins which return lists.

hex(value), oct(value): instead of calling the __hex__() or __oct__() methods, these versions will call the __index__() method and convert the result to hexadecimal or octal. oct() will use the new 0o notation for its result.

The json module: JavaScript Object Notation

The new json module supports the encoding and decoding of Python types in JSON (JavaScript Object Notation). JSON is a lightweight interchange format often used in web applications. For more information about JSON, see http://www.json.org.

json comes with support for decoding and encoding most built-in Python types.
The following example encodes and decodes a dictionary:

>>> import json
>>> data = {"spam": "foo", "parrot": 42}
>>> in_json = json.dumps(data)  # Encode the data
>>> in_json
'{"parrot": 42, "spam": "foo"}'
>>> json.loads(in_json)  # Decode into a Python object
{u'parrot': 42, u'spam': u'foo'}

It's also possible to write your own decoders and encoders to support more types. Pretty-printing of the JSON strings is also supported.

json (originally called simplejson) was written by Bob Ippolito.

The plistlib module: A Property-List Parser

The .plist format is commonly used on Mac OS X to store basic data types (numbers, strings, lists, and dictionaries) by serializing them into an XML-based format. It resembles the XML-RPC serialization of data types.

Despite being primarily used on Mac OS X, the format has nothing Mac-specific about it and the Python implementation works on any platform that Python supports, so the plistlib module has been promoted to the standard library.

Using the module is simple:

import sys
import plistlib
import datetime

# Create data structure
data_struct = dict(lastAccessed=datetime.datetime.now(),
                   version=1,
                   categories=('Personal', 'Shared', 'Private'))

# Create string containing XML.
plist_str = plistlib.writePlistToString(data_struct)
new_struct = plistlib.readPlistFromString(plist_str)
print data_struct
print new_struct

# Write data structure to a file and read it back.
plistlib.writePlist(data_struct, '/tmp/customizations.plist')
new_struct = plistlib.readPlist('/tmp/customizations.plist')

# read/writePlist accepts file-like objects as well as paths.
plistlib.writePlist(data_struct, sys.stdout)

ctypes Enhancements

Thomas Heller continued to maintain and enhance the ctypes module.

ctypes now supports a c_bool datatype that represents the C99 bool type.
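A quick sketch of the c_bool datatype just mentioned, which is unchanged in modern Python (the one-byte size is typical for C99 _Bool, though not formally guaranteed):

```python
import ctypes

# c_bool truth-tests its initializer, like Python's bool().
print(ctypes.c_bool(True).value)  # True
print(ctypes.c_bool(0).value)     # False
print(ctypes.c_bool(1).value)     # True

# On common platforms it occupies a single byte, like C99 _Bool.
print(ctypes.sizeof(ctypes.c_bool))  # 1
```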
(Contributed by David Remahl; bpo-1649190.)

The ctypes string, buffer and array types have improved support for extended slicing syntax, where various combinations of (start, stop, step) are supplied. (Implemented by Thomas Wouters.)

All ctypes data types now support from_buffer() and from_buffer_copy() methods that create a ctypes instance based on a provided buffer object. from_buffer_copy() copies the contents of the object, while from_buffer() will share the same memory area.

A new calling convention tells ctypes to clear the errno or Win32 LastError variables at the outset of each wrapped call. (Implemented by Thomas Heller; bpo-1798.)

You can now retrieve the Unix errno variable after a function call. When creating a wrapped function, you can supply use_errno=True as a keyword parameter to the DLL() function and then call the module-level methods set_errno() and get_errno() to set and retrieve the error value.

The Win32 LastError variable is similarly supported by the DLL(), OleDLL(), and WinDLL() functions. You supply use_last_error=True as a keyword parameter and then call the module-level methods set_last_error() and get_last_error().

The byref() function, used to retrieve a pointer to a ctypes instance, now has an optional offset parameter that is a byte count that will be added to the returned pointer.

Improved SSL Support

Bill Janssen made extensive improvements to Python 2.6's support for the Secure Sockets Layer by adding a new module, ssl, that's built atop the OpenSSL library. This new module provides more control over the protocol negotiated, the X.509 certificates used, and has better support for writing SSL servers (as opposed to clients) in Python.
The existing SSL support in the socket module hasn\u2019t been removed and continues to work, though it will be removed in Python 3.0.\nTo use the new module, you must first create a TCP connection in the usual way and then pass it to the ssl.wrap_socket() function. It\u2019s possible to specify whether a certificate is required, and to obtain certificate info by calling the getpeercert() method.\nSee also\nThe documentation for the ssl module.\nDeprecations and Removals\u00b6\nString exceptions have been removed. Attempting to use them raises a TypeError.\nChanges to the Exception interface as dictated by PEP 352 continue to be made. For 2.6, the message attribute is being deprecated in favor of the args attribute.\n(3.0-warning mode) Python 3.0 will feature a reorganized standard library that will drop many outdated modules and rename others. Python 2.6 running in 3.0-warning mode will warn about these modules when they are imported.\nThe list of deprecated modules is: audiodev, bgenlocations, buildtools, bundlebuilder, Canvas, compiler, dircache, dl, fpformat, gensuitemodule, ihooks, imageop, imgfile, linuxaudiodev, mhlib, mimetools, multifile, new, pure, statvfs, sunaudiodev, test.testall, and toaiff.\nThe gopherlib module has been removed.\nThe MimeWriter module and mimify module have been deprecated; use the email package instead.\nThe md5 module has been deprecated; use the hashlib module instead.\nThe posixfile module has been deprecated; fcntl.lockf() provides better locking.\nThe popen2 module has been deprecated; use the subprocess module.\nThe rgbimg module has been removed.\nThe sets module has been deprecated; it\u2019s better to use the built-in set and frozenset types.\nThe sha module has been deprecated; use the hashlib module instead.\nBuild and C API Changes\u00b6\nChanges to Python\u2019s build process and to the C API include:\nPython now must be compiled with C89 compilers (after 19 years!).
This means that the Python source tree has dropped its own implementations of memmove() and strerror(), which are in the C89 standard library.\nPython 2.6 can be built with Microsoft Visual Studio 2008 (version 9.0), and this is the new default compiler. See the PCbuild directory for the build files. (Implemented by Christian Heimes.)\nOn Mac OS X, Python 2.6 can be compiled as a 4-way universal build. The configure script can take a --with-universal-archs=[32-bit|64-bit|all] switch, controlling whether the binaries are built for 32-bit architectures (x86, PowerPC), 64-bit (x86-64 and PPC-64), or both. (Contributed by Ronald Oussoren.)\nA new function added in Python 2.6.6, PySys_SetArgvEx(), sets the value of sys.argv and can optionally update sys.path to include the directory containing the script named by sys.argv[0], depending on the value of an updatepath parameter.\nThis function was added to close a security hole for applications that embed Python. The old function, PySys_SetArgv(), would always update sys.path, and sometimes it would add the current directory. This meant that, if you ran an application embedding Python in a directory controlled by someone else, attackers could put a Trojan-horse module in the directory (say, a file named os.py) that your application would then import and run.\nIf you maintain a C/C++ application that embeds Python, check whether you\u2019re calling PySys_SetArgv() and carefully consider whether the application should be using PySys_SetArgvEx() with updatepath set to false.\nNote that using this function will break compatibility with Python versions 2.6.5 and earlier; if you have to continue working with earlier versions, you can leave the call to PySys_SetArgv() alone and call PyRun_SimpleString(\"sys.path.pop(0)\\n\") afterwards to discard the first sys.path component.\nSecurity issue reported as CVE-2008-5983; discussed in gh-50003, and fixed by Antoine Pitrou.\nThe BerkeleyDB module now has a C API object, available as bsddb.db.api. This object can be used by other C extensions that wish to use the bsddb module for their own purposes. (Contributed by Duncan Grisby.)\nThe new buffer interface, previously described in the PEP 3118 section, adds PyObject_GetBuffer() and PyBuffer_Release(), as well as a few other functions.\nPython\u2019s use of the C stdio library is now thread-safe, or at least as thread-safe as the underlying library is. A long-standing potential bug occurred if one thread closed a file object while another thread was reading from or writing to the object. In 2.6 file objects have a reference count, manipulated by the PyFile_IncUseCount() and PyFile_DecUseCount() functions. File objects can\u2019t be closed unless the reference count is zero. PyFile_IncUseCount() should be called while the GIL is still held, before carrying out an I/O operation using the FILE * pointer, and PyFile_DecUseCount() should be called immediately after the GIL is re-acquired. (Contributed by Antoine Pitrou and Gregory P. Smith.)\nImporting modules simultaneously in two different threads no longer deadlocks; it will now raise an ImportError. A new API function, PyImport_ImportModuleNoBlock(), will look for a module in sys.modules first, then try to import it after acquiring an import lock. If the import lock is held by another thread, an ImportError is raised.
(Contributed by Christian Heimes.)\nSeveral functions return information about the platform\u2019s floating-point support. PyFloat_GetMax() returns the maximum representable floating-point value, and PyFloat_GetMin() returns the minimum positive value. PyFloat_GetInfo() returns an object containing more information from the float.h file, such as \"mant_dig\" (number of digits in the mantissa), \"epsilon\" (smallest difference between 1.0 and the next largest value representable), and several others. (Contributed by Christian Heimes; bpo-1534.)\nC functions and methods that use PyComplex_AsCComplex() will now accept arguments that have a __complex__() method. In particular, the functions in the cmath module will now accept objects with this method. This is a backport of a Python 3.0 change. (Contributed by Mark Dickinson; bpo-1675423.)\nPython\u2019s C API now includes two functions for case-insensitive string comparisons, PyOS_stricmp(char*, char*) and PyOS_strnicmp(char*, char*, Py_ssize_t). (Contributed by Christian Heimes; bpo-1635.)\nMany C extensions define their own little macro for adding integers and strings to the module\u2019s dictionary in the init* function. Python 2.6 finally defines standard macros for adding values to a module, PyModule_AddStringMacro and PyModule_AddIntMacro(). (Contributed by Christian Heimes.)\nSome macros were renamed in both 3.0 and 2.6 to make it clearer that they are macros, not functions. Py_Size() became Py_SIZE(), Py_Type() became Py_TYPE(), and Py_Refcnt() became Py_REFCNT(). The mixed-case macros are still available in Python 2.6 for backward compatibility. (bpo-1629)\nDistutils now places C extensions it builds in a different directory when running on a debug version of Python. (Contributed by Collin Winter; bpo-1530959.)\nSeveral basic data types, such as integers and strings, maintain internal free lists of objects that can be re-used.\nThe data structures for these free lists now follow a naming convention: the variable is always named free_list, the counter is always named numfree, and a macro Py_MAXFREELIST is always defined.\nA new Makefile target, \u201cmake patchcheck\u201d, prepares the Python source tree for making a patch: it fixes trailing whitespace in all modified .py files, checks whether the documentation has been changed, and reports whether the Misc/ACKS and Misc/NEWS files have been updated. (Contributed by Brett Cannon.)\nAnother new target, \u201cmake profile-opt\u201d, compiles a Python binary using GCC\u2019s profile-guided optimization. It compiles Python with profiling enabled, runs the test suite to obtain a set of profiling results, and then compiles using these results for optimization. (Contributed by Gregory P. Smith.)\nPort-Specific Changes: Windows\u00b6\nThe support for Windows 95, 98, ME and NT4 has been dropped. Python 2.6 requires at least Windows 2000 SP4.\nThe new default compiler on Windows is Visual Studio 2008 (version 9.0). The build directories for Visual Studio 2003 (version 7.1) and 2005 (version 8.0) were moved into the PC/ directory. The new PCbuild directory supports cross compilation for X64, debug builds and Profile Guided Optimization (PGO). PGO builds are roughly 10% faster than normal builds. (Contributed by Christian Heimes with help from Amaury Forgeot d\u2019Arc and Martin von L\u00f6wis.)\nThe msvcrt module now supports both the normal and wide char variants of the console I/O API. The getwch() function reads a keypress and returns a Unicode value, as does the getwche() function. The putwch() function takes a Unicode character and writes it to the console. (Contributed by Christian Heimes.)\nos.path.expandvars() will now expand environment variables in the form \u201c%var%\u201d, and \u201c~user\u201d will be expanded into the user\u2019s home directory path. (Contributed by Josiah Carlson; bpo-957650.)\nThe socket module\u2019s socket objects now have an ioctl() method that provides a limited interface to the WSAIoctl() system interface.\nThe _winreg module now has a function, ExpandEnvironmentStrings(), that expands environment variable references such as %NAME% in an input string. The handle objects provided by this module now support the context protocol, so they can be used in with statements. (Contributed by Christian Heimes.)\n_winreg also has better support for x64 systems, exposing the DisableReflectionKey(), EnableReflectionKey(), and QueryReflectionKey() functions, which enable and disable registry reflection for 32-bit processes running on 64-bit systems. (bpo-1753245)\nThe msilib module\u2019s Record object gained GetInteger() and GetString() methods that return field values as an integer or a string. (Contributed by Floris Bruynooghe; bpo-2125.)\nPort-Specific Changes: Mac OS X\u00b6\nWhen compiling a framework build of Python, you can now specify the framework name to be used by providing the --with-framework-name= option to the configure script.\nThe macfs module has been removed. This in turn required the macostools.touched() function to be removed because it depended on the macfs module.
(bpo-1490190)\nMany other Mac OS modules have been deprecated and will be removed in Python 3.0: _builtinSuites, aepack, aetools, aetypes, applesingle, appletrawmain, appletrunner, argvemulator, Audio_mac, autoGIL, Carbon, cfmfile, CodeWarrior, ColorPicker, EasyDialogs, Explorer, Finder, FrameWork, findertools, ic, icglue, icopen, macerrors, MacOS, macfs, macostools, macresource, MiniAEFrame, Nav, Netscape, OSATerminology, pimp, PixMapWrapper, StdSuites, SystemEvents, Terminal, and terminalcommand.\nPort-Specific Changes: IRIX\u00b6\nA number of old IRIX-specific modules were deprecated and will be removed in Python 3.0: al and AL, cd, cddb, cdplayer, CL and cl, DEVICE, ERRNO, FILE, FL and fl, flp, fm, GET, GLWS, GL and gl, IN, IOCTL, jpeg, panelparser, readcd, SV and sv, torgb, videoreader, and WAIT.\nPorting to Python 2.6\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code:\nClasses that aren\u2019t supposed to be hashable should set __hash__ = None in their definitions to indicate the fact.\nString exceptions have been removed. Attempting to use them raises a TypeError.\nThe __init__() method of collections.deque now clears any existing contents of the deque before adding elements from the iterable. This change makes the behavior match list.__init__().\nobject.__init__() previously accepted arbitrary arguments and keyword arguments, ignoring them. In Python 2.6, this is no longer allowed and will result in a TypeError. This will affect __init__() methods that end up calling the corresponding method on object (perhaps through using super()). See bpo-1683368 for discussion.\nThe Decimal constructor now accepts leading and trailing whitespace when passed a string. Previously it would raise an InvalidOperation exception.\nOn the other hand, the create_decimal() method of Context objects now explicitly disallows extra whitespace, raising a ConversionSyntax exception.\nDue to an implementation accident, if you passed a file path to the built-in __import__() function, it would actually import the specified file. This was never intended to work, however, and the implementation now explicitly checks for this case and raises an ImportError.\nC API: the PyImport_Import() and PyImport_ImportModule() functions now default to absolute imports, not relative imports. This will affect C extensions that import other modules.\nC API: extension data types that shouldn\u2019t be hashable should define their tp_hash slot to PyObject_HashNotImplemented().\nThe socket module exception socket.error now inherits from IOError. Previously it wasn\u2019t a subclass of StandardError but now it is, through IOError. (Implemented by Gregory P. Smith; bpo-1706815.)\nThe xmlrpclib module no longer automatically converts datetime.date and datetime.time to the xmlrpclib.DateTime type; the conversion semantics were not necessarily correct for all applications. Code using xmlrpclib should convert date and time instances. (bpo-1330538)\n(3.0-warning mode) The Exception class now warns when accessed using slicing or index access; having Exception behave like a tuple is being phased out.\n(3.0-warning mode) Inequality comparisons between two dictionaries or two objects that don\u2019t implement comparison methods are reported as warnings. dict1 == dict2 still works, but dict1 < dict2 is being phased out.\nComparisons between cells, which are an implementation detail of Python\u2019s scoping rules, also cause warnings because such comparisons are forbidden entirely in 3.0.\nFor applications that embed Python:\nThe PySys_SetArgvEx() function was added in Python 2.6.6, letting applications close a security hole when the existing PySys_SetArgv() function was used.
Check whether you\u2019re calling PySys_SetArgv() and carefully consider whether the application should be using PySys_SetArgvEx() with updatepath set to false.\nAcknowledgements\u00b6\nThe author would like to thank the following people for offering suggestions, corrections and assistance with various drafts of this article: Georg Brandl, Steve Brown, Nick Coghlan, Ralph Corderoy, Jim Jewett, Kent Johnson, Chris Lambacher, Martin Michlmayr, Antoine Pitrou, Brian Warner.", "code_snippets": [
"\n", " ", " ", "\n", " ", "\n", " ", "\n\n", "\n", " ", "\n", " ", " ", "\n\n", "\n", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 27955} +{"url": "https://docs.python.org/3/howto/sorting.html", "title": "Sorting Techniques", "content": "Sorting Techniques\u00b6\n- Author:\nAndrew Dalke and Raymond Hettinger\nPython lists have a built-in list.sort()\nmethod that modifies the list\nin-place. There is also a sorted()\nbuilt-in function that builds a new\nsorted list from an iterable.\nIn this document, we explore the various techniques for sorting data using Python.\nSorting Basics\u00b6\nA simple ascending sort is very easy: just call the sorted()\nfunction. It\nreturns a new sorted list:\n>>> sorted([5, 2, 3, 1, 4])\n[1, 2, 3, 4, 5]\nYou can also use the list.sort()\nmethod. It modifies the list\nin-place (and returns None\nto avoid confusion). Usually it\u2019s less convenient\nthan sorted()\n- but if you don\u2019t need the original list, it\u2019s slightly\nmore efficient.\n>>> a = [5, 2, 3, 1, 4]\n>>> a.sort()\n>>> a\n[1, 2, 3, 4, 5]\nAnother difference is that the list.sort()\nmethod is only defined for\nlists. In contrast, the sorted()\nfunction accepts any iterable.\n>>> sorted({1: 'D', 2: 'B', 3: 'B', 4: 'E', 5: 'A'})\n[1, 2, 3, 4, 5]\nKey Functions\u00b6\nThe list.sort()\nmethod and the functions sorted()\n,\nmin()\n, max()\n, heapq.nsmallest()\n, and\nheapq.nlargest()\nhave a key parameter to specify a function (or\nother callable) to be called on each list element prior to making\ncomparisons.\nFor example, here\u2019s a case-insensitive string comparison using\nstr.casefold()\n:\n>>> sorted(\"This is a test string from Andrew\".split(), key=str.casefold)\n['a', 'Andrew', 'from', 'is', 'string', 'test', 'This']\nThe value of the key parameter should be a function (or other callable) that takes a single argument and returns a key to use for sorting purposes. 
This technique is fast because the key function is called exactly once for each input record.\nA common pattern is to sort complex objects using some of the object\u2019s indices as keys. For example:\n>>> student_tuples = [\n... ('john', 'A', 15),\n... ('jane', 'B', 12),\n... ('dave', 'B', 10),\n... ]\n>>> sorted(student_tuples, key=lambda student: student[2]) # sort by age\n[('dave', 'B', 10), ('jane', 'B', 12), ('john', 'A', 15)]\nThe same technique works for objects with named attributes. For example:\n>>> class Student:\n... def __init__(self, name, grade, age):\n... self.name = name\n... self.grade = grade\n... self.age = age\n... def __repr__(self):\n... return repr((self.name, self.grade, self.age))\n>>> student_objects = [\n... Student('john', 'A', 15),\n... Student('jane', 'B', 12),\n... Student('dave', 'B', 10),\n... ]\n>>> sorted(student_objects, key=lambda student: student.age) # sort by age\n[('dave', 'B', 10), ('jane', 'B', 12), ('john', 'A', 15)]\nObjects with named attributes can be made by a regular class as shown\nabove, or they can be instances of dataclass\nor\na named tuple.\nOperator Module Functions and Partial Function Evaluation\u00b6\nThe key function patterns shown above are very common, so Python provides\nconvenience functions to make accessor functions easier and faster. The\noperator\nmodule has itemgetter()\n,\nattrgetter()\n, and a methodcaller()\nfunction.\nUsing those functions, the above examples become simpler and faster:\n>>> from operator import itemgetter, attrgetter\n>>> sorted(student_tuples, key=itemgetter(2))\n[('dave', 'B', 10), ('jane', 'B', 12), ('john', 'A', 15)]\n>>> sorted(student_objects, key=attrgetter('age'))\n[('dave', 'B', 10), ('jane', 'B', 12), ('john', 'A', 15)]\nThe operator module functions allow multiple levels of sorting. 
For example, to sort by grade then by age:\n>>> sorted(student_tuples, key=itemgetter(1,2))\n[('john', 'A', 15), ('dave', 'B', 10), ('jane', 'B', 12)]\n>>> sorted(student_objects, key=attrgetter('grade', 'age'))\n[('john', 'A', 15), ('dave', 'B', 10), ('jane', 'B', 12)]\nThe functools\nmodule provides another helpful tool for making\nkey-functions. The partial()\nfunction can reduce the\narity of a multi-argument\nfunction making it suitable for use as a key-function.\n>>> from functools import partial\n>>> from unicodedata import normalize\n>>> names = 'Zo\u00eb \u00c5bj\u00f8rn N\u00fa\u00f1ez \u00c9lana Zeke Abe Nubia Eloise'.split()\n>>> sorted(names, key=partial(normalize, 'NFD'))\n['Abe', '\u00c5bj\u00f8rn', 'Eloise', '\u00c9lana', 'Nubia', 'N\u00fa\u00f1ez', 'Zeke', 'Zo\u00eb']\n>>> sorted(names, key=partial(normalize, 'NFC'))\n['Abe', 'Eloise', 'Nubia', 'N\u00fa\u00f1ez', 'Zeke', 'Zo\u00eb', '\u00c5bj\u00f8rn', '\u00c9lana']\nAscending and Descending\u00b6\nBoth list.sort()\nand sorted()\naccept a reverse parameter with a\nboolean value. This is used to flag descending sorts. For example, to get the\nstudent data in reverse age order:\n>>> sorted(student_tuples, key=itemgetter(2), reverse=True)\n[('john', 'A', 15), ('jane', 'B', 12), ('dave', 'B', 10)]\n>>> sorted(student_objects, key=attrgetter('age'), reverse=True)\n[('john', 'A', 15), ('jane', 'B', 12), ('dave', 'B', 10)]\nSort Stability and Complex Sorts\u00b6\nSorts are guaranteed to be stable. That means that when multiple records have the same key, their original order is preserved.\n>>> data = [('red', 1), ('blue', 1), ('red', 2), ('blue', 2)]\n>>> sorted(data, key=itemgetter(0))\n[('blue', 1), ('blue', 2), ('red', 1), ('red', 2)]\nNotice how the two records for blue retain their original order so that\n('blue', 1)\nis guaranteed to precede ('blue', 2)\n.\nThis wonderful property lets you build complex sorts in a series of sorting steps. 
For example, to sort the student data by descending grade and then ascending age, do the age sort first and then sort again using grade:\n>>> s = sorted(student_objects, key=attrgetter('age')) # sort on secondary key\n>>> sorted(s, key=attrgetter('grade'), reverse=True) # now sort on primary key, descending\n[('dave', 'B', 10), ('jane', 'B', 12), ('john', 'A', 15)]\nThis can be abstracted out into a wrapper function that can take a list and tuples of field and order to sort them on multiple passes.\n>>> def multisort(xs, specs):\n... for key, reverse in reversed(specs):\n... xs.sort(key=attrgetter(key), reverse=reverse)\n... return xs\n>>> multisort(list(student_objects), (('grade', True), ('age', False)))\n[('dave', 'B', 10), ('jane', 'B', 12), ('john', 'A', 15)]\nThe Timsort algorithm used in Python does multiple sorts efficiently because it can take advantage of any ordering already present in a dataset.\nDecorate-Sort-Undecorate\u00b6\nThis idiom is called Decorate-Sort-Undecorate after its three steps:\nFirst, the initial list is decorated with new values that control the sort order.\nSecond, the decorated list is sorted.\nFinally, the decorations are removed, creating a list that contains only the initial values in the new order.\nFor example, to sort the student data by grade using the DSU approach:\n>>> decorated = [(student.grade, i, student) for i, student in enumerate(student_objects)]\n>>> decorated.sort()\n>>> [student for grade, i, student in decorated] # undecorate\n[('john', 'A', 15), ('jane', 'B', 12), ('dave', 'B', 10)]\nThis idiom works because tuples are compared lexicographically; the first items are compared; if they are the same then the second items are compared, and so on.\nIt is not strictly necessary in all cases to include the index i in the decorated list, but including it gives two benefits:\nThe sort is stable \u2013 if two items have the same key, their order will be preserved in the sorted list.\nThe original items do not have to be 
comparable because the ordering of the decorated tuples will be determined by at most the first two items. So for example the original list could contain complex numbers which cannot be sorted directly.\nAnother name for this idiom is Schwartzian transform, after Randal L. Schwartz, who popularized it among Perl programmers.\nNow that Python sorting provides key-functions, this technique is not often needed.\nComparison Functions\u00b6\nUnlike key functions that return an absolute value for sorting, a comparison function computes the relative ordering for two inputs.\nFor example, a balance scale\ncompares two samples giving a relative ordering: lighter, equal, or heavier.\nLikewise, a comparison function such as cmp(a, b)\nwill return a negative\nvalue for less-than, zero if the inputs are equal, or a positive value for\ngreater-than.\nIt is common to encounter comparison functions when translating algorithms from\nother languages. Also, some libraries provide comparison functions as part of\ntheir API. For example, locale.strcoll()\nis a comparison function.\nTo accommodate those situations, Python provides\nfunctools.cmp_to_key\nto wrap the comparison function\nto make it usable as a key function:\nsorted(words, key=cmp_to_key(strcoll)) # locale-aware sort order\nStrategies For Unorderable Types and Values\u00b6\nA number of type and value issues can arise when sorting. 
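Tying the previous section to this one, here is a hedged sketch (the comparison function by_string_form is invented for illustration): a cmp-style function, wrapped with functools.cmp_to_key(), that orders otherwise non-comparable values by their string form:

```python
from functools import cmp_to_key

def by_string_form(a, b):
    # Invented fallback rule: compare any two values by their str() form.
    ka, kb = str(a), str(b)
    return (ka > kb) - (ka < kb)  # negative, zero, or positive

data = ['twelve', '11', 10]  # int and str cannot be compared directly
print(sorted(data, key=cmp_to_key(by_string_form)))  # [10, '11', 'twelve']
```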
Here are some strategies that can help:\nConvert non-comparable input types to strings prior to sorting:\n>>> data = ['twelve', '11', 10]\n>>> sorted(map(str, data))\n['10', '11', 'twelve']\nThis is needed because most cross-type comparisons raise a TypeError.\nRemove special values prior to sorting:\n>>> from math import isnan\n>>> from itertools import filterfalse\n>>> data = [3.3, float('nan'), 1.1, 2.2]\n>>> sorted(filterfalse(isnan, data))\n[1.1, 2.2, 3.3]\nThis is needed because the IEEE-754 standard specifies that, \u201cEvery NaN shall compare unordered with everything, including itself.\u201d\nLikewise, None can be stripped from datasets as well:\n>>> data = [3.3, None, 1.1, 2.2]\n>>> sorted(x for x in data if x is not None)\n[1.1, 2.2, 3.3]\nThis is needed because None is not comparable to other types.\nConvert mapping types into sorted item lists before sorting:\n>>> data = [{'a': 1}, {'b': 2}]\n>>> sorted(data, key=lambda d: sorted(d.items()))\n[{'a': 1}, {'b': 2}]\nThis is needed because dict-to-dict comparisons raise a TypeError.\nConvert set types into sorted lists before sorting:\n>>> data = [{'a', 'b', 'c'}, {'b', 'c', 'd'}]\n>>> sorted(map(sorted, data))\n[['a', 'b', 'c'], ['b', 'c', 'd']]\nThis is needed because the elements contained in set types do not have a deterministic order. For example, list({'a', 'b'}) may produce either ['a', 'b'] or ['b', 'a'].\nOdds and Ends\u00b6\nFor locale-aware sorting, use locale.strxfrm() for a key function or locale.strcoll() for a comparison function. This is necessary because \u201calphabetical\u201d sort orderings can vary across cultures even if the underlying alphabet is the same.\nThe reverse parameter still maintains sort stability (so that records with equal keys retain the original order). Interestingly, that effect can be simulated without the parameter by using the builtin reversed() function twice:\n>>> data = [('red', 1), ('blue', 1), ('red', 2), ('blue', 2)]\n>>> standard_way = sorted(data, key=itemgetter(0), reverse=True)\n>>> double_reversed = list(reversed(sorted(reversed(data), key=itemgetter(0))))\n>>> assert standard_way == double_reversed\n>>> standard_way\n[('red', 1), ('red', 2), ('blue', 1), ('blue', 2)]\nThe sort routines use < when making comparisons between two objects. So, it is easy to add a standard sort order to a class by defining an __lt__() method:\n>>> Student.__lt__ = lambda self, other: self.age < other.age\n>>> sorted(student_objects)\n[('dave', 'B', 10), ('jane', 'B', 12), ('john', 'A', 15)]\nHowever, note that < can fall back to using __gt__() if __lt__() is not implemented (see object.__lt__() for details on the mechanics). To avoid surprises, PEP 8 recommends that all six comparison methods be implemented. The total_ordering() decorator is provided to make that task easier.\nKey functions need not depend directly on the objects being sorted. A key function can also access external resources. For instance, if the student grades are stored in a dictionary, they can be used to sort a separate list of student names:\n>>> students = ['dave', 'john', 'jane']\n>>> newgrades = {'john': 'F', 'jane': 'A', 'dave': 'C'}\n>>> sorted(students, key=newgrades.__getitem__)\n['jane', 'dave', 'john']\nPartial Sorts\u00b6\nSome applications require only some of the data to be ordered. The standard library provides several tools that do less work than a full sort:\nmin() and max() return the smallest and largest values, respectively. These functions make a single pass over the input data and require almost no auxiliary memory.\nheapq.nsmallest() and heapq.nlargest() return the n smallest and largest values, respectively. These functions make a single pass over the data keeping only n elements in memory at a time.
For values of n that are small relative to the number of inputs, these functions make far fewer comparisons than a full sort.\nheapq.heappush()\nand\nheapq.heappop()\ncreate and maintain a partially sorted arrangement of data that keeps the smallest element at position 0. These functions are suitable for implementing priority queues, which are commonly used for task scheduling.", "code_snippets": [" ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 3127} +{"url": "https://docs.python.org/3/library/asyncio-dev.html", "title": "Developing with asyncio", "content": "Developing with asyncio\u00b6\nAsynchronous programming is different from classic \u201csequential\u201d programming.\nThis page lists common mistakes and traps and explains how to avoid them.\nDebug Mode\u00b6\nBy default asyncio runs in production mode. To ease development, asyncio has a debug mode.\nThere are several ways to enable asyncio debug mode:\nSetting the\nPYTHONASYNCIODEBUG\nenvironment variable to 1.\nUsing the Python Development Mode.\nPassing\ndebug=True\nto\nasyncio.run()\n.\nCalling\nloop.set_debug()\n.\nIn addition to enabling the debug mode, consider also:\nsetting the log level of the asyncio logger to\nlogging.DEBUG\n, for example the following snippet of code can be run at startup of the application:\nlogging.basicConfig(level=logging.DEBUG)\nconfiguring the\nwarnings\nmodule to display\nResourceWarning\nwarnings. One way of doing that is by using the\n-W default\ncommand line option.\nWhen the debug mode is enabled:\nMany non-threadsafe asyncio APIs (such as the\nloop.call_soon()\nand\nloop.call_at()\nmethods) raise an exception if they are called from a wrong thread.\nThe execution time of the I/O selector is logged if it takes too long to perform an I/O operation.\nCallbacks taking longer than 100 milliseconds are logged. 
The\nloop.slow_callback_duration\nattribute can be used to set the minimum execution duration in seconds that is considered \u201cslow\u201d.\nConcurrency and Multithreading\u00b6\nAn event loop runs in a thread (typically the main thread) and executes\nall callbacks and Tasks in its thread. While a Task is running in the\nevent loop, no other Tasks can run in the same thread. When a Task\nexecutes an await\nexpression, the running Task gets suspended, and\nthe event loop executes the next Task.\nTo schedule a callback from another OS thread, the\nloop.call_soon_threadsafe()\nmethod should be used. Example:\nloop.call_soon_threadsafe(callback, *args)\nAlmost all asyncio objects are not thread safe, which is typically\nnot a problem unless there is code that works with them from outside\nof a Task or a callback. If there\u2019s a need for such code to call a\nlow-level asyncio API, the loop.call_soon_threadsafe()\nmethod\nshould be used, e.g.:\nloop.call_soon_threadsafe(fut.cancel)\nTo schedule a coroutine object from a different OS thread, the\nrun_coroutine_threadsafe()\nfunction should be used. It returns a\nconcurrent.futures.Future\nto access the result:\nasync def coro_func():\nreturn await asyncio.sleep(1, 42)\n# Later in another OS thread:\nfuture = asyncio.run_coroutine_threadsafe(coro_func(), loop)\n# Wait for the result:\nresult = future.result()\nTo handle signals the event loop must be run in the main thread.\nThe loop.run_in_executor()\nmethod can be used with a\nconcurrent.futures.ThreadPoolExecutor\nor\nInterpreterPoolExecutor\nto execute\nblocking code in a different OS thread without blocking the OS thread\nthat the event loop runs in.\nThere is currently no way to schedule coroutines or callbacks directly\nfrom a different process (such as one started with\nmultiprocessing\n). The Event Loop Methods\nsection lists APIs that can read from pipes and watch file descriptors\nwithout blocking the event loop. 
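The thread-safe scheduling calls described above can be sketched end-to-end. This is a minimal sketch: the helper names coro_func and start_loop are illustrative, not part of asyncio, and a real application would usually prefer asyncio.run() over managing the loop by hand:

```python
import asyncio
import threading

async def coro_func():
    # A short async computation; asyncio.sleep() returns its second argument.
    return await asyncio.sleep(0.1, 42)

def start_loop(loop):
    # Run the event loop inside this background thread.
    asyncio.set_event_loop(loop)
    loop.run_forever()

loop = asyncio.new_event_loop()
thread = threading.Thread(target=start_loop, args=(loop,), daemon=True)
thread.start()

# From a different OS thread, only the *_threadsafe APIs may be used.
# run_coroutine_threadsafe() returns a concurrent.futures.Future:
future = asyncio.run_coroutine_threadsafe(coro_func(), loop)
result = future.result(timeout=5)  # blocks this thread until the coroutine finishes
print(result)

# Shut the loop down thread-safely and wait for the thread to exit.
loop.call_soon_threadsafe(loop.stop)
thread.join()
```

Calling plain loop.call_soon() or loop.stop() from the main thread here would be exactly the kind of cross-thread misuse that debug mode flags.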
In addition, asyncio\u2019s\nSubprocess APIs provide a way to start a\nprocess and communicate with it from the event loop. Lastly, the\naforementioned loop.run_in_executor()\nmethod can also be used\nwith a concurrent.futures.ProcessPoolExecutor\nto execute\ncode in a different process.\nRunning Blocking Code\u00b6\nBlocking (CPU-bound) code should not be called directly. For example, if a function performs a CPU-intensive calculation for 1 second, all concurrent asyncio Tasks and IO operations would be delayed by 1 second.\nAn executor can be used to run a task in a different thread,\nincluding in a different interpreter, or even in\na different process to avoid blocking the OS thread with the\nevent loop. See the loop.run_in_executor()\nmethod for more\ndetails.\nLogging\u00b6\nasyncio uses the logging\nmodule and all logging is performed\nvia the \"asyncio\"\nlogger.\nThe default log level is logging.INFO\n, which can be easily\nadjusted:\nlogging.getLogger(\"asyncio\").setLevel(logging.WARNING)\nNetwork logging can block the event loop. It is recommended to use a separate thread for handling logs or use non-blocking IO. For example, see Dealing with handlers that block.\nDetect never-awaited coroutines\u00b6\nWhen a coroutine function is called, but not awaited\n(e.g. coro()\ninstead of await coro()\n)\nor the coroutine is not scheduled with asyncio.create_task()\n, asyncio\nwill emit a RuntimeWarning\n:\nimport asyncio\nasync def test():\nprint(\"never scheduled\")\nasync def main():\ntest()\nasyncio.run(main())\nOutput:\ntest.py:7: RuntimeWarning: coroutine 'test' was never awaited\ntest()\nOutput in debug mode:\ntest.py:7: RuntimeWarning: coroutine 'test' was never awaited\nCoroutine created at (most recent call last)\nFile \"../t.py\", line 9, in \nasyncio.run(main(), debug=True)\n< .. 
>\nFile \"../t.py\", line 7, in main\ntest()\ntest()\nThe usual fix is to either await the coroutine or call the\nasyncio.create_task()\nfunction:\nasync def main():\nawait test()\nDetect never-retrieved exceptions\u00b6\nIf a Future.set_exception()\nis called but the Future object is\nnever awaited on, the exception would never be propagated to the\nuser code. In this case, asyncio would emit a log message when the\nFuture object is garbage collected.\nExample of an unhandled exception:\nimport asyncio\nasync def bug():\nraise Exception(\"not consumed\")\nasync def main():\nasyncio.create_task(bug())\nasyncio.run(main())\nOutput:\nTask exception was never retrieved\nfuture: \nexception=Exception('not consumed')>\nTraceback (most recent call last):\nFile \"test.py\", line 4, in bug\nraise Exception(\"not consumed\")\nException: not consumed\nEnable the debug mode to get the traceback where the task was created:\nasyncio.run(main(), debug=True)\nOutput in debug mode:\nTask exception was never retrieved\nfuture: \nexception=Exception('not consumed') created at asyncio/tasks.py:321>\nsource_traceback: Object created at (most recent call last):\nFile \"../t.py\", line 9, in \nasyncio.run(main(), debug=True)\n< .. 
>\nTraceback (most recent call last):\nFile \"../t.py\", line 4, in bug\nraise Exception(\"not consumed\")\nException: not consumed", "code_snippets": ["\n", " ", "\n", "\n", " ", "\n ", " ", " ", " ", "\n\n", "\n\n", " ", " ", " ", "\n", "\n", " ", " ", "\n", "\n", "\n\n", " ", "\n ", "\n\n", " ", "\n ", "\n\n", "\n", " ", " ", " ", " ", " ", " ", "\n ", "\n", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", "\n ", " ", "\n\n ", " ", " ", "\n\n ", " ", " ", " ", " ", " ", "\n ", "\n ", "\n", " ", "\n ", " ", "\n", "\n\n", " ", "\n ", " ", "\n\n", " ", "\n ", "\n\n", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", " ", "\n ", "\n\n", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", "\n ", " ", "\n", " ", " ", "\n", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", "\n\n", " ", " ", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", "\n ", " ", "\n\n", " ", " ", "\n\n", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", "\n ", " ", "\n", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 1568} +{"url": "https://docs.python.org/3/c-api/none.html", "title": "The ", "content": "The None\nObject\u00b6\nNote that the PyTypeObject\nfor None\nis not directly exposed in the\nPython/C API. Since None\nis a singleton, testing for object identity (using\n==\nin C) is sufficient. There is no PyNone_Check()\nfunction for the\nsame reason.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 60} +{"url": "https://docs.python.org/3/library/asyncio-llapi-index.html", "title": "Low-level API Index", "content": "Low-level API Index\u00b6\nThis page lists all low-level asyncio APIs.\nObtaining the Event Loop\u00b6\nThe preferred function to get the running event loop. |\n|\nGet an event loop instance (running or current via the current policy). |\n|\nSet the event loop as current via the current policy. 
|\n|\nCreate a new event loop. |\nExamples\nEvent Loop Methods\u00b6\nSee also the main documentation section about the Event Loop Methods.\nLifecycle\nRun a Future/Task/awaitable until complete. |\n|\nRun the event loop forever. |\n|\nStop the event loop. |\n|\nClose the event loop. |\n|\nReturn |\n|\nReturn |\n|\nClose asynchronous generators. |\nDebugging\nEnable or disable the debug mode. |\n|\nGet the current debug mode. |\nScheduling Callbacks\nInvoke a callback soon. |\n|\nA thread-safe variant of |\n|\nInvoke a callback after the given time. |\n|\nInvoke a callback at the given time. |\nThread/Interpreter/Process Pool\n|\nRun a CPU-bound or other blocking function in\na |\nSet the default executor for |\nTasks and Futures\nCreate a |\n|\nSchedule coroutine as a |\n|\nSet a factory used by |\n|\nGet the factory |\nDNS\n|\nAsynchronous version of |\n|\nAsynchronous version of |\nNetworking and IPC\n|\nOpen a TCP connection. |\n|\nCreate a TCP server. |\nOpen a Unix socket connection. |\n|\nCreate a Unix socket server. |\n|\nWrap a |\n|\nOpen a datagram (UDP) connection. |\n|\n|\nSend a file over a transport. |\n|\nUpgrade an existing connection to TLS. |\n|\nWrap a read end of a pipe into a |\nWrap a write end of a pipe into a |\nSockets\n|\nReceive data from the |\n|\nReceive data from the |\n|\nReceive a datagram from the |\nReceive a datagram from the |\n|\n|\nSend data to the |\n|\nSend a datagram via the |\n|\nConnect the |\n|\nAccept a |\n|\nSend a file over the |\nStart watching a file descriptor for read availability. |\n|\nStop watching a file descriptor for read availability. |\n|\nStart watching a file descriptor for write availability. |\n|\nStop watching a file descriptor for write availability. |\nUnix Signals\nAdd a handler for a |\n|\nRemove a handler for a |\nSubprocesses\nSpawn a subprocess. |\n|\nSpawn a subprocess from a shell command. |\nError Handling\nCall the exception handler. |\n|\nSet a new exception handler. 
|\n|\nGet the current exception handler. |\n|\nThe default exception handler implementation. |\nExamples\nUsing\nloop.create_connection()\nto implement an echo-client.Using\nloop.create_connection()\nto connect a socket.\nTransports\u00b6\nAll transports implement the following methods:\nClose the transport. |\n|\nReturn |\n|\nRequest for information about the transport. |\n|\nSet a new protocol. |\n|\nReturn the current protocol. |\nTransports that can receive data (TCP and Unix connections,\npipes, etc). Returned from methods like\nloop.create_connection()\n, loop.create_unix_connection()\n,\nloop.connect_read_pipe()\n, etc:\nRead Transports\nReturn |\n|\nPause receiving. |\n|\nResume receiving. |\nTransports that can Send data (TCP and Unix connections,\npipes, etc). Returned from methods like\nloop.create_connection()\n, loop.create_unix_connection()\n,\nloop.connect_write_pipe()\n, etc:\nWrite Transports\nWrite data to the transport. |\n|\nWrite buffers to the transport. |\n|\nReturn |\n|\nClose and send EOF after flushing buffered data. |\n|\nClose the transport immediately. |\n|\nReturn the current size of the output buffer. |\n|\nReturn high and low water marks for write flow control. |\n|\nSet new high and low water marks for write flow control. |\nTransports returned by loop.create_datagram_endpoint()\n:\nDatagram Transports\nSend data to the remote peer. |\n|\nClose the transport immediately. |\nLow-level transport abstraction over subprocesses.\nReturned by loop.subprocess_exec()\nand\nloop.subprocess_shell()\n:\nSubprocess Transports\nReturn the subprocess process id. |\n|\nReturn the transport for the requested communication pipe (stdin, stdout, or stderr). |\n|\nReturn the subprocess return code. |\n|\nKill the subprocess. |\n|\nSend a signal to the subprocess. |\n|\nStop the subprocess. |\n|\nKill the subprocess and close all pipes. 
|\nProtocols\u00b6\nProtocol classes can implement the following callback methods:\n|\nCalled when a connection is made. |\n|\nCalled when the connection is lost or closed. |\n|\nCalled when the transport\u2019s buffer goes over the high water mark. |\n|\nCalled when the transport\u2019s buffer drains below the low water mark. |\nStreaming Protocols (TCP, Unix Sockets, Pipes)\n|\nCalled when some data is received. |\n|\nCalled when an EOF is received. |\nBuffered Streaming Protocols\n|\nCalled to allocate a new receive buffer. |\n|\nCalled when the buffer was updated with the received data. |\n|\nCalled when an EOF is received. |\nDatagram Protocols\n|\nCalled when a datagram is received. |\n|\nCalled when a previous send or receive operation raises an\n|\nSubprocess Protocols\n|\nCalled when the child process writes data into its stdout or stderr pipe. |\n|\nCalled when one of the pipes communicating with the child process is closed. |\n|\nCalled when the child process has exited. It can be called before\n|\nEvent Loop Policies\u00b6\nPolicies is a low-level mechanism to alter the behavior of\nfunctions like asyncio.get_event_loop()\n. See also\nthe main policies section for more\ndetails.\nAccessing Policies\nReturn the current process-wide policy. |\n|\nSet a new process-wide policy. |\n|\nBase class for policy objects. |", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1304} +{"url": "https://docs.python.org/3/howto/logging.html", "title": "Logging HOWTO", "content": "Logging HOWTO\u00b6\n- Author:\nVinay Sajip \nThis page contains tutorial information. For links to reference information and a logging cookbook, please see Other resources.\nBasic Logging Tutorial\u00b6\nLogging is a means of tracking events that happen when some software runs. The software\u2019s developer adds logging calls to their code to indicate that certain events have occurred. 
An event is described by a descriptive message which can optionally contain variable data (i.e. data that is potentially different for each occurrence of the event). Events also have an importance which the developer ascribes to the event; the importance can also be called the level or severity.\nWhen to use logging\u00b6\nYou can access logging functionality by creating a logger via logger =\ngetLogger(__name__)\n, and then calling the logger\u2019s debug()\n,\ninfo()\n, warning()\n, error()\nand\ncritical()\nmethods. To determine when to use logging, and to see\nwhich logger methods to use when, see the table below. It states, for each of a\nset of common tasks, the best tool to use for that task.\nTask you want to perform |\nThe best tool for the task |\n|---|---|\nDisplay console output for ordinary usage of a command line script or program |\n|\nReport events that occur during normal operation of a program (e.g. for status monitoring or fault investigation) |\nA logger\u2019s |\nIssue a warning regarding a particular runtime event |\nA logger\u2019s |\nReport an error regarding a particular runtime event |\nRaise an exception |\nReport suppression of an error without raising an exception (e.g. error handler in a long-running server process) |\nA logger\u2019s |\nThe logger methods are named after the level or severity of the events they are used to track. The standard levels and their applicability are described below (in increasing order of severity):\nLevel |\nWhen it\u2019s used |\n|---|---|\n|\nDetailed information, typically of interest only when diagnosing problems. |\n|\nConfirmation that things are working as expected. |\n|\nAn indication that something unexpected happened, or indicative of some problem in the near future (e.g. \u2018disk space low\u2019). The software is still working as expected. |\n|\nDue to a more serious problem, the software has not been able to perform some function. 
|\n|\nA serious error, indicating that the program itself may be unable to continue running. |\nThe default level is WARNING\n, which means that only events of this severity and higher\nwill be tracked, unless the logging package is configured to do otherwise.\nEvents that are tracked can be handled in different ways. The simplest way of handling tracked events is to print them to the console. Another common way is to write them to a disk file.\nA simple example\u00b6\nA very simple example is:\nimport logging\nlogging.warning('Watch out!') # will print a message to the console\nlogging.info('I told you so') # will not print anything\nIf you type these lines into a script and run it, you\u2019ll see:\nWARNING:root:Watch out!\nprinted out on the console. The INFO\nmessage doesn\u2019t appear because the\ndefault level is WARNING\n. The printed message includes the indication of the\nlevel and the description of the event provided in the logging call, i.e.\n\u2018Watch out!\u2019. The actual output can be formatted quite flexibly if you need\nthat; formatting options will also be explained later.\nNotice that in this example, we use functions directly on the logging\nmodule, like logging.debug\n, rather than creating a logger and calling\nfunctions on it. These functions operate on the root logger, but can be useful\nas they will call basicConfig()\nfor you if it has not been called yet, like in\nthis example. In larger programs you\u2019ll usually want to control the logging\nconfiguration explicitly however - so for that reason as well as others, it\u2019s\nbetter to create loggers and call their methods.\nLogging to a file\u00b6\nA very common situation is that of recording logging events in a file, so let\u2019s look at that next. 
Be sure to try the following in a newly started Python interpreter, and don\u2019t just continue from the session described above:\nimport logging\nlogger = logging.getLogger(__name__)\nlogging.basicConfig(filename='example.log', encoding='utf-8', level=logging.DEBUG)\nlogger.debug('This message should go to the log file')\nlogger.info('So should this')\nlogger.warning('And this, too')\nlogger.error('And non-ASCII stuff, too, like \u00d8resund and Malm\u00f6')\nChanged in version 3.9: The encoding argument was added. In earlier Python versions, or if not\nspecified, the encoding used is the default value used by open()\n. While\nnot shown in the above example, an errors argument can also now be passed,\nwhich determines how encoding errors are handled. For available values and\nthe default, see the documentation for open()\n.\nAnd now if we open the file and look at what we have, we should find the log messages:\nDEBUG:__main__:This message should go to the log file\nINFO:__main__:So should this\nWARNING:__main__:And this, too\nERROR:__main__:And non-ASCII stuff, too, like \u00d8resund and Malm\u00f6\nThis example also shows how you can set the logging level which acts as the\nthreshold for tracking. In this case, because we set the threshold to\nDEBUG\n, all of the messages were printed.\nIf you want to set the logging level from a command-line option such as:\n--log=INFO\nand you have the value of the parameter passed for --log\nin some variable\nloglevel, you can use:\ngetattr(logging, loglevel.upper())\nto get the value which you\u2019ll pass to basicConfig()\nvia the level\nargument. You may want to error check any user input value, perhaps as in the\nfollowing example:\n# assuming loglevel is bound to the string value obtained from the\n# command line argument. 
Convert to upper case to allow the user to\n# specify --log=DEBUG or --log=debug\nnumeric_level = getattr(logging, loglevel.upper(), None)\nif not isinstance(numeric_level, int):\nraise ValueError('Invalid log level: %s' % loglevel)\nlogging.basicConfig(level=numeric_level, ...)\nThe call to basicConfig()\nshould come before any calls to a logger\u2019s\nmethods such as debug()\n, info()\n, etc. Otherwise,\nthat logging event may not be handled in the desired manner.\nIf you run the above script several times, the messages from successive runs are appended to the file example.log. If you want each run to start afresh, not remembering the messages from earlier runs, you can specify the filemode argument, by changing the call in the above example to:\nlogging.basicConfig(filename='example.log', filemode='w', level=logging.DEBUG)\nThe output will be the same as before, but the log file is no longer appended to, so the messages from earlier runs are lost.\nLogging variable data\u00b6\nTo log variable data, use a format string for the event description message and append the variable data as arguments. For example:\nimport logging\nlogging.warning('%s before you %s', 'Look', 'leap!')\nwill display:\nWARNING:root:Look before you leap!\nAs you can see, merging of variable data into the event description message\nuses the old, %-style of string formatting. This is for backwards\ncompatibility: the logging package pre-dates newer formatting options such as\nstr.format()\nand string.Template\n. 
These newer formatting\noptions are supported, but exploring them is outside the scope of this\ntutorial: see Using particular formatting styles throughout your application for more information.\nChanging the format of displayed messages\u00b6\nTo change the format which is used to display messages, you need to specify the format you want to use:\nimport logging\nlogging.basicConfig(format='%(levelname)s:%(message)s', level=logging.DEBUG)\nlogging.debug('This message should appear on the console')\nlogging.info('So should this')\nlogging.warning('And this, too')\nwhich would print:\nDEBUG:This message should appear on the console\nINFO:So should this\nWARNING:And this, too\nNotice that the \u2018root\u2019 which appeared in earlier examples has disappeared. For a full set of things that can appear in format strings, you can refer to the documentation for LogRecord attributes, but for simple usage, you just need the levelname (severity), message (event description, including variable data) and perhaps to display when the event occurred. This is described in the next section.\nDisplaying the date/time in messages\u00b6\nTo display the date and time of an event, you would place \u2018%(asctime)s\u2019 in your format string:\nimport logging\nlogging.basicConfig(format='%(asctime)s %(message)s')\nlogging.warning('is when this event was logged.')\nwhich should print something like this:\n2010-12-12 11:41:42,612 is when this event was logged.\nThe default format for date/time display (shown above) is like ISO8601 or\nRFC 3339. 
If you need more control over the formatting of the date/time, provide\na datefmt argument to basicConfig\n, as in this example:\nimport logging\nlogging.basicConfig(format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')\nlogging.warning('is when this event was logged.')\nwhich would display something like this:\n12/12/2010 11:46:36 AM is when this event was logged.\nThe format of the datefmt argument is the same as supported by\ntime.strftime()\n.\nNext Steps\u00b6\nThat concludes the basic tutorial. It should be enough to get you up and running with logging. There\u2019s a lot more that the logging package offers, but to get the best out of it, you\u2019ll need to invest a little more of your time in reading the following sections. If you\u2019re ready for that, grab some of your favourite beverage and carry on.\nIf your logging needs are simple, then use the above examples to incorporate logging into your own scripts, and if you run into problems or don\u2019t understand something, please post a question in the Help category of the Python discussion forum and you should receive help before too long.\nStill here? You can carry on reading the next few sections, which provide a slightly more advanced/in-depth tutorial than the basic one above. 
After that, you can take a look at the Logging Cookbook.\nAdvanced Logging Tutorial\u00b6\nThe logging library takes a modular approach and offers several categories of components: loggers, handlers, filters, and formatters.\nLoggers expose the interface that application code directly uses.\nHandlers send the log records (created by loggers) to the appropriate destination.\nFilters provide a finer grained facility for determining which log records to output.\nFormatters specify the layout of log records in the final output.\nLog event information is passed between loggers, handlers, filters and\nformatters in a LogRecord\ninstance.\nLogging is performed by calling methods on instances of the Logger\nclass (hereafter called loggers). Each instance has a name, and they are\nconceptually arranged in a namespace hierarchy using dots (periods) as\nseparators. For example, a logger named \u2018scan\u2019 is the parent of loggers\n\u2018scan.text\u2019, \u2018scan.html\u2019 and \u2018scan.pdf\u2019. Logger names can be anything you want,\nand indicate the area of an application in which a logged message originates.\nA good convention to use when naming loggers is to use a module-level logger, in each module which uses logging, named as follows:\nlogger = logging.getLogger(__name__)\nThis means that logger names track the package/module hierarchy, and it\u2019s intuitively obvious where events are logged just from the logger name.\nThe root of the hierarchy of loggers is called the root logger. That\u2019s the\nlogger used by the functions debug()\n, info()\n, warning()\n,\nerror()\nand critical()\n, which just call the same-named method of\nthe root logger. The functions and the methods have the same signatures. The\nroot logger\u2019s name is printed as \u2018root\u2019 in the logged output.\nIt is, of course, possible to log messages to different destinations. 
Support is included in the package for writing log messages to files, HTTP GET/POST locations, email via SMTP, generic sockets, queues, or OS-specific logging mechanisms such as syslog or the Windows NT event log. Destinations are served by handler classes. You can create your own log destination class if you have special requirements not met by any of the built-in handler classes.\nBy default, no destination is set for any logging messages. You can specify\na destination (such as console or file) by using basicConfig()\nas in the\ntutorial examples. If you call the functions debug()\n, info()\n,\nwarning()\n, error()\nand critical()\n, they will check to see\nif no destination is set; and if one is not set, they will set a destination\nof the console (sys.stderr\n) and a default format for the displayed\nmessage before delegating to the root logger to do the actual message output.\nThe default format set by basicConfig()\nfor messages is:\nseverity:logger name:message\nYou can change this by passing a format string to basicConfig()\nwith the\nformat keyword argument. For all options regarding how a format string is\nconstructed, see Formatter Objects.\nLogging Flow\u00b6\nThe flow of log event information in loggers and handlers is illustrated in the following diagram.\nLoggers\u00b6\nLogger\nobjects have a threefold job. First, they expose several\nmethods to application code so that applications can log messages at runtime.\nSecond, logger objects determine which log messages to act upon based upon\nseverity (the default filtering facility) or filter objects. 
Third, logger\nobjects pass along relevant log messages to all interested log handlers.\nThe most widely used methods on logger objects fall into two categories: configuration and message sending.\nThese are the most common configuration methods:\nLogger.setLevel()\nspecifies the lowest-severity log message a logger will handle, where debug is the lowest built-in severity level and critical is the highest built-in severity. For example, if the severity level is INFO, the logger will handle only INFO, WARNING, ERROR, and CRITICAL messages and will ignore DEBUG messages.\nLogger.addHandler()\nand\nLogger.removeHandler()\nadd and remove handler objects from the logger object. Handlers are covered in more detail in Handlers.\nLogger.addFilter()\nand\nLogger.removeFilter()\nadd and remove filter objects from the logger object. Filters are covered in more detail in Filter Objects.\nYou don\u2019t always need to call these methods on every logger you create. See the last two paragraphs in this section.\nWith the logger object configured, the following methods create log messages:\nLogger.debug()\n,\nLogger.info()\n,\nLogger.warning()\n,\nLogger.error()\n, and\nLogger.critical()\nall create log records with a message and a level that corresponds to their respective method names. The message is actually a format string, which may contain the standard string substitution syntax of\n%s\n,\n%d\n,\n%f\n, and so on. The remaining arguments are a list of objects that correspond with the substitution fields in the message. With regard to\n**kwargs\n, the logging methods care only about a keyword of\nexc_info\nand use it to determine whether to log exception information.\nLogger.exception()\ncreates a log message similar to\nLogger.error()\n. The difference is that\nLogger.exception()\ndumps a stack trace along with it. Call this method only from an exception handler.\nLogger.log()\ntakes a log level as an explicit argument. 
This is a little more verbose for logging messages than using the log level convenience methods listed above, but this is how to log at custom log levels.\ngetLogger()\nreturns a reference to a logger instance with the specified\nname if it is provided, or root\nif not. The names are period-separated\nhierarchical structures. Multiple calls to getLogger()\nwith the same name\nwill return a reference to the same logger object. Loggers that are further\ndown in the hierarchical list are children of loggers higher up in the list.\nFor example, given a logger with a name of foo\n, loggers with names of\nfoo.bar\n, foo.bar.baz\n, and foo.bam\nare all descendants of foo\n.\nLoggers have a concept of effective level. If a level is not explicitly set\non a logger, the level of its parent is used instead as its effective level.\nIf the parent has no explicit level set, its parent is examined, and so on -\nall ancestors are searched until an explicitly set level is found. The root\nlogger always has an explicit level set (WARNING\nby default). When deciding\nwhether to process an event, the effective level of the logger is used to\ndetermine whether the event is passed to the logger\u2019s handlers.\nChild loggers propagate messages up to the handlers associated with their\nancestor loggers. Because of this, it is unnecessary to define and configure\nhandlers for all the loggers an application uses. It is sufficient to\nconfigure handlers for a top-level logger and create child loggers as needed.\n(You can, however, turn off propagation by setting the propagate\nattribute of a logger to False\n.)\nHandlers\u00b6\nHandler\nobjects are responsible for dispatching the\nappropriate log messages (based on the log messages\u2019 severity) to the handler\u2019s\nspecified destination. Logger\nobjects can add zero or more handler\nobjects to themselves with an addHandler()\nmethod. 
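Attaching several handlers with different thresholds via addHandler() can be sketched as follows; the logger name 'app' and the file name 'app.log' are illustrative choices, not anything the tutorial prescribes:

```python
import logging
import sys

logger = logging.getLogger('app')      # illustrative logger name
logger.setLevel(logging.DEBUG)         # the logger passes everything on

# Handler 1: every record goes to a file.
file_handler = logging.FileHandler('app.log')
file_handler.setLevel(logging.DEBUG)

# Handler 2: only ERROR and above reach the console.
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setLevel(logging.ERROR)

logger.addHandler(file_handler)
logger.addHandler(console_handler)

logger.debug('recorded in the file only')
logger.error('recorded in the file and echoed to stdout')
```

The logger-level threshold decides what reaches the handlers at all; each handler's own level then decides what that handler actually emits.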
As an example\nscenario, an application may want to send all log messages to a log file, all\nlog messages of error or higher to stdout, and all messages of critical to an\nemail address. This scenario requires three individual handlers where each\nhandler is responsible for sending messages of a specific severity to a specific\nlocation.\nThe standard library includes quite a few handler types (see\nUseful Handlers); the tutorials use mainly StreamHandler\nand\nFileHandler\nin its examples.\nThere are very few methods in a handler for application developers to concern themselves with. The only handler methods that seem relevant for application developers who are using the built-in handler objects (that is, not creating custom handlers) are the following configuration methods:\nThe\nsetLevel()\nmethod, just as in logger objects, specifies the lowest severity that will be dispatched to the appropriate destination. Why are there twosetLevel()\nmethods? The level set in the logger determines which severity of messages it will pass to its handlers. The level set in each handler determines which messages that handler will send on.setFormatter()\nselects a Formatter object for this handler to use.addFilter()\nandremoveFilter()\nrespectively configure and deconfigure filter objects on handlers.\nApplication code should not directly instantiate and use instances of\nHandler\n. Instead, the Handler\nclass is a base class that\ndefines the interface that all handlers should have and establishes some\ndefault behavior that child classes can use (or override).\nFormatters\u00b6\nFormatter objects configure the final order, structure, and contents of the log\nmessage. Unlike the base logging.Handler\nclass, application code may\ninstantiate formatter classes, although you could likely subclass the formatter\nif your application needs special behavior. 
The constructor takes three\noptional arguments \u2013 a message format string, a date format string and a style\nindicator.\n- logging.Formatter.__init__(fmt=None, datefmt=None, style='%')\u00b6\nIf there is no message format string, the default is to use the raw message. If there is no date format string, the default date format is:\n%Y-%m-%d %H:%M:%S\nwith the milliseconds tacked on at the end. The style\nis one of '%'\n,\n'{'\n, or '$'\n. If one of these is not specified, then '%'\nwill be used.\nIf the style\nis '%'\n, the message format string uses\n%()s\nstyled string substitution; the possible keys are\ndocumented in LogRecord attributes. If the style is '{'\n, the message\nformat string is assumed to be compatible with str.format()\n(using\nkeyword arguments), while if the style is '$'\nthen the message format string\nshould conform to what is expected by string.Template.substitute()\n.\nChanged in version 3.2: Added the style\nparameter.\nThe following message format string will log the time in a human-readable format, the severity of the message, and the contents of the message, in that order:\n'%(asctime)s - %(levelname)s - %(message)s'\nFormatters use a user-configurable function to convert the creation time of a\nrecord to a tuple. By default, time.localtime()\nis used; to change this\nfor a particular formatter instance, set the converter\nattribute of the\ninstance to a function with the same signature as time.localtime()\nor\ntime.gmtime()\n. 
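For instance, the same layout can be expressed in '%' or '{' style, and the converter attribute switches an individual formatter instance to UTC timestamps. A sketch (the LogRecord fields below are fabricated purely to exercise the formatters):

```python
import logging
import time

# Equivalent format strings in '%' and '{' styles.
percent_fmt = logging.Formatter("%(levelname)s - %(message)s")
brace_fmt = logging.Formatter("{levelname} - {message}", style="{")

# Build a record by hand just for demonstration; normally the logging
# machinery creates these for you.
record = logging.LogRecord("demo", logging.WARNING, __file__, 1,
                           "disk %s is full", ("sda1",), None)

print(percent_fmt.format(record))   # WARNING - disk sda1 is full
print(brace_fmt.format(record))     # WARNING - disk sda1 is full

# Show times in GMT/UTC for this formatter instance only.
utc_fmt = logging.Formatter("%(asctime)s %(message)s")
utc_fmt.converter = time.gmtime
```

Note that regardless of the style chosen, the message itself ("disk %s is full" with its arguments) is still merged with %-formatting; the style only governs the format string passed to the Formatter.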
To change it for all formatters, for example if you want\nall logging times to be shown in GMT, set the converter\nattribute in the\nFormatter class (to time.gmtime\nfor GMT display).\nConfiguring Logging\u00b6\nProgrammers can configure logging in three ways:\nCreating loggers, handlers, and formatters explicitly using Python code that calls the configuration methods listed above.\nCreating a logging config file and reading it using the\nfileConfig()\nfunction.Creating a dictionary of configuration information and passing it to the\ndictConfig()\nfunction.\nFor the reference documentation on the last two options, see Configuration functions. The following example configures a very simple logger, a console handler, and a simple formatter using Python code:\nimport logging\n# create logger\nlogger = logging.getLogger('simple_example')\nlogger.setLevel(logging.DEBUG)\n# create console handler and set level to debug\nch = logging.StreamHandler()\nch.setLevel(logging.DEBUG)\n# create formatter\nformatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')\n# add formatter to ch\nch.setFormatter(formatter)\n# add ch to logger\nlogger.addHandler(ch)\n# 'application' code\nlogger.debug('debug message')\nlogger.info('info message')\nlogger.warning('warn message')\nlogger.error('error message')\nlogger.critical('critical message')\nRunning this module from the command line produces the following output:\n$ python simple_logging_module.py\n2005-03-19 15:10:26,618 - simple_example - DEBUG - debug message\n2005-03-19 15:10:26,620 - simple_example - INFO - info message\n2005-03-19 15:10:26,695 - simple_example - WARNING - warn message\n2005-03-19 15:10:26,697 - simple_example - ERROR - error message\n2005-03-19 15:10:26,773 - simple_example - CRITICAL - critical message\nThe following Python module creates a logger, handler, and formatter nearly identical to those in the example listed above, with the only difference being the names of the 
objects:\nimport logging\nimport logging.config\nlogging.config.fileConfig('logging.conf')\n# create logger\nlogger = logging.getLogger('simpleExample')\n# 'application' code\nlogger.debug('debug message')\nlogger.info('info message')\nlogger.warning('warn message')\nlogger.error('error message')\nlogger.critical('critical message')\nHere is the logging.conf file:\n[loggers]\nkeys=root,simpleExample\n[handlers]\nkeys=consoleHandler\n[formatters]\nkeys=simpleFormatter\n[logger_root]\nlevel=DEBUG\nhandlers=consoleHandler\n[logger_simpleExample]\nlevel=DEBUG\nhandlers=consoleHandler\nqualname=simpleExample\npropagate=0\n[handler_consoleHandler]\nclass=StreamHandler\nlevel=DEBUG\nformatter=simpleFormatter\nargs=(sys.stdout,)\n[formatter_simpleFormatter]\nformat=%(asctime)s - %(name)s - %(levelname)s - %(message)s\nThe output is nearly identical to that of the non-config-file-based example:\n$ python simple_logging_config.py\n2005-03-19 15:38:55,977 - simpleExample - DEBUG - debug message\n2005-03-19 15:38:55,979 - simpleExample - INFO - info message\n2005-03-19 15:38:56,054 - simpleExample - WARNING - warn message\n2005-03-19 15:38:56,055 - simpleExample - ERROR - error message\n2005-03-19 15:38:56,130 - simpleExample - CRITICAL - critical message\nYou can see that the config file approach has a few advantages over the Python code approach, mainly separation of configuration and code and the ability of noncoders to easily modify the logging properties.\nWarning\nThe fileConfig()\nfunction takes a default parameter,\ndisable_existing_loggers\n, which defaults to True\nfor reasons of\nbackward compatibility. This may or may not be what you want, since it\nwill cause any non-root loggers existing before the fileConfig()\ncall to be disabled unless they (or an ancestor) are explicitly named in\nthe configuration. 
Please refer to the reference documentation for more\ninformation, and specify False\nfor this parameter if you wish.\nThe dictionary passed to dictConfig()\ncan also specify a Boolean\nvalue with key disable_existing_loggers\n, which if not specified\nexplicitly in the dictionary also defaults to being interpreted as\nTrue\n. This leads to the logger-disabling behaviour described above,\nwhich may not be what you want - in which case, provide the key\nexplicitly with a value of False\n.\nNote that the class names referenced in config files need to be either relative\nto the logging module, or absolute values which can be resolved using normal\nimport mechanisms. Thus, you could use either\nWatchedFileHandler\n(relative to the logging module) or\nmypackage.mymodule.MyHandler\n(for a class defined in package mypackage\nand module mymodule\n, where mypackage\nis available on the Python import\npath).\nIn Python 3.2, a new means of configuring logging has been introduced, using dictionaries to hold configuration information. This provides a superset of the functionality of the config-file-based approach outlined above, and is the recommended configuration method for new applications and deployments. Because a Python dictionary is used to hold configuration information, and since you can populate that dictionary using different means, you have more options for configuration. For example, you can use a configuration file in JSON format, or, if you have access to YAML processing functionality, a file in YAML format, to populate the configuration dictionary. 
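As a sketch of that approach, the file-based configuration shown earlier can be expressed as a plain dictionary and handed to dictConfig() (the names "simpleExample", "consoleHandler", and "simpleFormatter" simply mirror the earlier example):

```python
import logging
import logging.config

config = {
    "version": 1,
    "formatters": {
        "simpleFormatter": {
            "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
        },
    },
    "handlers": {
        "consoleHandler": {
            "class": "logging.StreamHandler",
            "level": "DEBUG",
            "formatter": "simpleFormatter",
            "stream": "ext://sys.stdout",
        },
    },
    "loggers": {
        "simpleExample": {
            "level": "DEBUG",
            "handlers": ["consoleHandler"],
            "propagate": False,
        },
    },
    "root": {"level": "DEBUG", "handlers": ["consoleHandler"]},
}

logging.config.dictConfig(config)
logging.getLogger("simpleExample").debug("debug message")
```

Because the configuration is just data, it can equally well be loaded from JSON or YAML before being passed to dictConfig().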
Or, of course, you can construct the dictionary in Python code, receive it in pickled form over a socket, or use whatever approach makes sense for your application.\nHere\u2019s an example of the same configuration as above, in YAML format for the new dictionary-based approach:\nversion: 1\nformatters:\nsimple:\nformat: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'\nhandlers:\nconsole:\nclass: logging.StreamHandler\nlevel: DEBUG\nformatter: simple\nstream: ext://sys.stdout\nloggers:\nsimpleExample:\nlevel: DEBUG\nhandlers: [console]\npropagate: no\nroot:\nlevel: DEBUG\nhandlers: [console]\nFor more information about logging using a dictionary, see Configuration functions.\nWhat happens if no configuration is provided\u00b6\nIf no logging configuration is provided, it is possible to have a situation where a logging event needs to be output, but no handlers can be found to output the event.\nThe event is output using a \u2018handler of last resort\u2019, stored in\nlastResort\n. This internal handler is not associated with any\nlogger, and acts like a StreamHandler\nwhich writes the\nevent description message to the current value of sys.stderr\n(therefore\nrespecting any redirections which may be in effect). 
No formatting is\ndone on the message - just the bare event description message is printed.\nThe handler\u2019s level is set to WARNING\n, so all events at this and\ngreater severities will be output.\nChanged in version 3.2: For versions of Python prior to 3.2, the behaviour is as follows:\nIf\nraiseExceptions\nisFalse\n(production mode), the event is silently dropped.If\nraiseExceptions\nisTrue\n(development mode), a message \u2018No handlers could be found for logger X.Y.Z\u2019 is printed once.\nTo obtain the pre-3.2 behaviour,\nlastResort\ncan be set to None\n.\nConfiguring Logging for a Library\u00b6\nWhen developing a library which uses logging, you should take care to\ndocument how the library uses logging - for example, the names of loggers\nused. Some consideration also needs to be given to its logging configuration.\nIf the using application does not use logging, and library code makes logging\ncalls, then (as described in the previous section) events of severity\nWARNING\nand greater will be printed to sys.stderr\n. This is regarded as\nthe best default behaviour.\nIf for some reason you don\u2019t want these messages printed in the absence of any logging configuration, you can attach a do-nothing handler to the top-level logger for your library. This avoids the message being printed, since a handler will always be found for the library\u2019s events: it just doesn\u2019t produce any output. If the library user configures logging for application use, presumably that configuration will add some handlers, and if levels are suitably configured then logging calls made in library code will send output to those handlers, as normal.\nA do-nothing handler is included in the logging package:\nNullHandler\n(since Python 3.1). 
An instance of this handler\ncould be added to the top-level logger of the logging namespace used by the\nlibrary (if you want to prevent your library\u2019s logged events being output to\nsys.stderr\nin the absence of logging configuration). If all logging by a\nlibrary foo is done using loggers with names matching \u2018foo.x\u2019, \u2018foo.x.y\u2019,\netc. then the code:\nimport logging\nlogging.getLogger('foo').addHandler(logging.NullHandler())\nshould have the desired effect. If an organisation produces a number of libraries, then the logger name specified can be \u2018orgname.foo\u2019 rather than just \u2018foo\u2019.\nNote\nIt is strongly advised that you do not log to the root logger\nin your library. Instead, use a logger with a unique and easily\nidentifiable name, such as the __name__\nfor your library\u2019s top-level package\nor module. Logging to the root logger will make it difficult or impossible for\nthe application developer to configure the logging verbosity or handlers of\nyour library as they wish.\nNote\nIt is strongly advised that you do not add any handlers other\nthan NullHandler\nto your library\u2019s loggers. This is\nbecause the configuration of handlers is the prerogative of the application\ndeveloper who uses your library. The application developer knows their\ntarget audience and what handlers are most appropriate for their\napplication: if you add handlers \u2018under the hood\u2019, you might well interfere\nwith their ability to carry out unit tests and deliver logs which suit their\nrequirements.\nLogging Levels\u00b6\nThe numeric values of logging levels are given in the following table. These are primarily of interest if you want to define your own levels, and need them to have specific values relative to the predefined levels. 
If you define a level with the same numeric value, it overwrites the predefined value; the predefined name is lost.\nLevel | Numeric value\nCRITICAL | 50\nERROR | 40\nWARNING | 30\nINFO | 20\nDEBUG | 10\nNOTSET | 0\nLevels can also be associated with loggers, being set either by the developer or through loading a saved logging configuration. When a logging method is called on a logger, the logger compares its own level with the level associated with the method call. If the logger\u2019s level is higher than the method call\u2019s, no logging message is actually generated. This is the basic mechanism controlling the verbosity of logging output.\nLogging messages are encoded as instances of the LogRecord class. When a logger decides to actually log an event, a LogRecord instance is created from the logging message.\nLogging messages are subjected to a dispatch mechanism through the use of handlers, which are instances of subclasses of the Handler class. Handlers are responsible for ensuring that a logged message (in the form of a LogRecord) ends up in a particular location (or set of locations) which is useful for the target audience for that message (such as end users, support desk staff, system administrators, developers). Handlers are passed LogRecord instances intended for particular destinations. Each logger can have zero, one or more handlers associated with it (via the addHandler() method of Logger). In addition to any handlers directly associated with a logger, all handlers associated with all ancestors of the logger are called to dispatch the message (unless the propagate flag for a logger is set to a false value, at which point the passing to ancestor handlers stops).\nJust as for loggers, handlers can have levels associated with them. A handler\u2019s level acts as a filter in the same way as a logger\u2019s level does. If a handler decides to actually dispatch an event, the emit() method is used to send the message to its destination.
Most user-defined subclasses of Handler will need to override this emit().\nCustom Levels\u00b6\nDefining your own levels is possible, but should not be necessary, as the existing levels have been chosen on the basis of practical experience. However, if you are convinced that you need custom levels, great care should be exercised when doing this, and it is possibly a very bad idea to define custom levels if you are developing a library. That\u2019s because if multiple library authors all define their own custom levels, there is a chance that the logging output from such multiple libraries used together will be difficult for the using developer to control and/or interpret, because a given numeric value might mean different things for different libraries.\nUseful Handlers\u00b6\nIn addition to the base Handler class, many useful subclasses are provided:\n- StreamHandler instances send messages to streams (file-like objects).\n- FileHandler instances send messages to disk files.\n- BaseRotatingHandler is the base class for handlers that rotate log files at a certain point. It is not meant to be instantiated directly. Instead, use RotatingFileHandler or TimedRotatingFileHandler.\n- RotatingFileHandler instances send messages to disk files, with support for maximum log file sizes and log file rotation.\n- TimedRotatingFileHandler instances send messages to disk files, rotating the log file at certain timed intervals.\n- SocketHandler instances send messages to TCP/IP sockets. Since 3.4, Unix domain sockets are also supported.\n- DatagramHandler instances send messages to UDP sockets. Since 3.4, Unix domain sockets are also supported.\n- SMTPHandler instances send messages to a designated email address.\n- SysLogHandler instances send messages to a Unix syslog daemon, possibly on a remote machine.\n- NTEventLogHandler instances send messages to a Windows NT/2000/XP event log.\n- MemoryHandler instances send messages to a buffer in memory, which is flushed whenever specific criteria are met.\n- HTTPHandler instances send messages to an HTTP server using either GET or POST semantics.\n- WatchedFileHandler instances watch the file they are logging to. If the file changes, it is closed and reopened using the file name. This handler is only useful on Unix-like systems; Windows does not support the underlying mechanism used.\n- QueueHandler instances send messages to a queue, such as those implemented in the queue or multiprocessing modules.\n- NullHandler instances do nothing with error messages. They are used by library developers who want to use logging, but want to avoid the \u2018No handlers could be found for logger XXX\u2019 message which can be displayed if the library user has not configured logging. See Configuring Logging for a Library for more information.\nAdded in version 3.1: The NullHandler class.\nAdded in version 3.2: The QueueHandler class.\nThe NullHandler, StreamHandler and FileHandler classes are defined in the core logging package. The other handlers are defined in a sub-module, logging.handlers. (There is also another sub-module, logging.config, for configuration functionality.)\nLogged messages are formatted for presentation through instances of the Formatter class. They are initialized with a format string suitable for use with the % operator and a dictionary.\nFor formatting multiple messages in a batch, instances of BufferingFormatter can be used.
In addition to the format\nstring (which is applied to each message in the batch), there is provision for\nheader and trailer format strings.\nWhen filtering based on logger level and/or handler level is not enough,\ninstances of Filter\ncan be added to both Logger\nand\nHandler\ninstances (through their addFilter()\nmethod).\nBefore deciding to process a message further, both loggers and handlers consult\nall their filters for permission. If any filter returns a false value, the\nmessage is not processed further.\nThe basic Filter\nfunctionality allows filtering by specific logger\nname. If this feature is used, messages sent to the named logger and its\nchildren are allowed through the filter, and all others dropped.\nExceptions raised during logging\u00b6\nThe logging package is designed to swallow exceptions which occur while logging in production. This is so that errors which occur while handling logging events - such as logging misconfiguration, network or other similar errors - do not cause the application using logging to terminate prematurely.\nSystemExit\nand KeyboardInterrupt\nexceptions are never\nswallowed. Other exceptions which occur during the emit()\nmethod\nof a Handler\nsubclass are passed to its handleError()\nmethod.\nThe default implementation of handleError()\nin Handler\nchecks to see if a module-level variable, raiseExceptions\n, is set. If\nset, a traceback is printed to sys.stderr\n. If not set, the exception is\nswallowed.\nNote\nThe default value of raiseExceptions\nis True\n. This is\nbecause during development, you typically want to be notified of any\nexceptions that occur. It\u2019s advised that you set raiseExceptions\nto\nFalse\nfor production usage.\nUsing arbitrary objects as messages\u00b6\nIn the preceding sections and examples, it has been assumed that the message\npassed when logging the event is a string. However, this is not the only\npossibility. 
You can pass an arbitrary object as a message, and its\n__str__()\nmethod will be called when the logging system needs to\nconvert it to a string representation. In fact, if you want to, you can avoid\ncomputing a string representation altogether - for example, the\nSocketHandler\nemits an event by pickling it and sending it\nover the wire.\nOptimization\u00b6\nFormatting of message arguments is deferred until it cannot be avoided.\nHowever, computing the arguments passed to the logging method can also be\nexpensive, and you may want to avoid doing it if the logger will just throw\naway your event. To decide what to do, you can call the\nisEnabledFor()\nmethod which takes a level argument and returns\ntrue if the event would be created by the Logger for that level of call.\nYou can write code like this:\nif logger.isEnabledFor(logging.DEBUG):\nlogger.debug('Message with %s, %s', expensive_func1(),\nexpensive_func2())\nso that if the logger\u2019s threshold is set above DEBUG\n, the calls to\nexpensive_func1\nand expensive_func2\nare never made.\nNote\nIn some cases, isEnabledFor()\ncan itself be more\nexpensive than you\u2019d like (e.g. for deeply nested loggers where an explicit\nlevel is only set high up in the logger hierarchy). In such cases (or if you\nwant to avoid calling a method in tight loops), you can cache the result of a\ncall to isEnabledFor()\nin a local or instance variable, and use\nthat instead of calling the method each time. Such a cached value would only\nneed to be recomputed when the logging configuration changes dynamically\nwhile the application is running (which is not all that common).\nThere are other optimizations which can be made for specific applications which need more precise control over what logging information is collected. 
Here\u2019s a list of things you can do to avoid processing during logging which you don\u2019t need:\nWhat you don\u2019t want to collect | How to avoid collecting it\nInformation about where calls were made from. | Set logging._srcfile to None. This avoids calling sys._getframe().\nThreading information. | Set logging.logThreads to False.\nCurrent process ID (os.getpid()). | Set logging.logProcesses to False.\nCurrent process name when using multiprocessing to manage multiple processes. | Set logging.logMultiprocessing to False.\nCurrent asyncio.Task name when using asyncio. | Set logging.logAsyncioTasks to False.\nAlso note that the core logging module only includes the basic handlers. If you don\u2019t import logging.handlers and logging.config, they won\u2019t take up any memory.\nOther resources\u00b6\nSee also\n- Module\nlogging\nAPI reference for the logging module.\n- Module\nlogging.config\nConfiguration API for the logging module.\n- Module\nlogging.handlers\nUseful handlers included with the logging module.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 9919}
{"url": "https://docs.python.org/3/whatsnew/3.6.html", "title": "What\u2019s New In Python 3.6", "content": "What\u2019s New In Python 3.6\u00b6\n- Editors:\nElvis Pranskevichus , Yury Selivanov \nThis article explains the new features in Python 3.6, compared to 3.5. Python 3.6 was released on December 23, 2016.
See the changelog for a full list of changes.\nSee also\nPEP 494 - Python 3.6 Release Schedule\nSummary \u2013 Release highlights\u00b6\nNew syntax features:\nPEP 498, formatted string literals.\nPEP 515, underscores in numeric literals.\nPEP 526, syntax for variable annotations.\nPEP 525, asynchronous generators.\nPEP 530: asynchronous comprehensions.\nNew library modules:\nCPython implementation improvements:\nThe dict type has been reimplemented to use a more compact representation based on a proposal by Raymond Hettinger and similar to the PyPy dict implementation. This resulted in dictionaries using 20% to 25% less memory when compared to Python 3.5.\nCustomization of class creation has been simplified with the new protocol.\nThe class attribute definition order is now preserved.\nThe order of elements in\n**kwargs\nnow corresponds to the order in which keyword arguments were passed to the function.DTrace and SystemTap probing support has been added.\nThe new PYTHONMALLOC environment variable can now be used to debug the interpreter memory allocation and access errors.\nSignificant improvements in the standard library:\nThe\nasyncio\nmodule has received new features, significant usability and performance improvements, and a fair amount of bug fixes. Starting with Python 3.6 theasyncio\nmodule is no longer provisional and its API is considered stable.A new file system path protocol has been implemented to support path-like objects. All standard library functions operating on paths have been updated to work with the new protocol.\nThe\ndatetime\nmodule has gained support for Local Time Disambiguation.The\ntyping\nmodule received a number of improvements.The\ntracemalloc\nmodule has been significantly reworked and is now used to provide better output forResourceWarning\nas well as provide better diagnostics for memory allocation errors. 
See the PYTHONMALLOC section for more information.\nSecurity improvements:\nThe new\nsecrets\nmodule has been added to simplify the generation of cryptographically strong pseudo-random numbers suitable for managing secrets such as account authentication, tokens, and similar.On Linux,\nos.urandom()\nnow blocks until the system urandom entropy pool is initialized to increase the security. See the PEP 524 for the rationale.The default settings and feature set of the\nssl\nmodule have been improved.The\nhashlib\nmodule received support for the BLAKE2, SHA-3 and SHAKE hash algorithms and thescrypt()\nkey derivation function.\nWindows improvements:\nPEP 528 and PEP 529, Windows filesystem and console encoding changed to UTF-8.\nThe\npy.exe\nlauncher, when used interactively, no longer prefers Python 2 over Python 3 when the user doesn\u2019t specify a version (via command line arguments or a config file). Handling of shebang lines remains unchanged - \u201cpython\u201d refers to Python 2 in that case.python.exe\nandpythonw.exe\nhave been marked as long-path aware, which means that the 260 character path limit may no longer apply. See removing the MAX_PATH limitation for details.A\n._pth\nfile can be added to force isolated mode and fully specify all search paths to avoid registry and environment lookup. See the documentation for more information.A\npython36.zip\nfile now works as a landmark to inferPYTHONHOME\n. See the documentation for more information.\nNew Features\u00b6\nPEP 498: Formatted string literals\u00b6\nPEP 498 introduces a new kind of string literals: f-strings, or formatted string literals.\nFormatted string literals are prefixed with 'f'\nand are similar to\nthe format strings accepted by str.format()\n. They contain replacement\nfields surrounded by curly braces. 
The replacement fields are expressions,\nwhich are evaluated at run time, and then formatted using the\nformat()\nprotocol:\n>>> name = \"Fred\"\n>>> f\"He said his name is {name}.\"\n'He said his name is Fred.'\n>>> width = 10\n>>> precision = 4\n>>> value = decimal.Decimal(\"12.34567\")\n>>> f\"result: {value:{width}.{precision}}\" # nested fields\n'result: 12.35'\nSee also\n- PEP 498 \u2013 Literal String Interpolation.\nPEP written and implemented by Eric V. Smith.\nPEP 526: Syntax for variable annotations\u00b6\nPEP 484 introduced the standard for type annotations of function parameters, a.k.a. type hints. This PEP adds syntax to Python for annotating the types of variables including class variables and instance variables:\nprimes: List[int] = []\ncaptain: str # Note: no initial value!\nclass Starship:\nstats: Dict[str, int] = {}\nJust as for function annotations, the Python interpreter does not attach any\nparticular meaning to variable annotations and only stores them in the\n__annotations__\nattribute of a class or module.\nIn contrast to variable declarations in statically typed languages,\nthe goal of annotation syntax is to provide an easy way to specify structured\ntype metadata for third party tools and libraries via the abstract syntax tree\nand the __annotations__\nattribute.\nPEP 515: Underscores in Numeric Literals\u00b6\nPEP 515 adds the ability to use underscores in numeric literals for improved readability. For example:\n>>> 1_000_000_000_000_000\n1000000000000000\n>>> 0x_FF_FF_FF_FF\n4294967295\nSingle underscores are allowed between digits and after any base specifier. Leading, trailing, or multiple underscores in a row are not allowed.\nThe string formatting language also now has support\nfor the '_'\noption to signal the use of an underscore for a thousands\nseparator for floating-point presentation types and for integer\npresentation type 'd'\n. 
For integer presentation types 'b'\n,\n'o'\n, 'x'\n, and 'X'\n, underscores will be inserted every 4\ndigits:\n>>> '{:_}'.format(1000000)\n'1_000_000'\n>>> '{:_x}'.format(0xFFFFFFFF)\n'ffff_ffff'\nSee also\n- PEP 515 \u2013 Underscores in Numeric Literals\nPEP written by Georg Brandl and Serhiy Storchaka.\nPEP 525: Asynchronous Generators\u00b6\nPEP 492 introduced support for native coroutines and async\n/ await\nsyntax to Python 3.5. A notable limitation of the Python 3.5 implementation\nis that it was not possible to use await\nand yield\nin the same\nfunction body. In Python 3.6 this restriction has been lifted, making it\npossible to define asynchronous generators:\nasync def ticker(delay, to):\n\"\"\"Yield numbers from 0 to *to* every *delay* seconds.\"\"\"\nfor i in range(to):\nyield i\nawait asyncio.sleep(delay)\nThe new syntax allows for faster and more concise code.\nSee also\n- PEP 525 \u2013 Asynchronous Generators\nPEP written and implemented by Yury Selivanov.\nPEP 530: Asynchronous Comprehensions\u00b6\nPEP 530 adds support for using async for\nin list, set, dict\ncomprehensions and generator expressions:\nresult = [i async for i in aiter() if i % 2]\nAdditionally, await\nexpressions are supported in all kinds\nof comprehensions:\nresult = [await fun() for fun in funcs if await condition()]\nSee also\n- PEP 530 \u2013 Asynchronous Comprehensions\nPEP written and implemented by Yury Selivanov.\nPEP 487: Simpler customization of class creation\u00b6\nIt is now possible to customize subclass creation without using a metaclass.\nThe new __init_subclass__\nclassmethod will be called on the base class\nwhenever a new subclass is created:\nclass PluginBase:\nsubclasses = []\ndef __init_subclass__(cls, **kwargs):\nsuper().__init_subclass__(**kwargs)\ncls.subclasses.append(cls)\nclass Plugin1(PluginBase):\npass\nclass Plugin2(PluginBase):\npass\nIn order to allow zero-argument super()\ncalls to work correctly from\n__init_subclass__()\nimplementations, custom 
metaclasses must\nensure that the new __classcell__\nnamespace entry is propagated to\ntype.__new__\n(as described in Creating the class object).\nSee also\n- PEP 487 \u2013 Simpler customization of class creation\nPEP written and implemented by Martin Teichmann.\nPEP 487: Descriptor Protocol Enhancements\u00b6\nPEP 487 extends the descriptor protocol to include the new optional\n__set_name__()\nmethod. Whenever a new class is defined, the new\nmethod will be called on all descriptors included in the definition, providing\nthem with a reference to the class being defined and the name given to the\ndescriptor within the class namespace. In other words, instances of\ndescriptors can now know the attribute name of the descriptor in the\nowner class:\nclass IntField:\ndef __get__(self, instance, owner):\nreturn instance.__dict__[self.name]\ndef __set__(self, instance, value):\nif not isinstance(value, int):\nraise ValueError(f'expecting integer in {self.name}')\ninstance.__dict__[self.name] = value\n# this is the new initializer:\ndef __set_name__(self, owner, name):\nself.name = name\nclass Model:\nint_field = IntField()\nSee also\n- PEP 487 \u2013 Simpler customization of class creation\nPEP written and implemented by Martin Teichmann.\nPEP 519: Adding a file system path protocol\u00b6\nFile system paths have historically been represented as str\nor bytes\nobjects. This has led to people who write code which\noperate on file system paths to assume that such objects are only one\nof those two types (an int\nrepresenting a file descriptor\ndoes not count as that is not a file path). Unfortunately that\nassumption prevents alternative object representations of file system\npaths like pathlib\nfrom working with pre-existing code,\nincluding Python\u2019s standard library.\nTo fix this situation, a new interface represented by\nos.PathLike\nhas been defined. By implementing the\n__fspath__()\nmethod, an object signals that it\nrepresents a path. 
An object can then provide a low-level representation of a file system path as a str or bytes object. This means an object is considered path-like if it implements os.PathLike or is a str or bytes object which represents a file system path. Code can use os.fspath(), os.fsdecode(), or os.fsencode() to explicitly get a str and/or bytes representation of a path-like object.

The built-in open() function has been updated to accept os.PathLike objects, as have all relevant functions in the os and os.path modules, and most other functions and classes in the standard library. The os.DirEntry class and relevant classes in pathlib have also been updated to implement os.PathLike.

The hope is that updating the fundamental functions for operating on file system paths will lead third-party code to implicitly support all path-like objects without any code changes, or at least very minimal ones (e.g. calling os.fspath() at the beginning of code before operating on a path-like object).

Here are some examples of how the new interface allows for pathlib.Path to be used more easily and transparently with pre-existing code:

>>> import pathlib
>>> with open(pathlib.Path("README")) as f:
...     contents = f.read()
...
>>> import os.path
>>> os.path.splitext(pathlib.Path("some_file.txt"))
('some_file', '.txt')
>>> os.path.join("/a/b", pathlib.Path("c"))
'/a/b/c'
>>> import os
>>> os.fspath(pathlib.Path("some_file.txt"))
'some_file.txt'

(Implemented by Brett Cannon, Ethan Furman, Dusty Phillips, and Jelle Zijlstra.)

See also
- PEP 519 – Adding a file system path protocol. PEP written by Brett Cannon and Koos Zevenhoven.

PEP 495: Local Time Disambiguation¶
In most world locations, there have been and will be times when local clocks are moved back. In those times, intervals are introduced in which local clocks show the same time twice in the same day. 
In these situations, the information displayed on a local clock (or stored in a Python datetime instance) is insufficient to identify a particular moment in time.

PEP 495 adds the new fold attribute to instances of the datetime.datetime and datetime.time classes to differentiate between two moments in time for which local times are the same:

>>> u0 = datetime(2016, 11, 6, 4, tzinfo=timezone.utc)
>>> for i in range(4):
...     u = u0 + i*HOUR
...     t = u.astimezone(Eastern)
...     print(u.time(), 'UTC =', t.time(), t.tzname(), t.fold)
...
04:00:00 UTC = 00:00:00 EDT 0
05:00:00 UTC = 01:00:00 EDT 0
06:00:00 UTC = 01:00:00 EST 1
07:00:00 UTC = 02:00:00 EST 0

The fold attribute has the value 0 for all instances except those that represent the second (chronologically) of two ambiguous moments in time.

See also
- PEP 495 – Local Time Disambiguation. PEP written by Alexander Belopolsky and Tim Peters, implementation by Alexander Belopolsky.

PEP 529: Change Windows filesystem encoding to UTF-8¶
Representing filesystem paths is best performed with str (Unicode) rather than bytes. However, there are some situations where using bytes is sufficient and correct.

Prior to Python 3.6, data loss could result when using bytes paths on Windows. With this change, using bytes to represent paths is now supported on Windows, provided those bytes are encoded with the encoding returned by sys.getfilesystemencoding(), which now defaults to 'utf-8'.

Applications that do not use str to represent paths should use os.fsencode() and os.fsdecode() to ensure their bytes are correctly encoded. 
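As a sketch of that round trip (the file name here is arbitrary):

```python
import os

name = "señal.txt"        # arbitrary example file name
raw = os.fsencode(name)   # str -> bytes, using the filesystem encoding
back = os.fsdecode(raw)   # bytes -> str, the inverse operation
print(back == name)       # -> True
```

The round trip is lossless on every platform because both functions use the filesystem encoding together with the surrogateescape error handler.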
To revert to the previous behaviour, set\nPYTHONLEGACYWINDOWSFSENCODING\nor call\nsys._enablelegacywindowsfsencoding()\n.\nSee PEP 529 for more information and discussion of code modifications that may be required.\nPEP 528: Change Windows console encoding to UTF-8\u00b6\nThe default console on Windows will now accept all Unicode characters and\nprovide correctly read str objects to Python code. sys.stdin\n,\nsys.stdout\nand sys.stderr\nnow default to utf-8 encoding.\nThis change only applies when using an interactive console, and not when\nredirecting files or pipes. To revert to the previous behaviour for interactive\nconsole use, set PYTHONLEGACYWINDOWSSTDIO\n.\nSee also\n- PEP 528 \u2013 Change Windows console encoding to UTF-8\nPEP written and implemented by Steve Dower.\nPEP 520: Preserving Class Attribute Definition Order\u00b6\nAttributes in a class definition body have a natural ordering: the same\norder in which the names appear in the source. This order is now\npreserved in the new class\u2019s __dict__\nattribute.\nAlso, the effective default class execution namespace (returned from type.__prepare__()) is now an insertion-order-preserving mapping.\nSee also\n- PEP 520 \u2013 Preserving Class Attribute Definition Order\nPEP written and implemented by Eric Snow.\nPEP 468: Preserving Keyword Argument Order\u00b6\n**kwargs\nin a function signature is now guaranteed to be an\ninsertion-order-preserving mapping.\nSee also\n- PEP 468 \u2013 Preserving Keyword Argument Order\nPEP written and implemented by Eric Snow.\nNew dict implementation\u00b6\nThe dict type now uses a \u201ccompact\u201d representation\nbased on a proposal by Raymond Hettinger\nwhich was first implemented by PyPy.\nThe memory usage of the new dict()\nis between 20% and 25% smaller\ncompared to Python 3.5.\nThe order-preserving aspect of this new implementation is considered an implementation detail and should not be relied upon (this may change in the future, but it is desired to have this 
new dict implementation in the language for a few releases before changing the language spec to mandate order-preserving semantics for all current and future Python implementations; this also helps preserve backwards-compatibility with older versions of the language where random iteration order is still in effect, e.g. Python 3.5).\n(Contributed by INADA Naoki in bpo-27350. Idea originally suggested by Raymond Hettinger.)\nPEP 523: Adding a frame evaluation API to CPython\u00b6\nWhile Python provides extensive support to customize how code executes, one place it has not done so is in the evaluation of frame objects. If you wanted some way to intercept frame evaluation in Python there really wasn\u2019t any way without directly manipulating function pointers for defined functions.\nPEP 523 changes this by providing an API to make frame evaluation pluggable at the C level. This will allow for tools such as debuggers and JITs to intercept frame evaluation before the execution of Python code begins. This enables the use of alternative evaluation implementations for Python code, tracking frame evaluation, etc.\nThis API is not part of the limited C API and is marked as private to signal that usage of this API is expected to be limited and only applicable to very select, low-level use-cases. Semantics of the API will change with Python as necessary.\nSee also\n- PEP 523 \u2013 Adding a frame evaluation API to CPython\nPEP written by Brett Cannon and Dino Viehland.\nPYTHONMALLOC environment variable\u00b6\nThe new PYTHONMALLOC\nenvironment variable allows setting the Python\nmemory allocators and installing debug hooks.\nIt is now possible to install debug hooks on Python memory allocators on Python\ncompiled in release mode using PYTHONMALLOC=debug\n. Effects of debug hooks:\nNewly allocated memory is filled with the byte\n0xCB\nFreed memory is filled with the byte\n0xDB\nDetect violations of the Python memory allocator API. 
For example,\nPyObject_Free()\ncalled on a memory block allocated byPyMem_Malloc()\n.Detect writes before the start of a buffer (buffer underflows)\nDetect writes after the end of a buffer (buffer overflows)\nCheck that the GIL is held when allocator functions of\nPYMEM_DOMAIN_OBJ\n(ex:PyObject_Malloc()\n) andPYMEM_DOMAIN_MEM\n(ex:PyMem_Malloc()\n) domains are called.\nChecking if the GIL is held is also a new feature of Python 3.6.\nSee the PyMem_SetupDebugHooks()\nfunction for debug hooks on Python\nmemory allocators.\nIt is now also possible to force the usage of the malloc()\nallocator of\nthe C library for all Python memory allocations using PYTHONMALLOC=malloc\n.\nThis is helpful when using external memory debuggers like Valgrind on\na Python compiled in release mode.\nOn error, the debug hooks on Python memory allocators now use the\ntracemalloc\nmodule to get the traceback where a memory block was\nallocated.\nExample of fatal error on buffer overflow using\npython3.6 -X tracemalloc=5\n(store 5 frames in traces):\nDebug memory block at address p=0x7fbcd41666f8: API 'o'\n4 bytes originally requested\nThe 7 pad bytes at p-7 are FORBIDDENBYTE, as expected.\nThe 8 pad bytes at tail=0x7fbcd41666fc are not all FORBIDDENBYTE (0xfb):\nat tail+0: 0x02 *** OUCH\nat tail+1: 0xfb\nat tail+2: 0xfb\nat tail+3: 0xfb\nat tail+4: 0xfb\nat tail+5: 0xfb\nat tail+6: 0xfb\nat tail+7: 0xfb\nThe block was made by call #1233329 to debug malloc/realloc.\nData at p: 1a 2b 30 00\nMemory block allocated at (most recent call first):\nFile \"test/test_bytes.py\", line 323\nFile \"unittest/case.py\", line 600\nFile \"unittest/case.py\", line 648\nFile \"unittest/suite.py\", line 122\nFile \"unittest/suite.py\", line 84\nFatal Python error: bad trailing pad byte\nCurrent thread 0x00007fbcdbd32700 (most recent call first):\nFile \"test/test_bytes.py\", line 323 in test_hex\nFile \"unittest/case.py\", line 600 in run\nFile \"unittest/case.py\", line 648 in __call__\nFile 
"unittest/suite.py", line 122 in run
File "unittest/suite.py", line 84 in __call__
File "unittest/suite.py", line 122 in run
File "unittest/suite.py", line 84 in __call__
...

DTrace and SystemTap probing support¶
Python can now be built --with-dtrace, which enables static markers for the following events in the interpreter:

- function call/return
- garbage collection started/finished
- line of code executed

This can be used to instrument running interpreters in production, without the need to recompile specific debug builds or providing application-specific profiling/debugging code.

More details in Instrumenting CPython with DTrace and SystemTap.

The current implementation is tested on Linux and macOS. Additional markers may be added in the future.

(Contributed by Łukasz Langa in bpo-21590, based on patches by Jesús Cea Avión, David Malcolm, and Nikhil Benesch.)

Other Language Changes¶
Some smaller changes made to the core Python language are:

- A global or nonlocal statement must now textually appear before the first use of the affected name in the same scope. Previously this was a SyntaxWarning.
- It is now possible to set a special method to None to indicate that the corresponding operation is not available. For example, if a class sets __iter__() to None, the class is not iterable. (Contributed by Andrew Barnert and Ivan Levkivskyi in bpo-25958.)
- Long sequences of repeated traceback lines are now abbreviated as "[Previous line repeated {count} more times]" (see traceback for an example). (Contributed by Emanuel Barry in bpo-26823.)
- Import now raises the new exception ModuleNotFoundError (subclass of ImportError) when it cannot find a module. Code that currently checks for ImportError (in try-except) will still work. (Contributed by Eric Snow in bpo-15767.)
- Class methods relying on zero-argument super() will now work correctly when called from metaclass methods during class creation. 
(Contributed by Martin Teichmann in bpo-23722.)

New Modules¶
secrets¶
The main purpose of the new secrets module is to provide an obvious way to reliably generate cryptographically strong pseudo-random values suitable for managing secrets, such as account authentication, tokens, and similar.

Warning: the pseudo-random generators in the random module should NOT be used for security purposes. Use secrets on Python 3.6+ and os.urandom() on Python 3.5 and earlier.

See also
- PEP 506 – Adding A Secrets Module To The Standard Library. PEP written and implemented by Steven D'Aprano.

Improved Modules¶
array¶
Exhausted iterators of array.array will now stay exhausted even if the iterated array is extended. This is consistent with the behavior of other mutable sequences. (Contributed by Serhiy Storchaka in bpo-26492.)

ast¶
The new ast.Constant AST node has been added. It can be used by external AST optimizers for the purposes of constant folding. (Contributed by Victor Stinner in bpo-26146.)

asyncio¶
Starting with Python 3.6 the asyncio module is no longer provisional and its API is considered stable.

Notable changes in the asyncio module since Python 3.5.0 (all backported to 3.5.x due to the provisional status):

- The get_event_loop() function has been changed to always return the currently running loop when called from coroutines and callbacks. (Contributed by Yury Selivanov in bpo-28613.)
- The ensure_future() function and all functions that use it, such as loop.run_until_complete(), now accept all kinds of awaitable objects. (Contributed by Yury Selivanov.)
- New run_coroutine_threadsafe() function to submit coroutines to event loops from other threads. (Contributed by Vincent Michel.)
- New Transport.is_closing() method to check if the transport is closing or closed. (Contributed by Yury Selivanov.)
- The loop.create_server() method can now accept a list of hosts. 
(Contributed by Yann Sionneau.)
- New loop.create_future() method to create Future objects. This allows alternative event loop implementations, such as uvloop, to provide a faster asyncio.Future implementation. (Contributed by Yury Selivanov in bpo-27041.)
- New loop.get_exception_handler() method to get the current exception handler. (Contributed by Yury Selivanov in bpo-27040.)
- New StreamReader.readuntil() method to read data from the stream until a separator bytes sequence appears. (Contributed by Mark Korenberg.)
- The performance of StreamReader.readexactly() has been improved. (Contributed by Mark Korenberg in bpo-28370.)
- The loop.getaddrinfo() method is optimized to avoid calling the system getaddrinfo function if the address is already resolved. (Contributed by A. Jesse Jiryu Davis.)
- The loop.stop() method has been changed to stop the loop immediately after the current iteration. Any new callbacks scheduled as a result of the last iteration will be discarded. (Contributed by Guido van Rossum in bpo-25593.)
- Future.set_exception will now raise TypeError when passed an instance of the StopIteration exception. (Contributed by Chris Angelico in bpo-26221.)
- New loop.connect_accepted_socket() method to be used by servers that accept connections outside of asyncio, but that use asyncio to handle them. (Contributed by Jim Fulton in bpo-27392.)
- The TCP_NODELAY flag is now set for all TCP transports by default. (Contributed by Yury Selivanov in bpo-27456.)
- New loop.shutdown_asyncgens() to properly close pending asynchronous generators before closing the loop. (Contributed by Yury Selivanov in bpo-28003.)
- The Future and Task classes now have an optimized C implementation which makes asyncio code up to 30% faster. 
(Contributed by Yury Selivanov and INADA Naoki in bpo-26081 and bpo-28544.)\nbinascii\u00b6\nThe b2a_base64()\nfunction now accepts an optional newline\nkeyword argument to control whether the newline character is appended to the\nreturn value.\n(Contributed by Victor Stinner in bpo-25357.)\ncmath\u00b6\nThe new cmath.tau\n(\u03c4) constant has been added.\n(Contributed by Lisa Roach in bpo-12345, see PEP 628 for details.)\nNew constants: cmath.inf\nand cmath.nan\nto\nmatch math.inf\nand math.nan\n, and also cmath.infj\nand cmath.nanj\nto match the format used by complex repr.\n(Contributed by Mark Dickinson in bpo-23229.)\ncollections\u00b6\nThe new Collection\nabstract base class has been\nadded to represent sized iterable container classes.\n(Contributed by Ivan Levkivskyi, docs by Neil Girdhar in bpo-27598.)\nThe new Reversible\nabstract base class represents\niterable classes that also provide the __reversed__()\nmethod.\n(Contributed by Ivan Levkivskyi in bpo-25987.)\nThe new AsyncGenerator\nabstract base class represents\nasynchronous generators.\n(Contributed by Yury Selivanov in bpo-28720.)\nThe namedtuple()\nfunction now accepts an optional\nkeyword argument module, which, when specified, is used for\nthe __module__\nattribute of the returned named tuple class.\n(Contributed by Raymond Hettinger in bpo-17941.)\nThe verbose and rename arguments for\nnamedtuple()\nare now keyword-only.\n(Contributed by Raymond Hettinger in bpo-25628.)\nRecursive collections.deque\ninstances can now be pickled.\n(Contributed by Serhiy Storchaka in bpo-26482.)\nconcurrent.futures\u00b6\nThe ThreadPoolExecutor\nclass constructor now accepts an optional thread_name_prefix argument\nto make it possible to customize the names of the threads created by the\npool.\n(Contributed by Gregory P. Smith in bpo-27664.)\ncontextlib\u00b6\nThe contextlib.AbstractContextManager\nclass has been added to\nprovide an abstract base class for context managers. 
It provides a sensible default implementation for __enter__() which returns self and leaves __exit__() an abstract method. A matching class has been added to the typing module as typing.ContextManager. (Contributed by Brett Cannon in bpo-25609.)

datetime¶
The datetime and time classes have the new fold attribute used to disambiguate local time when necessary. Many functions in the datetime module have been updated to support local time disambiguation. See the Local Time Disambiguation section for more information. (Contributed by Alexander Belopolsky in bpo-24773.)

The datetime.strftime() and date.strftime() methods now support ISO 8601 date directives %G, %u and %V. (Contributed by Ashley Anderson in bpo-12006.)

The datetime.isoformat() function now accepts an optional timespec argument that specifies the number of additional components of the time value to include. (Contributed by Alessandro Cucci and Alexander Belopolsky in bpo-19475.)

The datetime.combine() method now accepts an optional tzinfo argument. (Contributed by Alexander Belopolsky in bpo-27661.)

decimal¶
The new Decimal.as_integer_ratio() method returns a pair (n, d) of integers that represent the given Decimal instance as a fraction, in lowest terms and with a positive denominator:

>>> Decimal('-3.14').as_integer_ratio()
(-157, 50)

(Contributed by Stefan Krah and Mark Dickinson in bpo-25928.)

distutils¶
The default_format attribute has been removed from distutils.command.sdist.sdist and the formats attribute defaults to ['gztar']. Although not anticipated, any code relying on the presence of default_format may need to be adapted. See bpo-27819 for more details.

email¶
The new email API, enabled via the policy keyword to various constructors, is no longer provisional. The email documentation has been reorganized and rewritten to focus on the new API, while retaining the old documentation for the legacy API. (Contributed by R. 
David Murray in bpo-24277.)

The email.mime classes now all accept an optional policy keyword. (Contributed by Berker Peksag in bpo-27331.)

The DecodedGenerator now supports the policy keyword.

There is a new policy attribute, message_factory, that controls what class is used by default when the parser creates new message objects. For the email.policy.compat32 policy this is Message; for the new policies it is EmailMessage. (Contributed by R. David Murray in bpo-20476.)

encodings¶
On Windows, added the 'oem' encoding to use CP_OEMCP, and the 'ansi' alias for the existing 'mbcs' encoding, which uses the CP_ACP code page. (Contributed by Steve Dower in bpo-27959.)

enum¶
Two new enumeration base classes have been added to the enum module: Flag and IntFlag. Both are used to define constants that can be combined using the bitwise operators. (Contributed by Ethan Furman in bpo-23591.)

Many standard library modules have been updated to use the IntFlag class for their constants.

The new enum.auto value can be used to assign values to enum members automatically:

>>> from enum import Enum, auto
>>> class Color(Enum):
...     red = auto()
...     blue = auto()
...     green = auto()
...
>>> list(Color)
[<Color.red: 1>, <Color.blue: 2>, <Color.green: 3>]

faulthandler¶
On Windows, the faulthandler module now installs a handler for Windows exceptions: see faulthandler.enable(). (Contributed by Victor Stinner in bpo-23848.)

fileinput¶
hook_encoded() now supports the errors argument. (Contributed by Joseph Hackman in bpo-25788.)

hashlib¶
hashlib supports OpenSSL 1.1.0. The minimum recommended version is 1.0.2. (Contributed by Christian Heimes in bpo-26470.)

BLAKE2 hash functions were added to the module. blake2b() and blake2s() are always available and support the full feature set of BLAKE2. (Contributed by Christian Heimes in bpo-26798 based on code by Dmitry Chestnykh and Samuel Neves. 
Documentation written by Dmitry Chestnykh.)

The SHA-3 hash functions sha3_224(), sha3_256(), sha3_384(), sha3_512(), and SHAKE hash functions shake_128() and shake_256() were added. (Contributed by Christian Heimes in bpo-16113. Keccak Code Package by Guido Bertoni, Joan Daemen, Michaël Peeters, Gilles Van Assche, and Ronny Van Keer.)

The password-based key derivation function scrypt() is now available with OpenSSL 1.1.0 and newer. (Contributed by Christian Heimes in bpo-27928.)

http.client¶
HTTPConnection.request() and endheaders() both now support chunked encoding request bodies. (Contributed by Demian Brecht and Rolf Krahl in bpo-12319.)

idlelib and IDLE¶
The idlelib package is being modernized and refactored to make IDLE look and work better and to make the code easier to understand, test, and improve. Part of making IDLE look better, especially on Linux and Mac, is using ttk widgets, mostly in the dialogs. As a result, IDLE no longer runs with tcl/tk 8.4. It now requires tcl/tk 8.5 or 8.6. We recommend running the latest release of either.

'Modernizing' includes renaming and consolidation of idlelib modules. The renaming of files with partial uppercase names is similar to the renaming of, for instance, Tkinter and TkFont to tkinter and tkinter.font in 3.0. As a result, imports of idlelib files that worked in 3.5 will usually not work in 3.6. At least a module name change will be needed (see idlelib/README.txt), sometimes more. (Name changes contributed by Al Sweigart and Terry Reedy in bpo-24225. Most idlelib patches since have been and will be part of the process.)

In compensation, the eventual result will be that some idlelib classes will be easier to use, with better APIs and docstrings explaining them. Additional useful information will be added to idlelib when available.

New in 3.6.2:
Multiple fixes for autocompletion. 
(Contributed by Louie Lu in bpo-15786.)

New in 3.6.3:
Module Browser (on the File menu, formerly called Class Browser) now displays nested functions and classes in addition to top-level functions and classes. (Contributed by Guilherme Polo, Cheryl Sabella, and Terry Jan Reedy in bpo-1612262.)

The IDLE features formerly implemented as extensions have been reimplemented as normal features. Their settings have been moved from the Extensions tab to other dialog tabs. (Contributed by Charles Wohlganger and Terry Jan Reedy in bpo-27099.)

The Settings dialog (Options, Configure IDLE) has been partly rewritten to improve both appearance and function. (Contributed by Cheryl Sabella and Terry Jan Reedy in multiple issues.)

New in 3.6.4:
The font sample now includes a selection of non-Latin characters so that users can better see the effect of selecting a particular font. (Contributed by Terry Jan Reedy in bpo-13802.) The sample can be edited to include other characters. (Contributed by Serhiy Storchaka in bpo-31860.)

New in 3.6.6:
Editor code context option revised. Box displays all context lines up to maxlines. Clicking on a context line jumps the editor to that line. Context colors for custom themes have been added to the Highlights tab of the Settings dialog. (Contributed by Cheryl Sabella and Terry Jan Reedy in bpo-33642, bpo-33768, and bpo-33679.)

On Windows, a new API call tells Windows that tk scales for DPI. On Windows 8.1+ or 10, with DPI compatibility properties of the Python binary unchanged, and a monitor resolution greater than 96 DPI, this should make text and lines sharper. It should otherwise have no effect. (Contributed by Terry Jan Reedy in bpo-33656.)

New in 3.6.7:
Output over N lines (50 by default) is squeezed down to a button. N can be changed in the PyShell section of the General page of the Settings dialog. Fewer, but possibly extra long, lines can be squeezed by right-clicking on the output. 
Squeezed output can be expanded in place by double-clicking the button, or sent to the clipboard or a separate window by right-clicking the button. (Contributed by Tal Einat in bpo-1529353.)

importlib¶
Import now raises the new exception ModuleNotFoundError (subclass of ImportError) when it cannot find a module. Code that currently checks for ImportError (in try-except) will still work. (Contributed by Eric Snow in bpo-15767.)

importlib.util.LazyLoader now calls create_module() on the wrapped loader, removing the restriction that importlib.machinery.BuiltinImporter and importlib.machinery.ExtensionFileLoader couldn't be used with importlib.util.LazyLoader.

importlib.util.cache_from_source(), importlib.util.source_from_cache(), and importlib.util.spec_from_file_location() now accept a path-like object.

inspect¶
The inspect.signature() function now reports the implicit .0 parameters generated by the compiler for comprehension and generator expression scopes as if they were positional-only parameters called implicit0. (Contributed by Jelle Zijlstra in bpo-19611.)

To reduce code churn when upgrading from Python 2.7 and the legacy inspect.getargspec() API, the previously documented deprecation of inspect.getfullargspec() has been reversed. While this function is convenient for single-source Python 2/3 code bases, the richer inspect.signature() interface remains the recommended approach for new code. (Contributed by Nick Coghlan in bpo-27172.)

json¶
json.load() and json.loads() now support binary input. 
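For example (a sketch; the payload here is arbitrary):

```python
import json

# The encoding of a bytes payload (UTF-8, UTF-16 or UTF-32) is
# detected automatically, so no explicit decoding step is needed.
payload = '{"id": 1, "name": "señal"}'.encode("utf-16")
print(json.loads(payload))  # -> {'id': 1, 'name': 'señal'}
```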
Encoded\nJSON should be represented using either UTF-8, UTF-16, or UTF-32.\n(Contributed by Serhiy Storchaka in bpo-17909.)\nlogging\u00b6\nThe new WatchedFileHandler.reopenIfNeeded()\nmethod has been added to add the ability to check if the log file needs to\nbe reopened.\n(Contributed by Marian Horban in bpo-24884.)\nmath\u00b6\nThe tau (\u03c4) constant has been added to the math\nand cmath\nmodules.\n(Contributed by Lisa Roach in bpo-12345, see PEP 628 for details.)\nmultiprocessing\u00b6\nProxy Objects returned by\nmultiprocessing.Manager()\ncan now be nested.\n(Contributed by Davin Potts in bpo-6766.)\nos\u00b6\nSee the summary of PEP 519 for details on how the\nos\nand os.path\nmodules now support\npath-like objects.\nscandir()\nnow supports bytes\npaths on Windows.\nA new close()\nmethod allows explicitly closing a\nscandir()\niterator. The scandir()\niterator now\nsupports the context manager protocol. If a scandir()\niterator is neither exhausted nor explicitly closed a ResourceWarning\nwill be emitted in its destructor.\n(Contributed by Serhiy Storchaka in bpo-25994.)\nOn Linux, os.urandom()\nnow blocks until the system urandom entropy pool\nis initialized to increase the security. See the PEP 524 for the rationale.\nThe Linux getrandom()\nsyscall (get random bytes) is now exposed as the new\nos.getrandom()\nfunction.\n(Contributed by Victor Stinner, part of the PEP 524)\npathlib\u00b6\npathlib\nnow supports path-like objects.\n(Contributed by Brett Cannon in bpo-27186.)\nSee the summary of PEP 519 for details.\npdb\u00b6\nThe Pdb\nclass constructor has a new optional readrc argument\nto control whether .pdbrc\nfiles should be read.\npickle\u00b6\nObjects that need __new__\ncalled with keyword arguments can now be pickled\nusing pickle protocols older than protocol version 4.\nProtocol version 4 already supports this case. 
(Contributed by Serhiy\nStorchaka in bpo-24164.)\npickletools\u00b6\npickletools.dis()\nnow outputs the implicit memo index for the\nMEMOIZE\nopcode.\n(Contributed by Serhiy Storchaka in bpo-25382.)\npydoc\u00b6\nThe pydoc\nmodule has learned to respect the MANPAGER\nenvironment variable.\n(Contributed by Matthias Klose in bpo-8637.)\nhelp()\nand pydoc\ncan now list named tuple fields in the\norder they were defined rather than alphabetically.\n(Contributed by Raymond Hettinger in bpo-24879.)\nrandom\u00b6\nThe new choices()\nfunction returns a list of elements of\nspecified size from the given population with optional weights.\n(Contributed by Raymond Hettinger in bpo-18844.)\nre\u00b6\nAdded support of modifier spans in regular expressions. Examples:\n'(?i:p)ython'\nmatches 'python'\nand 'Python'\n, but not 'PYTHON'\n;\n'(?i)g(?-i:v)r'\nmatches 'GvR'\nand 'gvr'\n, but not 'GVR'\n.\n(Contributed by Serhiy Storchaka in bpo-433028.)\nMatch object groups can be accessed by __getitem__\n, which is\nequivalent to group()\n. So mo['name']\nis now equivalent to\nmo.group('name')\n. (Contributed by Eric Smith in bpo-24454.)\nMatch\nobjects now support\nindex-like objects\nas group\nindices.\n(Contributed by Jeroen Demeyer and Xiang Zhang in bpo-27177.)\nreadline\u00b6\nAdded set_auto_history()\nto enable or disable\nautomatic addition of input to the history list. (Contributed by\nTyler Crompton in bpo-26870.)\nrlcompleter\u00b6\nPrivate and special attribute names now are omitted unless the prefix starts with underscores. A space or a colon is added after some completed keywords. 
(Contributed by Serhiy Storchaka in bpo-25011 and bpo-25209.)

shlex¶
The shlex module has much-improved shell compatibility through the new punctuation_chars argument to control which characters are treated as punctuation. (Contributed by Vinay Sajip in bpo-1521950.)

site¶
When specifying paths to add to sys.path in a .pth file, you may now specify file paths on top of directories (e.g. zip files). (Contributed by Wolfgang Langner in bpo-26587.)

sqlite3¶
sqlite3.Cursor.lastrowid now supports the REPLACE statement. (Contributed by Alex LordThorsen in bpo-16864.)

socket¶
The ioctl() function now supports the SIO_LOOPBACK_FAST_PATH control code. (Contributed by Daniel Stokes in bpo-26536.)

The getsockopt() constants SO_DOMAIN, SO_PROTOCOL, SO_PEERSEC, and SO_PASSSEC are now supported. (Contributed by Christian Heimes in bpo-26907.)

setsockopt() now supports the setsockopt(level, optname, None, optlen: int) form. (Contributed by Christian Heimes in bpo-27744.)

The socket module now supports the address family AF_ALG to interface with the Linux kernel crypto API. ALG_*, SOL_ALG and sendmsg_afalg() were added. (Contributed by Christian Heimes in bpo-27744 with support from Victor Stinner.)

New Linux constants TCP_USER_TIMEOUT and TCP_CONGESTION were added. (Contributed by Omar Sandoval in bpo-26273.)

socketserver¶
Servers based on the socketserver module, including those defined in http.server, xmlrpc.server and wsgiref.simple_server, now support the context manager protocol. (Contributed by Aviv Palivoda in bpo-26404.)

The wfile attribute of StreamRequestHandler classes now implements the io.BufferedIOBase writable interface. In particular, calling write() is now guaranteed to send the data in full. (Contributed by Martin Panter in bpo-26721.)

ssl¶
ssl supports OpenSSL 1.1.0. 
The minimum recommended version is 1.0.2. (Contributed by Christian Heimes in bpo-26470.)

3DES has been removed from the default cipher suites and ChaCha20 Poly1305 cipher suites have been added. (Contributed by Christian Heimes in bpo-27850 and bpo-27766.)

SSLContext has better default configuration for options and ciphers. (Contributed by Christian Heimes in bpo-28043.)

An SSL session can be copied from one client-side connection to another with the new SSLSession class. TLS session resumption can speed up the initial handshake, reduce latency and improve performance. (Contributed by Christian Heimes in bpo-19500 based on a draft by Alex Warhawk.)

The new get_ciphers() method can be used to get a list of enabled ciphers in order of cipher priority.

All constants and flags have been converted to IntEnum and IntFlag. (Contributed by Christian Heimes in bpo-28025.)

Server- and client-side specific TLS protocols for SSLContext were added. (Contributed by Christian Heimes in bpo-28085.)

Added ssl.SSLContext.post_handshake_auth to enable and ssl.SSLSocket.verify_client_post_handshake() to initiate TLS 1.3 post-handshake authentication. (Contributed by Christian Heimes in gh-78851.)

statistics¶
A new harmonic_mean() function has been added. (Contributed by Steven D'Aprano in bpo-27181.)

struct¶
struct now supports IEEE 754 half-precision floats via the 'e' format specifier. (Contributed by Eli Stevens and Mark Dickinson in bpo-11734.)

subprocess¶
The subprocess.Popen destructor now emits a ResourceWarning if the child process is still running. Use the context manager protocol (with proc: ...) or explicitly call the wait() method to read the exit status of the child process. (Contributed by Victor Stinner in bpo-26741.)

The subprocess.Popen constructor and all functions that pass arguments through to it now accept encoding and errors arguments. 
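A sketch of the new arguments (using sys.executable so the child process is just another Python interpreter):

```python
import subprocess
import sys

# With encoding= set, the captured stream is str rather than bytes,
# and universal newline translation is applied.
proc = subprocess.run(
    [sys.executable, "-c", "print('hello')"],
    stdout=subprocess.PIPE,
    encoding="utf-8",
)
print(repr(proc.stdout))  # -> 'hello\n'
```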
Specifying either\nof these will enable text mode for the stdin, stdout and stderr streams.\n(Contributed by Steve Dower in bpo-6135.)\nsys\u00b6\nThe new getfilesystemencodeerrors()\nfunction returns the name of\nthe error mode used to convert between Unicode filenames and bytes filenames.\n(Contributed by Steve Dower in bpo-27781.)\nOn Windows the return value of the getwindowsversion()\nfunction\nnow includes the platform_version field which contains the accurate major\nversion, minor version and build number of the current operating system,\nrather than the version that is being emulated for the process\n(Contributed by Steve Dower in bpo-27932.)\ntelnetlib\u00b6\ntelnetlib.Telnet\nis now a context manager (contributed by\nSt\u00e9phane Wirtel in bpo-25485).\ntime\u00b6\nThe struct_time\nattributes tm_gmtoff\nand\ntm_zone\nare now available on all platforms.\ntimeit\u00b6\nThe new Timer.autorange()\nconvenience\nmethod has been added to call Timer.timeit()\nrepeatedly so that the total run time is greater or equal to 200 milliseconds.\n(Contributed by Steven D\u2019Aprano in bpo-6422.)\ntimeit\nnow warns when there is substantial (4x) variance\nbetween best and worst times.\n(Contributed by Serhiy Storchaka in bpo-23552.)\ntkinter\u00b6\nAdded methods Variable.trace_add()\n,\nVariable.trace_remove()\nand trace_info()\nin the tkinter.Variable\nclass. 
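A sketch of the new tracing API (it needs a Tk display, so the example is guarded to degrade gracefully in headless environments):

```python
import tkinter

try:
    root = tkinter.Tk()
except tkinter.TclError:
    print("no display available")
else:
    var = tkinter.StringVar(root)
    # trace_add returns a callback name that trace_remove accepts
    name = var.trace_add("write", lambda *args: None)
    print(var.trace_info())  # registered (modes, callback-name) pairs
    var.trace_remove("write", name)
    root.destroy()
```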
They replace old methods\ntrace_variable()\n, trace()\n,\ntrace_vdelete()\nand\ntrace_vinfo()\nthat use obsolete Tcl commands and might\nnot work in future versions of Tcl.\n(Contributed by Serhiy Storchaka in bpo-22115).\ntraceback\u00b6\nBoth the traceback module and the interpreter\u2019s builtin exception display now abbreviate long sequences of repeated lines in tracebacks as shown in the following example:\n>>> def f(): f()\n...\n>>> f()\nTraceback (most recent call last):\nFile \"\", line 1, in \nFile \"\", line 1, in f\nFile \"\", line 1, in f\nFile \"\", line 1, in f\n[Previous line repeated 995 more times]\nRecursionError: maximum recursion depth exceeded\n(Contributed by Emanuel Barry in bpo-26823.)\ntracemalloc\u00b6\nThe tracemalloc\nmodule now supports tracing memory allocations in\nmultiple different address spaces.\nThe new DomainFilter\nfilter class has been added\nto filter block traces by their address space (domain).\n(Contributed by Victor Stinner in bpo-26588.)\ntyping\u00b6\nSince the typing\nmodule is provisional,\nall changes introduced in Python 3.6 have also been\nbackported to Python 3.5.x.\nThe typing\nmodule has a much improved support for generic type\naliases. For example Dict[str, Tuple[S, T]]\nis now a valid\ntype annotation.\n(Contributed by Guido van Rossum in Github #195.)\nThe typing.ContextManager\nclass has been added for\nrepresenting contextlib.AbstractContextManager\n.\n(Contributed by Brett Cannon in bpo-25609.)\nThe typing.Collection\nclass has been added for\nrepresenting collections.abc.Collection\n.\n(Contributed by Ivan Levkivskyi in bpo-27598.)\nThe typing.ClassVar\ntype construct has been added to\nmark class variables. 
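A minimal sketch of ClassVar in use (the class and attribute names are illustrative):

```python
from typing import ClassVar

class Counter:
    # ClassVar marks `total` as shared class state; a static type
    # checker will reject assignments like `instance.total = 1`.
    total: ClassVar[int] = 0

    def __init__(self) -> None:
        Counter.total += 1

Counter()
Counter()
print(Counter.total)  # 2
```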
As introduced in PEP 526, a variable annotation\nwrapped in ClassVar indicates that a given attribute is intended to be used as\na class variable and should not be set on instances of that class.\n(Contributed by Ivan Levkivskyi in Github #280.)\nA new TYPE_CHECKING\nconstant has been added that is assumed to be\nTrue\nby static type checkers, but is False\nat runtime.\n(Contributed by Guido van Rossum in Github #230.)\nA new NewType()\nhelper function has been added to create\nlightweight distinct types for annotations:\nfrom typing import NewType\nUserId = NewType('UserId', int)\nsome_id = UserId(524313)\nThe static type checker will treat the new type as if it were a subclass of the original type. (Contributed by Ivan Levkivskyi in Github #189.)\nunicodedata\u00b6\nThe unicodedata\nmodule now uses data from Unicode 9.0.0.\n(Contributed by Benjamin Peterson.)\nunittest.mock\u00b6\nThe Mock\nclass has the following improvements:\nTwo new methods,\nMock.assert_called()\nand Mock.assert_called_once()\n, have been added to check if the mock object was called. (Contributed by Amit Saha in bpo-26323.)The\nMock.reset_mock()\nmethod now has two optional keyword-only arguments: return_value and side_effect. (Contributed by Kushal Das in bpo-21271.)\nurllib.request\u00b6\nIf an HTTP request has a file or iterable body (other than a\nbytes object) but no Content-Length\nheader, rather than\nraising an error, AbstractHTTPHandler\nnow falls back to using chunked transfer encoding.\n(Contributed by Demian Brecht and Rolf Krahl in bpo-12319.)\nurllib.robotparser\u00b6\nRobotFileParser\nnow supports the Crawl-delay\nand\nRequest-rate\nextensions.\n(Contributed by Nikolay Bogoychev in bpo-16099.)\nvenv\u00b6\nvenv\naccepts a new parameter --prompt\n. This parameter provides an\nalternative prefix for the virtual environment. 
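The same prompt can be set programmatically through venv.EnvBuilder (a sketch: with_pip=False keeps creation fast, and the activate-script check assumes a POSIX bin/ layout):

```python
import tempfile
import venv
from pathlib import Path

# The prompt argument mirrors the new --prompt CLI flag.
target = Path(tempfile.mkdtemp()) / "env"
venv.EnvBuilder(with_pip=False, prompt="demo").create(target)

activate = target / "bin" / "activate"  # POSIX layout assumed
if activate.exists():
    # the chosen prompt is embedded in the activation script
    print("demo" in activate.read_text())
```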
(Proposed by \u0141ukasz Balcerzak\nand ported to 3.6 by St\u00e9phane Wirtel in bpo-22829.)\nwarnings\u00b6\nA new optional source parameter has been added to the\nwarnings.warn_explicit()\nfunction: the destroyed object which emitted a\nResourceWarning\n. A source attribute has also been added to\nwarnings.WarningMessage\n(contributed by Victor Stinner in\nbpo-26568 and bpo-26567).\nWhen a ResourceWarning\nwarning is logged, the tracemalloc\nmodule is now\nused to try to retrieve the traceback where the destroyed object was allocated.\nExample with the script example.py\n:\nimport warnings\ndef func():\nreturn open(__file__)\nf = func()\nf = None\nOutput of the command python3.6 -Wd -X tracemalloc=5 example.py\n:\nexample.py:7: ResourceWarning: unclosed file <_io.TextIOWrapper name='example.py' mode='r' encoding='UTF-8'>\nf = None\nObject allocated at (most recent call first):\nFile \"example.py\", lineno 4\nreturn open(__file__)\nFile \"example.py\", lineno 6\nf = func()\nThe \u201cObject allocated at\u201d traceback is new and is only displayed if\ntracemalloc\nis tracing Python memory allocations and if the\nwarnings\nmodule was already imported.\nwinreg\u00b6\nAdded the 64-bit integer type REG_QWORD\n.\n(Contributed by Clement Rouault in bpo-23026.)\nwinsound\u00b6\nAllowed keyword arguments to be passed to Beep\n,\nMessageBeep\n, and PlaySound\n(bpo-27982).\nxmlrpc.client\u00b6\nThe xmlrpc.client\nmodule now supports unmarshalling\nadditional data types used by the Apache XML-RPC implementation\nfor numerics and None\n.\n(Contributed by Serhiy Storchaka in bpo-26885.)\nzipfile\u00b6\nA new ZipInfo.from_file()\nclass method\nallows making a ZipInfo\ninstance from a filesystem file.\nA new ZipInfo.is_dir()\nmethod can be used\nto check if the ZipInfo\ninstance represents a directory.\n(Contributed by Thomas Kluyver in bpo-26039.)\nThe ZipFile.open()\nmethod can now be used to\nwrite data into a ZIP file, as well as for extracting data.\n(Contributed by Thomas 
Kluyver in bpo-26039.)\nzlib\u00b6\nThe compress()\nand decompress()\nfunctions now accept\nkeyword arguments.\n(Contributed by Aviv Palivoda in bpo-26243 and\nXiang Zhang in bpo-16764 respectively.)\nOptimizations\u00b6\nThe Python interpreter now uses a 16-bit wordcode instead of bytecode which made a number of opcode optimizations possible. (Contributed by Demur Rumed with input and reviews from Serhiy Storchaka and Victor Stinner in bpo-26647 and bpo-28050.)\nThe\nasyncio.Future\nclass now has an optimized C implementation. (Contributed by Yury Selivanov and INADA Naoki in bpo-26081.)The\nasyncio.Task\nclass now has an optimized C implementation. (Contributed by Yury Selivanov in bpo-28544.)Various implementation improvements in the\ntyping\nmodule (such as caching of generic types) allow up to 30 times performance improvements and reduced memory footprint.The ASCII decoder is now up to 60 times as fast for error handlers\nsurrogateescape\n,ignore\nandreplace\n(Contributed by Victor Stinner in bpo-24870).The ASCII and the Latin1 encoders are now up to 3 times as fast for the error handler\nsurrogateescape\n(Contributed by Victor Stinner in bpo-25227).The UTF-8 encoder is now up to 75 times as fast for error handlers\nignore\n,replace\n,surrogateescape\n,surrogatepass\n(Contributed by Victor Stinner in bpo-25267).The UTF-8 decoder is now up to 15 times as fast for error handlers\nignore\n,replace\nandsurrogateescape\n(Contributed by Victor Stinner in bpo-25301).bytes % args\nis now up to 2 times faster. (Contributed by Victor Stinner in bpo-25349).bytearray % args\nis now between 2.5 and 5 times faster. (Contributed by Victor Stinner in bpo-25399).Optimize\nbytes.fromhex()\nandbytearray.fromhex()\n: they are now between 2x and 3.5x faster. (Contributed by Victor Stinner in bpo-25401).Optimize\nbytes.replace(b'', b'.')\nandbytearray.replace(b'', b'.')\n: up to 80% faster. 
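The byte-string optimizations above change speed only, not semantics; for instance:

```python
# fromhex and empty-pattern replace behave exactly as before,
# just faster in 3.6.
assert bytes.fromhex("deadbeef") == b"\xde\xad\xbe\xef"
assert bytearray.fromhex("ff00") == bytearray(b"\xff\x00")
print(b"abc".replace(b"", b"."))  # b'.a.b.c.'
```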
(Contributed by Josh Snider in bpo-26574).Allocator functions of the\nPyMem_Malloc()\ndomain (PYMEM_DOMAIN_MEM\n) now use the pymalloc memory allocator instead ofmalloc()\nfunction of the C library. The pymalloc allocator is optimized for objects smaller or equal to 512 bytes with a short lifetime, and usemalloc()\nfor larger memory blocks. (Contributed by Victor Stinner in bpo-26249).pickle.load()\nandpickle.loads()\nare now up to 10% faster when deserializing many small objects (Contributed by Victor Stinner in bpo-27056).Passing keyword arguments to a function has an overhead in comparison with passing positional arguments. Now in extension functions implemented with using Argument Clinic this overhead is significantly decreased. (Contributed by Serhiy Storchaka in bpo-27574).\nOptimized\nglob()\nandiglob()\nfunctions in theglob\nmodule; they are now about 3\u20136 times faster. (Contributed by Serhiy Storchaka in bpo-25596).Optimized globbing in\npathlib\nby usingos.scandir()\n; it is now about 1.5\u20134 times faster. (Contributed by Serhiy Storchaka in bpo-26032).xml.etree.ElementTree\nparsing, iteration and deepcopy performance has been significantly improved. (Contributed by Serhiy Storchaka in bpo-25638, bpo-25873, and bpo-25869.)Creation of\nfractions.Fraction\ninstances from floats and decimals is now 2 to 3 times faster. (Contributed by Serhiy Storchaka in bpo-25971.)\nBuild and C API Changes\u00b6\nPython now requires some C99 support in the toolchain to build. Most notably, Python now uses standard integer types and macros in place of custom macros like\nPY_LONG_LONG\n. For more information, see PEP 7 and bpo-17884.Cross-compiling CPython with the Android NDK and the Android API level set to 21 (Android 5.0 Lollipop) or greater runs successfully. While Android is not yet a supported platform, the Python test suite runs on the Android emulator with only about 16 tests failures. 
See the Android meta-issue bpo-26865.\nThe\n--enable-optimizations\nconfigure flag has been added. Turning it on will activate expensive optimizations like PGO. (Original patch by Alecsandru Patrascu of Intel in bpo-26359.)The GIL must now be held when allocator functions of\nPYMEM_DOMAIN_OBJ\n(ex:PyObject_Malloc()\n) andPYMEM_DOMAIN_MEM\n(ex:PyMem_Malloc()\n) domains are called.New\nPy_FinalizeEx()\nAPI which indicates if flushing buffered data failed. (Contributed by Martin Panter in bpo-5319.)PyArg_ParseTupleAndKeywords()\nnow supports positional-only parameters. Positional-only parameters are defined by empty names. (Contributed by Serhiy Storchaka in bpo-26282).PyTraceback_Print\nmethod now abbreviates long sequences of repeated lines as\"[Previous line repeated {count} more times]\"\n. (Contributed by Emanuel Barry in bpo-26823.)The new\nPyErr_SetImportErrorSubclass()\nfunction allows for specifying a subclass ofImportError\nto raise. (Contributed by Eric Snow in bpo-15767.)The new\nPyErr_ResourceWarning()\nfunction can be used to generate aResourceWarning\nproviding the source of the resource allocation. (Contributed by Victor Stinner in bpo-26567.)The new\nPyOS_FSPath()\nfunction returns the file system representation of a path-like object. (Contributed by Brett Cannon in bpo-27186.)The\nPyUnicode_FSConverter()\nandPyUnicode_FSDecoder()\nfunctions will now accept path-like objects.\nOther Improvements\u00b6\nWhen\n--version\n(short form:-V\n) is supplied twice, Python printssys.version\nfor detailed information.$ ./python -VV Python 3.6.0b4+ (3.6:223967b49e49+, Nov 21 2016, 20:55:04) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)]\nDeprecated\u00b6\nNew Keywords\u00b6\nasync\nand await\nare not recommended to be used as variable, class,\nfunction or module names. Introduced by PEP 492 in Python 3.5, they will\nbecome proper keywords in Python 3.7. 
Starting in Python 3.6, the use of\nasync\nor await\nas names will generate a DeprecationWarning\n.\nDeprecated Python behavior\u00b6\nRaising the StopIteration\nexception inside a generator will now\ngenerate a DeprecationWarning\n, and will trigger a RuntimeError\nin Python 3.7. See PEP 479: Change StopIteration handling inside generators for details.\nThe __aiter__()\nmethod is now expected to return an asynchronous\niterator directly instead of returning an awaitable as previously.\nDoing the latter will trigger a DeprecationWarning\n. Backward\ncompatibility will be removed in Python 3.7.\n(Contributed by Yury Selivanov in bpo-27243.)\nA backslash-character pair that is not a valid escape sequence now generates\na DeprecationWarning\n. Although this will eventually become a\nSyntaxError\n, that will not be for several Python releases.\n(Contributed by Emanuel Barry in bpo-27364.)\nWhen performing a relative import, falling back on __name__\nand\n__path__\nfrom the calling module when __spec__\nor\n__package__\nare not defined now raises an ImportWarning\n.\n(Contributed by Rose Ames in bpo-25791.)\nDeprecated Python modules, functions and methods\u00b6\nasynchat\u00b6\nThe asynchat\nmodule has been deprecated in favor of asyncio\n.\n(Contributed by Mariatta in bpo-25002.)\nasyncore\u00b6\nThe asyncore\nmodule has been deprecated in favor of asyncio\n.\n(Contributed by Mariatta in bpo-25002.)\ndbm\u00b6\nUnlike other dbm\nimplementations, the dbm.dumb\nmodule\ncreates databases with the 'rw'\nmode and allows modifying the database\nopened with the 'r'\nmode. This behavior is now deprecated and will\nbe removed in 3.8.\n(Contributed by Serhiy Storchaka in bpo-21708.)\ndistutils\u00b6\nThe undocumented extra_path\nargument to the\ndistutils.Distribution\nconstructor is now considered deprecated\nand will raise a warning if set. Support for this parameter will be\nremoved in a future Python release. 
See bpo-27919 for details.\ngrp\u00b6\nThe support of non-integer arguments in getgrgid()\nhas been\ndeprecated.\n(Contributed by Serhiy Storchaka in bpo-26129.)\nimportlib\u00b6\nThe importlib.machinery.SourceFileLoader.load_module()\nand\nimportlib.machinery.SourcelessFileLoader.load_module()\nmethods\nare now deprecated. They were the only remaining implementations of\nimportlib.abc.Loader.load_module()\nin importlib\nthat had not\nbeen deprecated in previous versions of Python in favour of\nimportlib.abc.Loader.exec_module()\n.\nThe importlib.machinery.WindowsRegistryFinder\nclass is now\ndeprecated. As of 3.6.0, it is still added to sys.meta_path\nby\ndefault (on Windows), but this may change in future releases.\nos\u00b6\nUndocumented support of general bytes-like objects\nas paths in os\nfunctions, compile()\nand similar functions is\nnow deprecated.\n(Contributed by Serhiy Storchaka in bpo-25791 and bpo-26754.)\nre\u00b6\nSupport for inline flags (?letters)\nin the middle of the regular\nexpression has been deprecated and will be removed in a future Python\nversion. Flags at the start of a regular expression are still allowed.\n(Contributed by Serhiy Storchaka in bpo-22493.)\nssl\u00b6\nOpenSSL 0.9.8, 1.0.0 and 1.0.1 are deprecated and no longer supported.\nIn the future the ssl\nmodule will require at least OpenSSL 1.0.2 or\n1.1.0.\nSSL-related arguments like certfile\n, keyfile\nand check_hostname\nin ftplib\n, http.client\n, imaplib\n, poplib\n,\nand smtplib\nhave been deprecated in favor of context\n.\n(Contributed by Christian Heimes in bpo-28022.)\nA couple of protocols and functions of the ssl\nmodule are now\ndeprecated. Some features will no longer be available in future versions\nof OpenSSL. Other features are deprecated in favor of a different API.\n(Contributed by Christian Heimes in bpo-28022 and bpo-26470.)\ntkinter\u00b6\nThe tkinter.tix\nmodule is now deprecated. 
tkinter\nusers\nshould use tkinter.ttk\ninstead.\nvenv\u00b6\nThe pyvenv\nscript has been deprecated in favour of python3 -m venv\n.\nThis prevents confusion as to what Python interpreter pyvenv\nis\nconnected to and thus what Python interpreter will be used by the virtual\nenvironment. (Contributed by Brett Cannon in bpo-25154.)\nxml\u00b6\nAs mitigation against DTD and external entity retrieval, the\nxml.dom.minidom\nandxml.sax\nmodules no longer process external entities by default. (Contributed by Christian Heimes in gh-61441.)\nDeprecated functions and types of the C API\u00b6\nUndocumented functions PyUnicode_AsEncodedObject()\n,\nPyUnicode_AsDecodedObject()\n, PyUnicode_AsEncodedUnicode()\nand PyUnicode_AsDecodedUnicode()\nare deprecated now.\nUse the generic codec based API instead.\nDeprecated Build Options\u00b6\nThe --with-system-ffi\nconfigure flag is now on by default on non-macOS\nUNIX platforms. It may be disabled by using --without-system-ffi\n, but\nusing the flag is deprecated and will not be accepted in Python 3.7.\nmacOS is unaffected by this change. Note that many OS distributors already\nuse the --with-system-ffi\nflag when building their system Python.\nRemoved\u00b6\nAPI and Feature Removals\u00b6\nUnknown escapes consisting of\n'\\'\nand an ASCII letter in regular expressions will now cause an error. In replacement templates forre.sub()\nthey are still allowed, but deprecated. There.LOCALE\nflag can now only be used with binary patterns.inspect.getmoduleinfo()\nwas removed (was deprecated since CPython 3.3).inspect.getmodulename()\nshould be used for obtaining the module name for a given path. (Contributed by Yury Selivanov in bpo-13248.)traceback.Ignore\nclass andtraceback.usage\n,traceback.modname\n,traceback.fullmodname\n,traceback.find_lines_from_code\n,traceback.find_lines\n,traceback.find_strings\n,traceback.find_executable_lines\nmethods were removed from thetraceback\nmodule. 
They were undocumented methods deprecated since Python 3.2 and equivalent functionality is available from private methods.The\ntk_menuBar()\nandtk_bindForTraversal()\ndummy methods intkinter\nwidget classes were removed (corresponding Tk commands were obsolete since Tk 4.0).The\nopen()\nmethod of thezipfile.ZipFile\nclass no longer supports the'U'\nmode (was deprecated since Python 3.4). Useio.TextIOWrapper\nfor reading compressed text files in universal newlines mode.The undocumented\nIN\n,CDROM\n,DLFCN\n,TYPES\n,CDIO\n, andSTROPTS\nmodules have been removed. They had been available in the platform specificLib/plat-*/\ndirectories, but were chronically out of date, inconsistently available across platforms, and unmaintained. The script that created these modules is still available in the source distribution at Tools/scripts/h2py.py.The deprecated\nasynchat.fifo\nclass has been removed.\nPorting to Python 3.6\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code.\nChanges in \u2018python\u2019 Command Behavior\u00b6\nThe output of a special Python build with defined\nCOUNT_ALLOCS\n,SHOW_ALLOC_COUNT\norSHOW_TRACK_COUNT\nmacros is now off by default. It can be re-enabled using the-X showalloccount\noption. It now outputs tostderr\ninstead ofstdout\n. (Contributed by Serhiy Storchaka in bpo-23034.)\nChanges in the Python API\u00b6\nopen()\nwill no longer allow combining the'U'\nmode flag with'+'\n. (Contributed by Jeff Balogh and John O\u2019Connor in bpo-2091.)sqlite3\nno longer implicitly commits an open transaction before DDL statements.On Linux,\nos.urandom()\nnow blocks until the system urandom entropy pool is initialized to increase the security.When\nimportlib.abc.Loader.exec_module()\nis defined,importlib.abc.Loader.create_module()\nmust also be defined.PyErr_SetImportError()\nnow setsTypeError\nwhen its msg argument is not set. 
Previously onlyNULL\nwas returned.The format of the\nco_lnotab\nattribute of code objects changed to support a negative line number delta. By default, Python does not emit bytecode with a negative line number delta. Functions usingframe.f_lineno\n,PyFrame_GetLineNumber()\norPyCode_Addr2Line()\nare not affected. Functions directly decodingco_lnotab\nshould be updated to use a signed 8-bit integer type for the line number delta, but this is only required to support applications using a negative line number delta. SeeObjects/lnotab_notes.txt\nfor theco_lnotab\nformat and how to decode it, and see the PEP 511 for the rationale.The functions in the\ncompileall\nmodule now return booleans instead of1\nor0\nto represent success or failure, respectively. Thanks to booleans being a subclass of integers, this should only be an issue if you were doing identity checks for1\nor0\n. See bpo-25768.Reading the\nport\nattribute ofurllib.parse.urlsplit()\nandurlparse()\nresults now raisesValueError\nfor out-of-range values, rather than returningNone\n. See bpo-20059.The\nimp\nmodule now raises aDeprecationWarning\ninstead ofPendingDeprecationWarning\n.The following modules have had missing APIs added to their\n__all__\nattributes to match the documented APIs:calendar\n,cgi\n,csv\n,ElementTree\n,enum\n,fileinput\n,ftplib\n,logging\n,mailbox\n,mimetypes\n,optparse\n,plistlib\n,smtpd\n,subprocess\n,tarfile\n,threading\nandwave\n. This means they will export new symbols whenimport *\nis used. (Contributed by Joel Taddei and Jacek Ko\u0142odziej in bpo-23883.)When performing a relative import, if\n__package__\ndoes not compare equal to__spec__.parent\nthenImportWarning\nis raised. (Contributed by Brett Cannon in bpo-25791.)When a relative import is performed and no parent package is known, then\nImportError\nwill be raised. Previously,SystemError\ncould be raised. 
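A minimal reproduction of the new exception type (the module name and namespace are illustrative):

```python
# A relative import with no known parent package now raises
# ImportError (not SystemError).
ns = {"__name__": "__main__", "__package__": None, "__spec__": None}
try:
    exec("from . import helper", ns)  # 'helper' is a made-up name
except ImportError as exc:
    print(type(exc).__name__)  # ImportError
```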
(Contributed by Brett Cannon in bpo-18018.)Servers based on the\nsocketserver\nmodule, including those defined inhttp.server\n,xmlrpc.server\nandwsgiref.simple_server\n, now only catch exceptions derived fromException\n. Therefore if a request handler raises an exception likeSystemExit\norKeyboardInterrupt\n,handle_error()\nis no longer called, and the exception will stop a single-threaded server. (Contributed by Martin Panter in bpo-23430.)spwd.getspnam()\nnow raises aPermissionError\ninstead ofKeyError\nif the user doesn\u2019t have privileges.The\nsocket.socket.close()\nmethod now raises an exception if an error (e.g.EBADF\n) was reported by the underlying system call. (Contributed by Martin Panter in bpo-26685.)The decode_data argument for the\nsmtpd.SMTPChannel\nandsmtpd.SMTPServer\nconstructors is nowFalse\nby default. This means that the argument passed toprocess_message()\nis now a bytes object by default, andprocess_message()\nwill be passed keyword arguments. Code that has already been updated in accordance with the deprecation warning generated by 3.5 will not be affected.All optional arguments of the\ndump()\n,dumps()\n,load()\nandloads()\nfunctions andJSONEncoder\nandJSONDecoder\nclass constructors in thejson\nmodule are now keyword-only. (Contributed by Serhiy Storchaka in bpo-18726.)Subclasses of\ntype\nwhich don\u2019t overridetype.__new__\nmay no longer use the one-argument form to get the type of an object.As part of PEP 487, the handling of keyword arguments passed to\ntype\n(other than the metaclass hint,metaclass\n) is now consistently delegated toobject.__init_subclass__()\n. This means thattype.__new__\nandtype.__init__\nboth now accept arbitrary keyword arguments, butobject.__init_subclass__()\n(which is called fromtype.__new__\n) will reject them by default. 
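The delegation described above means a plain base class can consume class-creation keywords via __init_subclass__ (a sketch; the names are illustrative):

```python
class Registered:
    registry = []

    # Keyword arguments from the class statement arrive here,
    # per PEP 487's delegation through type.__new__.
    def __init_subclass__(cls, label=None, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.label = label
        Registered.registry.append(cls)

class Widget(Registered, label="widget"):
    pass

print(Widget.label)  # widget
```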
Custom metaclasses accepting additional keyword arguments will need to adjust their calls totype.__new__\n(whether direct or viasuper\n) accordingly.In\ndistutils.command.sdist.sdist\n, thedefault_format\nattribute has been removed and is no longer honored. Instead, the gzipped tarfile format is the default on all platforms and no platform-specific selection is made. In environments where distributions are built on Windows and zip distributions are required, configure the project with asetup.cfg\nfile containing the following:[sdist] formats=zip\nThis behavior has also been backported to earlier Python versions by Setuptools 26.0.0.\nIn the\nurllib.request\nmodule and thehttp.client.HTTPConnection.request()\nmethod, if no Content-Length header field has been specified and the request body is a file object, it is now sent with HTTP 1.1 chunked encoding. If a file object has to be sent to a HTTP 1.0 server, the Content-Length value now has to be specified by the caller. (Contributed by Demian Brecht and Rolf Krahl with tweaks from Martin Panter in bpo-12319.)The\nDictReader\nnow returns rows of typeOrderedDict\n. (Contributed by Steve Holden in bpo-27842.)The\ncrypt.METHOD_CRYPT\nwill no longer be added tocrypt.methods\nif unsupported by the platform. (Contributed by Victor Stinner in bpo-25287.)The verbose and rename arguments for\nnamedtuple()\nare now keyword-only. (Contributed by Raymond Hettinger in bpo-25628.)On Linux,\nctypes.util.find_library()\nnow looks inLD_LIBRARY_PATH\nfor shared libraries. (Contributed by Vinay Sajip in bpo-9998.)The\nimaplib.IMAP4\nclass now handles flags containing the']'\ncharacter in messages sent from the server to improve real-world compatibility. (Contributed by Lita Cho in bpo-21815.)The\nmmap.mmap.write()\nfunction now returns the number of bytes written like other write methods. (Contributed by Jakub Stasiak in bpo-26335.)The\npkgutil.iter_modules()\nandpkgutil.walk_packages()\nfunctions now returnModuleInfo\nnamed tuples. 
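The ModuleInfo named tuple lets callers use attribute access instead of positional indexing:

```python
import pkgutil

# Each item is ModuleInfo(module_finder, name, ispkg).
for info in pkgutil.iter_modules():
    print(info.name, info.ispkg)
    break
```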
(Contributed by Ramchandra Apte in bpo-17211.)re.sub()\nnow raises an error for invalid numerical group references in replacement templates even if the pattern is not found in the string. The error message for invalid group references now includes the group index and the position of the reference. (Contributed by SilentGhost, Serhiy Storchaka in bpo-25953.)zipfile.ZipFile\nwill now raiseNotImplementedError\nfor unrecognized compression values. Previously a plainRuntimeError\nwas raised. Additionally, callingZipFile\nmethods on a closed ZipFile or calling thewrite()\nmethod on a ZipFile created with mode'r'\nwill raise aValueError\n. Previously, aRuntimeError\nwas raised in those scenarios.when custom metaclasses are combined with zero-argument\nsuper()\nor direct references from methods to the implicit__class__\nclosure variable, the implicit__classcell__\nnamespace entry must now be passed up totype.__new__\nfor initialisation. Failing to do so will result in aDeprecationWarning\nin Python 3.6 and aRuntimeError\nin Python 3.8.With the introduction of\nModuleNotFoundError\n, import system consumers may start expecting import system replacements to raise that more specific exception when appropriate, rather than the less-specificImportError\n. To provide future compatibility with such consumers, implementers of alternative import systems that completely replace__import__()\nwill need to update their implementations to raise the new subclass when a module can\u2019t be found at all. Implementers of compliant plugins to the default import system shouldn\u2019t need to make any changes, as the default import system will raise the new subclass when appropriate.\nChanges in the C API\u00b6\nThe\nPyMem_Malloc()\nallocator family now uses the pymalloc allocator rather than the systemmalloc()\n. Applications callingPyMem_Malloc()\nwithout holding the GIL can now crash. 
Set thePYTHONMALLOC\nenvironment variable todebug\nto validate the usage of memory allocators in your application. See bpo-26249.Py_Exit()\n(and the main interpreter) now override the exit status with 120 if flushing buffered data failed. See bpo-5319.\nCPython bytecode changes\u00b6\nThere have been several major changes to the bytecode in Python 3.6.\nThe Python interpreter now uses a 16-bit wordcode instead of bytecode. (Contributed by Demur Rumed with input and reviews from Serhiy Storchaka and Victor Stinner in bpo-26647 and bpo-28050.)\nThe new\nFORMAT_VALUE\nandBUILD_STRING\nopcodes as part of the formatted string literal implementation. (Contributed by Eric Smith in bpo-25483 and Serhiy Storchaka in bpo-27078.)The new\nBUILD_CONST_KEY_MAP\nopcode to optimize the creation of dictionaries with constant keys. (Contributed by Serhiy Storchaka in bpo-27140.)The function call opcodes have been heavily reworked for better performance and simpler implementation. The\nMAKE_FUNCTION\n,CALL_FUNCTION\n,CALL_FUNCTION_KW\nandBUILD_MAP_UNPACK_WITH_CALL\nopcodes have been modified, the newCALL_FUNCTION_EX\nandBUILD_TUPLE_UNPACK_WITH_CALL\nhave been added, andCALL_FUNCTION_VAR\n,CALL_FUNCTION_VAR_KW\nandMAKE_CLOSURE\nopcodes have been removed. (Contributed by Demur Rumed in bpo-27095, and Serhiy Storchaka in bpo-27213, bpo-28257.)The new\nSETUP_ANNOTATIONS\nandSTORE_ANNOTATION\nopcodes have been added to support the new variable annotation syntax. (Contributed by Ivan Levkivskyi in bpo-27985.)\nNotable changes in Python 3.6.2\u00b6\nNew make regen-all\nbuild target\u00b6\nTo simplify cross-compilation, and to ensure that CPython can reliably be compiled without requiring an existing version of Python to already be available, the autotools-based build system no longer attempts to implicitly recompile generated files based on file modification times.\nInstead, a new make regen-all\ncommand has been added to force regeneration\nof these files when desired (e.g. 
after an initial version of Python has\nalready been built based on the pregenerated versions).\nMore selective regeneration targets are also defined - see Makefile.pre.in for details.\n(Contributed by Victor Stinner in bpo-23404.)\nAdded in version 3.6.2.\nRemoval of make touch\nbuild target\u00b6\nThe make touch\nbuild target previously used to request implicit regeneration\nof generated files by updating their modification times has been removed.\nIt has been replaced by the new make regen-all\ntarget.\n(Contributed by Victor Stinner in bpo-23404.)\nChanged in version 3.6.2.\nNotable changes in Python 3.6.4\u00b6\nThe PyExc_RecursionErrorInst\nsingleton that was part of the public API\nhas been removed as its members being never cleared may cause a segfault\nduring finalization of the interpreter.\n(Contributed by Xavier de Gaye in bpo-22898 and bpo-30697.)\nNotable changes in Python 3.6.5\u00b6\nThe locale.localeconv()\nfunction now sets temporarily the LC_CTYPE\nlocale to the LC_NUMERIC\nlocale in some cases.\n(Contributed by Victor Stinner in bpo-31900.)\nNotable changes in Python 3.6.7\u00b6\nxml.dom.minidom\nand xml.sax\nmodules no longer process\nexternal entities by default. See also gh-61441.\nIn 3.6.7 the tokenize\nmodule now implicitly emits a NEWLINE\ntoken\nwhen provided with input that does not have a trailing new line. This behavior\nnow matches what the C tokenizer does internally.\n(Contributed by Ammar Askar in bpo-33899.)\nNotable changes in Python 3.6.10\u00b6\nDue to significant security concerns, the reuse_address parameter of\nasyncio.loop.create_datagram_endpoint()\nis no longer supported. This is\nbecause of the behavior of the socket option SO_REUSEADDR\nin UDP. 
For more\ndetails, see the documentation for loop.create_datagram_endpoint()\n.\n(Contributed by Kyle Stanley, Antoine Pitrou, and Yury Selivanov in\nbpo-37228.)\nNotable changes in Python 3.6.13\u00b6\nEarlier Python versions allowed using both ;\nand &\nas\nquery parameter separators in urllib.parse.parse_qs()\nand\nurllib.parse.parse_qsl()\n. Due to security concerns, and to conform with\nnewer W3C recommendations, this has been changed to allow only a single\nseparator key, with &\nas the default. This change also affects\ncgi.parse()\nand cgi.parse_multipart()\nas they use the affected\nfunctions internally. For more details, please see their respective\ndocumentation.\n(Contributed by Adam Goldschmidt, Senthil Kumaran and Ken Jin in bpo-42967.)\nNotable changes in Python 3.6.14\u00b6\nA security fix alters the ftplib.FTP\nbehavior to not trust the\nIPv4 address sent from the remote server when setting up a passive data\nchannel. We reuse the ftp server IP address instead. For unusual code\nrequiring the old behavior, set a trust_server_pasv_ipv4_address\nattribute on your FTP instance to True\n. (See gh-87451)\nThe presence of newline or tab characters in parts of a URL allows for some\nforms of attacks. Following the WHATWG specification that updates RFC 3986,\nASCII newline \\n\n, \\r\nand tab \\t\ncharacters are stripped from the\nURL by the parser urllib.parse()\npreventing such attacks. The removal\ncharacters are controlled by a new module level variable\nurllib.parse._UNSAFE_URL_BYTES_TO_REMOVE\n. 
(See gh-88048)", "code_snippets": [
", line ", ", in ", "\n File ", ", line ", ", in ", "\n File ", ", line ", ", in ", "\n", "\n", ": ", "\n", " ", "\n\n", " ", " ", " ", "\n", " ", " ", "\n", "\n\n", "\n ", " ", "\n\n", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", " ", " ", " ", "\n ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 18541} +{"url": "https://docs.python.org/3/howto/mro.html", "title": "The Python 2.3 Method Resolution Order", "content": "The Python 2.3 Method Resolution Order\u00b6\nNote\nThis is a historical document, provided as an appendix to the official documentation. The Method Resolution Order discussed here was introduced in Python 2.3, but it is still used in later versions \u2013 including Python 3.\n- Abstract:\nThis document is intended for Python programmers who want to understand the C3 Method Resolution Order used in Python 2.3. Although it is not intended for newbies, it is quite pedagogical with many worked out examples. I am not aware of other publicly available documents with the same scope, therefore it should be useful.\nDisclaimer:\nI donate this document to the Python Software Foundation, under the Python 2.3 license. As usual in these circumstances, I warn the reader that what follows should be correct, but I don\u2019t give any warranty. Use it at your own risk and peril!\nAcknowledgments:\nAll the people of the Python mailing list who sent me their support. Paul Foley who pointed out various imprecisions and made me to add the part on local precedence ordering. David Goodger for help with the formatting in reStructuredText. David Mertz for help with the editing. 
Finally, Guido van Rossum who enthusiastically added this document to the official Python 2.3 home-page.\nThe beginning\u00b6\nFelix qui potuit rerum cognoscere causas \u2013 Virgilius\nEverything started with a post by Samuele Pedroni to the Python development mailing list [1]. In his post, Samuele showed that the Python 2.2 method resolution order is not monotonic and he proposed to replace it with the C3 method resolution order. Guido agreed with his arguments and therefore now Python 2.3 uses C3. The C3 method itself has nothing to do with Python, since it was invented by people working on Dylan and it is described in a paper intended for lispers [2]. The present paper gives a (hopefully) readable discussion of the C3 algorithm for Pythonistas who want to understand the reasons for the change.\nFirst of all, let me point out that what I am going to say only applies to the new style classes introduced in Python 2.2: classic classes maintain their old method resolution order, depth first and then left to right. Therefore, there is no breaking of old code for classic classes; and even if in principle there could be breaking of code for Python 2.2 new style classes, in practice the cases in which the C3 resolution order differs from the Python 2.2 method resolution order are so rare that no real breaking of code is expected. Therefore:\nDon\u2019t be scared!\nMoreover, unless you make strong use of multiple inheritance and you have non-trivial hierarchies, you don\u2019t need to understand the C3 algorithm, and you can easily skip this paper. On the other hand, if you really want to know how multiple inheritance works, then this paper is for you. The good news is that things are not as complicated as you might expect.\nLet me begin with some basic definitions.\nGiven a class C in a complicated multiple inheritance hierarchy, it is a non-trivial task to specify the order in which methods are overridden, i.e. 
to specify the order of the ancestors of C.\nThe list of the ancestors of a class C, including the class itself, ordered from the nearest ancestor to the furthest, is called the class precedence list or the linearization of C.\nThe Method Resolution Order (MRO) is the set of rules that construct the linearization. In the Python literature, the idiom \u201cthe MRO of C\u201d is also used as a synonym for the linearization of the class C.\nFor instance, in the case of a single inheritance hierarchy, if C is a subclass of C1, and C1 is a subclass of C2, then the linearization of C is simply the list [C, C1, C2]. However, with multiple inheritance hierarchies, the construction of the linearization is more cumbersome, since it is more difficult to construct a linearization that respects local precedence ordering and monotonicity.\nI will discuss the local precedence ordering later, but I can give the definition of monotonicity here. An MRO is monotonic when the following is true: if C1 precedes C2 in the linearization of C, then C1 precedes C2 in the linearization of any subclass of C. Otherwise, the innocuous operation of deriving a new class could change the resolution order of methods, potentially introducing very subtle bugs. Examples where this happens will be shown later.\nNot all classes admit a linearization. There are cases, in complicated hierarchies, where it is not possible to derive a class such that its linearization respects all the desired properties.\nHere I give an example of this situation.
Consider the hierarchy\n>>> O = object\n>>> class X(O): pass\n>>> class Y(O): pass\n>>> class A(X,Y): pass\n>>> class B(Y,X): pass\nwhich can be represented with the following inheritance graph, where I\nhave denoted with O the object\nclass, which is the beginning of any\nhierarchy for new style classes:\n -----------\n|           |\n|    O      |\n|  /   \\    |\n - X    Y  /\n   |  / | /\n   | /  |/\n   A    B\n   \\   /\n     ?\nIn this case, it is not possible to derive a new class C from A and B, since X precedes Y in A, but Y precedes X in B, therefore the method resolution order would be ambiguous in C.\nPython 2.3 raises an exception in this situation (TypeError: MRO conflict among bases Y, X) forbidding the naive programmer from creating ambiguous hierarchies. Python 2.2 instead does not raise an exception, but chooses an ad hoc ordering (CABXYO in this case).\nThe C3 Method Resolution Order\u00b6\nLet me introduce a few simple notations which will be useful for the following discussion. I will use the shortcut notation:\nC1 C2 ... CN\nto indicate the list of classes [C1, C2, \u2026, CN].\nThe head of the list is its first element:\nhead = C1\nwhereas the tail is the rest of the list:\ntail = C2 ... CN.\nI shall also use the notation:\nC + (C1 C2 ... CN) = C C1 C2 ... CN\nto denote the sum of the lists [C] + [C1, C2, \u2026, CN].\nNow I can explain how the MRO works in Python 2.3.\nConsider a class C in a multiple inheritance hierarchy, with C inheriting from the base classes B1, B2, \u2026, BN. We want to compute the linearization L[C] of the class C. The rule is the following:\nthe linearization of C is the sum of C plus the merge of the linearizations of the parents and the list of the parents.\nIn symbolic notation:\nL[C(B1 ... BN)] = C + merge(L[B1] ... L[BN], B1 ...
BN)\nIn particular, if C is the object\nclass, which has no parents, the\nlinearization is trivial:\nL[object] = object.\nHowever, in general one has to compute the merge according to the following prescription:\ntake the head of the first list, i.e. L[B1][0]; if this head is not in the tail of any of the other lists, then add it to the linearization of C and remove it from the lists in the merge, otherwise look at the head of the next list and take it, if it is a good head. Then repeat the operation until all the classes are removed or it is impossible to find good heads. In the latter case, it is impossible to construct the merge; Python 2.3 will refuse to create the class C and will raise an exception.\nThis prescription ensures that the merge operation preserves the ordering, if the ordering can be preserved. On the other hand, if the order cannot be preserved (as in the example of serious order disagreement discussed above) then the merge cannot be computed.\nThe computation of the merge is trivial if C has only one parent (single inheritance); in this case:\nL[C(B)] = C + merge(L[B],B) = C + L[B]\nHowever, in the case of multiple inheritance things are more cumbersome and I don\u2019t expect you can understand the rule without a couple of examples ;-)\nExamples\u00b6\nFirst example.
Consider the following hierarchy:\n>>> O = object\n>>> class F(O): pass\n>>> class E(O): pass\n>>> class D(O): pass\n>>> class C(D,F): pass\n>>> class B(D,E): pass\n>>> class A(B,C): pass\nIn this case the inheritance graph can be drawn as:\n                            6\n                           ---\nLevel 3                   | O |                  (more general)\n                        /  ---  \\\n                       /    |    \\                      |\n                      /     |     \\                     |\n                     /      |      \\                    |\n                   ---     ---    ---                   |\nLevel 2         3 | D | 4 | E |  | F | 5                |\n                   ---     ---    ---                   |\n                    \\       \\ _ /  |                    |\n                     \\       / \\ _ |                    |\n                      \\     /     \\|                    |\n                       ---       ---                    |\nLevel 1             1 | B |     | C | 2                 |\n                       ---       ---                    |\n                         \\       /                      |\n                          \\     /                      \\ /\n                            ---\nLevel 0                  0 | A |                (more specialized)\n                            ---\nThe linearizations of O, D, E and F are trivial:\nL[O] = O\nL[D] = D O\nL[E] = E O\nL[F] = F O\nThe linearization of B can be computed as:\nL[B] = B + merge(DO, EO, DE)\nWe see that D is a good head, therefore we take it and we are reduced to\ncompute merge(O,EO,E)\n. Now O is not a good head, since it is in the\ntail of the sequence EO. In this case the rule says that we have to\nskip to the next sequence. Then we see that E is a good head; we take\nit and we are reduced to compute merge(O,O)\nwhich gives O. Therefore:\nL[B] = B D E O\nUsing the same procedure one finds:\nL[C] = C + merge(DO,FO,DF)\n= C + D + merge(O,FO,F)\n= C + D + F + merge(O,O)\n= C D F O\nNow we can compute:\nL[A] = A + merge(BDEO,CDFO,BC)\n= A + B + merge(DEO,CDFO,C)\n= A + B + C + merge(DEO,DFO)\n= A + B + C + D + merge(EO,FO)\n= A + B + C + D + E + merge(O,FO)\n= A + B + C + D + E + F + merge(O,O)\n= A B C D E F O\nIn this example, the linearization is ordered in a pretty nice way according to the inheritance level, in the sense that lower levels (i.e. more specialized classes) have higher precedence (see the inheritance graph).
However, this is not the general case.\nI leave as an exercise for the reader to compute the linearization for my second example:\n>>> O = object\n>>> class F(O): pass\n>>> class E(O): pass\n>>> class D(O): pass\n>>> class C(D,F): pass\n>>> class B(E,D): pass\n>>> class A(B,C): pass\nThe only difference with the previous example is the change B(D,E) \u2013> B(E,D); however even such a little modification completely changes the ordering of the hierarchy:\n                           6\n                          ---\nLevel 3                  | O |\n                       /  ---  \\\n                      /    |    \\\n                     /     |     \\\n                    /      |      \\\n                  ---     ---    ---\nLevel 2        2 | E | 4 | D |  | F | 5\n                  ---     ---    ---\n                   \\      /  \\     /\n                    \\    /    \\   /\n                     \\  /      \\ /\n                      ---      ---\nLevel 1            1 | B |    | C | 3\n                      ---      ---\n                        \\      /\n                         \\    /\n                           ---\nLevel 0                 0 | A |\n                           ---\nNotice that the class E, which is in the second level of the hierarchy, precedes the class C, which is in the first level of the hierarchy, i.e. E is more specialized than C, even if it is in a higher level.\nA lazy programmer can obtain the MRO directly from Python 2.2, since in\nthis case it coincides with the Python 2.3 linearization. It is enough\nto invoke the mro()\nmethod of class A:\n>>> A.mro()\n[<class '__main__.A'>, <class '__main__.B'>, <class '__main__.E'>,\n<class '__main__.C'>, <class '__main__.D'>, <class '__main__.F'>,\n<type 'object'>]\nFinally, let me consider the example discussed in the first section, involving a serious order disagreement. In this case, it is straightforward to compute the linearizations of O, X, Y, A and B:\nL[O] = O\nL[X] = X O\nL[Y] = Y O\nL[A] = A X Y O\nL[B] = B Y X O\nHowever, it is impossible to compute the linearization for a class C that inherits from A and B:\nL[C] = C + merge(AXYO, BYXO, AB)\n= C + A + merge(XYO, BYXO, B)\n= C + A + B + merge(XYO, YXO)\nAt this point we cannot merge the lists XYO and YXO, since X is in the tail of YXO whereas Y is in the tail of XYO: therefore there are no good heads and the C3 algorithm stops. Python 2.3 raises an error and refuses to create the class C.\nBad Method Resolution Orders\u00b6\nAn MRO is bad when it breaks such fundamental properties as local precedence ordering and monotonicity.
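The merge prescription and the worked linearizations above can be checked mechanically. Below is a minimal, illustrative sketch of the C3 rule in modern Python; the helper names `c3_merge` and `linearize` are invented for this example, and this is not CPython's actual implementation:

```python
# Illustrative sketch of the C3 linearization rule described in the text.
# `bases` maps each class name to its list of direct parents.

def c3_merge(seqs):
    """Merge sequences per the C3 prescription: repeatedly take the first
    head that appears in no tail; fail if no good head exists."""
    seqs = [list(s) for s in seqs]
    result = []
    while any(seqs):
        for seq in seqs:
            if not seq:
                continue
            head = seq[0]
            if not any(head in s[1:] for s in seqs):
                break  # good head: not in the tail of any list
        else:
            raise TypeError("inconsistent hierarchy, cannot linearize")
        result.append(head)
        for s in seqs:
            if s and s[0] == head:
                del s[0]
    return result

def linearize(cls, bases):
    """L[C] = C + merge(L[B1] ... L[BN], B1 ... BN)."""
    parents = bases[cls]
    if not parents:
        return [cls]
    return [cls] + c3_merge(
        [linearize(p, bases) for p in parents] + [list(parents)])

# The first example from the text: B(D,E), C(D,F), A(B,C).
bases = {"O": [], "D": ["O"], "E": ["O"], "F": ["O"],
         "C": ["D", "F"], "B": ["D", "E"], "A": ["B", "C"]}
print(linearize("A", bases))  # ['A', 'B', 'C', 'D', 'E', 'F', 'O']
```

Running `linearize` on the serious-order-disagreement hierarchy (A(X,Y) together with B(Y,X)) raises the `TypeError`, matching the behaviour the text describes for Python 2.3.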
In this section, I will show that both the MRO for classic classes and the MRO for new style classes in Python 2.2 are bad.\nIt is easier to start with the local precedence ordering. Consider the following example:\n>>> F=type('Food',(),{'remember2buy':'spam'})\n>>> E=type('Eggs',(F,),{'remember2buy':'eggs'})\n>>> G=type('GoodFood',(F,E),{}) # under Python 2.3 this is an error!\nwith inheritance diagram\n             O\n             |\n(buy spam)   F\n             | \\\n             | E   (buy eggs)\n             | /\n             G\n      (buy eggs or spam ?)\nWe see that class G inherits from F and E, with F before E: therefore we would expect the attribute G.remember2buy to be inherited from F.remember2buy and not from E.remember2buy; nevertheless Python 2.2 gives\n>>> G.remember2buy\n'eggs'\nThis is a breaking of local precedence ordering, since the order in the local precedence list, i.e. the list of the parents of G, is not preserved in the Python 2.2 linearization of G:\nL[G,P22] = G E F object # F *follows* E\nOne could argue that the reason why F follows E in the Python 2.2 linearization is that F is less specialized than E, since F is the superclass of E; nevertheless the breaking of local precedence ordering is quite non-intuitive and error prone. This is particularly true since it differs from the behavior of old style classes:\n>>> class F: remember2buy='spam'\n>>> class E(F): remember2buy='eggs'\n>>> class G(F,E): pass\n>>> G.remember2buy\n'spam'\nIn this case the MRO is GFEF and the local precedence ordering is preserved.\nAs a general rule, hierarchies such as the previous one should be avoided, since it is unclear if F should override E or vice-versa. Python 2.3 solves the ambiguity by raising an exception in the creation of class G, effectively stopping the programmer from generating ambiguous hierarchies. The reason for that is that the C3 algorithm fails when the merge:\nmerge(FO,EFO,FE)\ncannot be computed, because F is in the tail of EFO and E is in the tail of FE.\nThe real solution is to design a non-ambiguous hierarchy, i.e.
to derive G from E and F (the more specific first) and not from F and E; in this case the MRO is GEF without any doubt.\n           O\n           |\n           F (spam)\n          / |\n(eggs)   E  |\n          \\ |\n           G\n             (eggs, no doubt)\nPython 2.3 forces the programmer to write good hierarchies (or, at least, less error-prone ones).\nOn a related note, let me point out that the Python 2.3 algorithm is smart enough to recognize obvious mistakes, such as the duplication of classes in the list of parents:\n>>> class A(object): pass\n>>> class C(A,A): pass # error\nTraceback (most recent call last):\n  File \"<stdin>\", line 1, in ?\nTypeError: duplicate base class A\nPython 2.2 (both for classic classes and new style classes) would not raise any exception in this situation.\nFinally, I would like to point out two lessons we have learned from this example:\ndespite the name, the MRO determines the resolution order of attributes, not only of methods;\nthe default food for Pythonistas is spam ! (but you already knew that ;-)\nHaving discussed the issue of local precedence ordering, let me now consider the issue of monotonicity. My goal is to show that neither the MRO for classic classes nor that for Python 2.2 new style classes is monotonic.\nTo prove that the MRO for classic classes is non-monotonic is rather trivial; it is enough to look at the diamond diagram:\n   C\n  / \\\n /   \\\nA     B\n \\   /\n  \\ /\n   D\nOne easily discerns the inconsistency:\nL[B,P21] = B C # B precedes C : B's methods win\nL[D,P21] = D A C B C # B follows C : C's methods win!\nOn the other hand, there are no problems with the Python 2.2 and 2.3 MROs; they both give:\nL[D] = D A B C\nGuido points out in his essay [3] that the classic MRO is not so bad in\npractice, since one can typically avoid diamonds for classic classes.\nBut all new style classes inherit from object\n, therefore diamonds are\nunavoidable and inconsistencies show up in every multiple inheritance\ngraph.\nThe MRO of Python 2.2 makes breaking monotonicity difficult, but not impossible.
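Since every modern Python uses C3, the diamond just discussed can be checked directly in Python 3 (class names as in the diagram; `__mro__` holds the computed linearization):

```python
# The diamond from the text: A and B both derive from C; D derives from A and B.
class C: pass
class A(C): pass
class B(C): pass
class D(A, B): pass

# C3 gives the single, monotonic order D A B C (plus the implicit object root).
print([cls.__name__ for cls in D.__mro__])  # ['D', 'A', 'B', 'C', 'object']
```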
The following example, originally provided by Samuele Pedroni, shows that the MRO of Python 2.2 is non-monotonic:\n>>> class A(object): pass\n>>> class B(object): pass\n>>> class C(object): pass\n>>> class D(object): pass\n>>> class E(object): pass\n>>> class K1(A,B,C): pass\n>>> class K2(D,B,E): pass\n>>> class K3(D,A): pass\n>>> class Z(K1,K2,K3): pass\nHere are the linearizations according to the C3 MRO (the reader should verify these linearizations as an exercise and draw the inheritance diagram ;-)\nL[A] = A O\nL[B] = B O\nL[C] = C O\nL[D] = D O\nL[E] = E O\nL[K1]= K1 A B C O\nL[K2]= K2 D B E O\nL[K3]= K3 D A O\nL[Z] = Z K1 K2 K3 D A B C E O\nPython 2.2 gives exactly the same linearizations for A, B, C, D, E, K1, K2 and K3, but a different linearization for Z:\nL[Z,P22] = Z K1 K3 A K2 D B C E O\nIt is clear that this linearization is wrong, since A comes before D whereas in the linearization of K3 A comes after D. In other words, in K3 methods derived by D override methods derived by A, but in Z, which is still a subclass of K3, methods derived by A override methods derived by D! This is a violation of monotonicity. Moreover, the Python 2.2 linearization of Z is also inconsistent with local precedence ordering, since the local precedence list of the class Z is [K1, K2, K3] (K2 precedes K3), whereas in the linearization of Z, K2 follows K3. These problems explain why the 2.2 rule has been dismissed in favor of the C3 rule.\nThe end\u00b6\nThis section is for the impatient reader, who skipped all the previous sections and jumped immediately to the end. This section is for the lazy programmer too, who didn\u2019t want to exercise her/his brain.
Finally, it is for the programmer with some hubris, otherwise s/he would not be reading a paper on the C3 method resolution order in multiple inheritance hierarchies ;-) These three virtues taken all together (and not separately) deserve a prize: the prize is a short Python 2.2 script that allows you to compute the 2.3 MRO without risk to your brain. Simply change the last line to play with the various examples I have discussed in this paper:\n#\n\"\"\"C3 algorithm by Samuele Pedroni (with readability enhanced by me).\"\"\"\n\nclass __metaclass__(type):\n    \"All classes are metamagically modified to be nicely printed\"\n    __repr__ = lambda cls: cls.__name__\n\nclass ex_2:\n    \"Serious order disagreement\" #From Guido\n    class O: pass\n    class X(O): pass\n    class Y(O): pass\n    class A(X,Y): pass\n    class B(Y,X): pass\n    try:\n        class Z(A,B): pass #creates Z(A,B) in Python 2.2\n    except TypeError:\n        pass # Z(A,B) cannot be created in Python 2.3\n\nclass ex_5:\n    \"My first example\"\n    class O: pass\n    class F(O): pass\n    class E(O): pass\n    class D(O): pass\n    class C(D,F): pass\n    class B(D,E): pass\n    class A(B,C): pass\n\nclass ex_6:\n    \"My second example\"\n    class O: pass\n    class F(O): pass\n    class E(O): pass\n    class D(O): pass\n    class C(D,F): pass\n    class B(E,D): pass\n    class A(B,C): pass\n\nclass ex_9:\n    \"Difference between Python 2.2 MRO and C3\" #From Samuele\n    class O: pass\n    class A(O): pass\n    class B(O): pass\n    class C(O): pass\n    class D(O): pass\n    class E(O): pass\n    class K1(A,B,C): pass\n    class K2(D,B,E): pass\n    class K3(D,A): pass\n    class Z(K1,K2,K3): pass\n\ndef merge(seqs):\n    print '\\n\\nCPL[%s]=%s' % (seqs[0][0],seqs),\n    res = []; i=0\n    while 1:\n        nonemptyseqs=[seq for seq in seqs if seq]\n        if not nonemptyseqs: return res\n        i+=1; print '\\n',i,'round: candidates...',\n        for seq in nonemptyseqs: # find merge candidates among seq heads\n            cand = seq[0]; print ' ',cand,\n            nothead=[s for s in nonemptyseqs if cand in s[1:]]\n            if nothead: cand=None #reject candidate\n            else: break\n        if not cand: raise \"Inconsistent hierarchy\"\n        res.append(cand)\n        for seq in nonemptyseqs: # remove cand\n            if seq[0] == cand: del seq[0]\n\ndef mro(C):\n    \"Compute the class precedence list (mro) according to C3\"\n    return merge([[C]]+map(mro,C.__bases__)+[list(C.__bases__)])\n\ndef print_mro(C):\n    print '\\nMRO[%s]=%s' % (C,mro(C))\n    print '\\nP22 MRO[%s]=%s' % (C,C.mro())\n\nprint_mro(ex_9.Z)\n#\nThat\u2019s all folks,\nenjoy !", "code_snippets": [
"\n ", " ", "\n\n", "\n ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n\n", "\n ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n\n", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n\n", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", "\n ", " ", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", "\n ", " ", " ", " ", " ", "\n ", "\n ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", "\n\n", "\n ", "\n ", " ", "\n\n", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n\n", "\n\n", "\n"], "language": "Python", "source": "python.org", "token_count": 4731} +{"url": "https://docs.python.org/3/reference/datamodel.html", "title": "Data model", "content": "3. Data model\u00b6\n3.1. Objects, values and types\u00b6\nObjects are Python\u2019s abstraction for data. All data in a Python program is represented by objects or by relations between objects. Even code is represented by objects.\nEvery object has an identity, a type and a value. An object\u2019s identity never\nchanges once it has been created; you may think of it as the object\u2019s address in\nmemory. The is\noperator compares the identity of two objects; the\nid()\nfunction returns an integer representing its identity.\nCPython implementation detail: For CPython, id(x)\nis the memory address where x\nis stored.\nAn object\u2019s type determines the operations that the object supports (e.g., \u201cdoes\nit have a length?\u201d) and also defines the possible values for objects of that\ntype. The type()\nfunction returns an object\u2019s type (which is an object\nitself). Like its identity, an object\u2019s type is also unchangeable.\n[1]\nThe value of some objects can change. 
Objects whose value can change are said to be mutable; objects whose value is unchangeable once they are created are called immutable. (The value of an immutable container object that contains a reference to a mutable object can change when the latter\u2019s value is changed; however the container is still considered immutable, because the collection of objects it contains cannot be changed. So, immutability is not strictly the same as having an unchangeable value, it is more subtle.) An object\u2019s mutability is determined by its type; for instance, numbers, strings and tuples are immutable, while dictionaries and lists are mutable.\nObjects are never explicitly destroyed; however, when they become unreachable they may be garbage-collected. An implementation is allowed to postpone garbage collection or omit it altogether \u2014 it is a matter of implementation quality how garbage collection is implemented, as long as no objects are collected that are still reachable.\nCPython implementation detail: CPython currently uses a reference-counting scheme with (optional) delayed\ndetection of cyclically linked garbage, which collects most objects as soon\nas they become unreachable, but is not guaranteed to collect garbage\ncontaining circular references. See the documentation of the gc\nmodule for information on controlling the collection of cyclic garbage.\nOther implementations act differently and CPython may change.\nDo not depend on immediate finalization of objects when they become\nunreachable (so you should always close files explicitly).\nNote that the use of the implementation\u2019s tracing or debugging facilities may\nkeep objects alive that would normally be collectable. Also note that catching\nan exception with a try\n\u2026except\nstatement may keep\nobjects alive.\nSome objects contain references to \u201cexternal\u201d resources such as open files or\nwindows. 
It is understood that these resources are freed when the object is\ngarbage-collected, but since garbage collection is not guaranteed to happen,\nsuch objects also provide an explicit way to release the external resource,\nusually a close()\nmethod. Programs are strongly recommended to explicitly\nclose such objects. The try\n\u2026finally\nstatement\nand the with\nstatement provide convenient ways to do this.\nSome objects contain references to other objects; these are called containers. Examples of containers are tuples, lists and dictionaries. The references are part of a container\u2019s value. In most cases, when we talk about the value of a container, we imply the values, not the identities of the contained objects; however, when we talk about the mutability of a container, only the identities of the immediately contained objects are implied. So, if an immutable container (like a tuple) contains a reference to a mutable object, its value changes if that mutable object is changed.\nTypes affect almost all aspects of object behavior. Even the importance of\nobject identity is affected in some sense: for immutable types, operations that\ncompute new values may actually return a reference to any existing object with\nthe same type and value, while for mutable objects this is not allowed.\nFor example, after a = 1; b = 1\n, a and b may or may not refer to\nthe same object with the value one, depending on the implementation.\nThis is because int\nis an immutable type, so the reference to 1\ncan be reused. This behaviour depends on the implementation used, so should\nnot be relied upon, but is something to be aware of when making use of object\nidentity tests.\nHowever, after c = []; d = []\n, c and d are guaranteed to refer to two\ndifferent, unique, newly created empty lists. (Note that e = f = []\nassigns\nthe same object to both e and f.)\n3.2. The standard type hierarchy\u00b6\nBelow is a list of the types that are built into Python. 
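The identity rules described above (`a = 1; b = 1` versus `c = []; d = []`) can be observed interactively; note that identity of equal small integers is a CPython caching detail, not a language guarantee:

```python
# Equality compares values; `is` compares identities.
c = []
d = []
e = f = []

print(c == d)  # True: equal values
print(c is d)  # False: two distinct, newly created lists
print(e is f)  # True: one object bound to two names

a = 1
b = 1
print(a == b)  # True, always
# `a is b` happens to be True in CPython (small-int caching),
# but that is an implementation detail and must not be relied upon.
```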
Extension modules (written in C, Java, or other languages, depending on the implementation) can define additional types. Future versions of Python may add types to the type hierarchy (e.g., rational numbers, efficiently stored arrays of integers, etc.), although such additions will often be provided via the standard library instead.\nSome of the type descriptions below contain a paragraph listing \u2018special attributes.\u2019 These are attributes that provide access to the implementation and are not intended for general use. Their definition may change in the future.\n3.2.1. None\u00b6\nThis type has a single value. There is a single object with this value. This\nobject is accessed through the built-in name None\n. It is used to signify the\nabsence of a value in many situations, e.g., it is returned from functions that\ndon\u2019t explicitly return anything. Its truth value is false.\n3.2.2. NotImplemented\u00b6\nThis type has a single value. There is a single object with this value. This\nobject is accessed through the built-in name NotImplemented\n. Numeric methods\nand rich comparison methods should return this value if they do not implement the\noperation for the operands provided. (The interpreter will then try the\nreflected operation, or some other fallback, depending on the operator.) It\nshould not be evaluated in a boolean context.\nSee Implementing the arithmetic operations for more details.\nChanged in version 3.9: Evaluating NotImplemented\nin a boolean context was deprecated.\nChanged in version 3.14: Evaluating NotImplemented\nin a boolean context now raises a TypeError\n.\nIt previously evaluated to True\nand emitted a DeprecationWarning\nsince Python 3.9.\n3.2.3. Ellipsis\u00b6\nThis type has a single value. There is a single object with this value. This\nobject is accessed through the literal ...\nor the built-in name\nEllipsis\n. Its truth value is true.\n3.2.4. 
numbers.Number\n\u00b6\nThese are created by numeric literals and returned as results by arithmetic operators and arithmetic built-in functions. Numeric objects are immutable; once created their value never changes. Python numbers are of course strongly related to mathematical numbers, but subject to the limitations of numerical representation in computers.\nThe string representations of the numeric classes, computed by\n__repr__()\nand __str__()\n, have the following\nproperties:\nThey are valid numeric literals which, when passed to their class constructor, produce an object having the value of the original numeric.\nThe representation is in base 10, when possible.\nLeading zeros, possibly excepting a single zero before a decimal point, are not shown.\nTrailing zeros, possibly excepting a single zero after a decimal point, are not shown.\nA sign is shown only when the number is negative.\nPython distinguishes between integers, floating-point numbers, and complex numbers:\n3.2.4.1. numbers.Integral\n\u00b6\nThese represent elements from the mathematical set of integers (positive and negative).\nNote\nThe rules for integer representation are intended to give the most meaningful interpretation of shift and mask operations involving negative integers.\nThere are two types of integers:\n- Integers (\nint\n) These represent numbers in an unlimited range, subject to available (virtual) memory only. For the purpose of shift and mask operations, a binary representation is assumed, and negative numbers are represented in a variant of 2\u2019s complement which gives the illusion of an infinite string of sign bits extending to the left.\n- Booleans (\nbool\n) These represent the truth values False and True. The two objects representing the values\nFalse\nandTrue\nare the only Boolean objects. 
The Boolean type is a subtype of the integer type, and Boolean values behave like the values 0 and 1, respectively, in almost all contexts, the exception being that when converted to a string, the strings "False" or "True" are returned, respectively.

3.2.4.2. numbers.Real (float)¶
These represent machine-level double precision floating-point numbers. You are at the mercy of the underlying machine architecture (and C or Java implementation) for the accepted range and handling of overflow. Python does not support single-precision floating-point numbers; the savings in processor and memory usage that are usually the reason for using these are dwarfed by the overhead of using objects in Python, so there is no reason to complicate the language with two kinds of floating-point numbers.

3.2.4.3. numbers.Complex (complex)¶
These represent complex numbers as a pair of machine-level double precision floating-point numbers. The same caveats apply as for floating-point numbers. The real and imaginary parts of a complex number z can be retrieved through the read-only attributes z.real and z.imag.

3.2.5. Sequences¶
These represent finite ordered sets indexed by non-negative numbers. The built-in function len() returns the number of items of a sequence. When the length of a sequence is n, the index set contains the numbers 0, 1, …, n-1. Item i of sequence a is selected by a[i]. Some sequences, including built-in sequences, interpret negative subscripts by adding the sequence length. For example, a[-2] equals a[n-2], the second to last item of sequence a with length n.
The resulting value must be a nonnegative integer less than the number of items in the sequence. If it is not, an IndexError is raised.
Sequences also support slicing: a[start:stop] selects all items with index k such that start <= k < stop. When used as an expression, a slice is a sequence of the same type. 
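A short illustrative sketch of these indexing and slicing rules (the list a is just an example value, not part of the reference text):

```python
a = [10, 20, 30, 40, 50]        # a sequence with length n = 5

# Negative subscripts add the sequence length: a[-2] == a[5 - 2]
assert a[-2] == a[3] == 40

# a[start:stop] selects items with start <= k < stop,
# and the slice is a sequence of the same type.
assert a[1:3] == [20, 30]
assert type(a[1:3]) is list

# A missing start behaves as 0; a missing stop as len(a).
assert a[:2] == [10, 20]
assert a[2:] == [30, 40, 50]
```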
The comment above about negative subscripts also applies to negative slice positions. Note that no error is raised if a slice position is less than zero or larger than the length of the sequence. If start is missing or None, slicing behaves as if start was zero. If stop is missing or None, slicing behaves as if stop was equal to the length of the sequence.
Some sequences also support "extended slicing" with a third "step" parameter: a[i:j:k] selects all items of a with index x where x = i + n*k, n >= 0 and i <= x < j.
Sequences are distinguished according to their mutability:

3.2.5.1. Immutable sequences¶
An object of an immutable sequence type cannot change once it is created. (If the object contains references to other objects, these other objects may be mutable and may be changed; however, the collection of objects directly referenced by an immutable object cannot change.)
The following types are immutable sequences:
- Strings: A string (str) is a sequence of values that represent characters, or more formally, Unicode code points. All the code points in the range 0 to 0x10FFFF can be represented in a string. Python doesn't have a dedicated character type. Instead, every code point in the string is represented as a string object with length 1. The built-in function ord() converts a code point from its string form to an integer in the range 0 to 0x10FFFF; chr() converts an integer in the range 0 to 0x10FFFF to the corresponding length 1 string object. str.encode() can be used to convert a str to bytes using the given text encoding, and bytes.decode() can be used to achieve the opposite.
- Tuples: The items of a tuple are arbitrary Python objects. Tuples of two or more items are formed by comma-separated lists of expressions. 
A tuple of one item (a 'singleton') can be formed by affixing a comma to an expression (an expression by itself does not create a tuple, since parentheses must be usable for grouping of expressions). An empty tuple can be formed by an empty pair of parentheses.
- Bytes: A bytes object is an immutable array. The items are 8-bit bytes, represented by integers in the range 0 <= x < 256. Bytes literals (like b'abc') and the built-in bytes() constructor can be used to create bytes objects. Also, bytes objects can be decoded to strings via the decode() method.

3.2.5.2. Mutable sequences¶
Mutable sequences can be changed after they are created. The subscription and slicing notations can be used as the target of assignment and del (delete) statements.
Note: The collections and array modules provide additional examples of mutable sequence types.
There are currently two intrinsic mutable sequence types:
- Lists: The items of a list are arbitrary Python objects. Lists are formed by placing a comma-separated list of expressions in square brackets. (Note that there are no special cases needed to form lists of length 0 or 1.)
- Byte Arrays: A bytearray object is a mutable array. They are created by the built-in bytearray() constructor. Aside from being mutable (and hence unhashable), byte arrays otherwise provide the same interface and functionality as immutable bytes objects.

3.2.6. Set types¶
These represent unordered, finite sets of unique, immutable objects. As such, they cannot be indexed by any subscript. However, they can be iterated over, and the built-in function len() returns the number of items in a set. Common uses for sets are fast membership testing, removing duplicates from a sequence, and computing mathematical operations such as intersection, union, difference, and symmetric difference.
For set elements, the same immutability rules apply as for dictionary keys. 
Note that numeric types obey the normal rules for numeric comparison: if two numbers compare equal (e.g., 1 and 1.0), only one of them can be contained in a set.
There are currently two intrinsic set types:
- Sets: These represent a mutable set. They are created by the built-in set() constructor and can be modified afterwards by several methods, such as add().
- Frozen sets: These represent an immutable set. They are created by the built-in frozenset() constructor. As a frozenset is immutable and hashable, it can be used again as an element of another set, or as a dictionary key.

3.2.7. Mappings¶
These represent finite sets of objects indexed by arbitrary index sets. The subscript notation a[k] selects the item indexed by k from the mapping a; this can be used in expressions and as the target of assignments or del statements. The built-in function len() returns the number of items in a mapping.
There is currently a single intrinsic mapping type:

3.2.7.1. Dictionaries¶
These represent finite sets of objects indexed by nearly arbitrary values. The only types of values not acceptable as keys are values containing lists or dictionaries or other mutable types that are compared by value rather than by object identity, the reason being that the efficient implementation of dictionaries requires a key's hash value to remain constant. Numeric types used for keys obey the normal rules for numeric comparison: if two numbers compare equal (e.g., 1 and 1.0) then they can be used interchangeably to index the same dictionary entry.
Dictionaries preserve insertion order, meaning that keys will be produced in the same order they were added sequentially over the dictionary. 
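A small illustrative sketch of these dictionary rules (the keys used are arbitrary examples):

```python
d = {}
d["b"] = 1
d["a"] = 2

# Iteration yields keys in insertion order, not sorted order.
assert list(d) == ["b", "a"]

# Numbers that compare equal (1 and 1.0) index the same entry.
e = {1: "int"}
e[1.0] = "float"        # replaces the value stored under the key 1
assert len(e) == 1
assert e[1] == "float"
```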
Replacing an existing key does not change the order; however, removing a key and re-inserting it will add it to the end instead of keeping its old place.
Dictionaries are mutable; they can be created by the {} notation (see section Dictionary displays).
The extension modules dbm.ndbm and dbm.gnu provide additional examples of mapping types, as does the collections module.
Changed in version 3.7: Dictionaries did not preserve insertion order in versions of Python before 3.6. In CPython 3.6, insertion order was preserved, but it was considered an implementation detail at that time rather than a language guarantee.

3.2.8. Callable types¶
These are the types to which the function call operation (see section Calls) can be applied:

3.2.8.1. User-defined functions¶
A user-defined function object is created by a function definition (see section Function definitions). It should be called with an argument list containing the same number of items as the function's formal parameter list.

3.2.8.1.1. Special read-only attributes¶

Attribute | Meaning
|---|---|
| function.__globals__ | A reference to the dictionary that holds the function's global variables (the global namespace of the module in which the function was defined). |
| function.__builtins__ | A reference to the dictionary that holds the function's builtins. Added in version 3.10. |
| function.__closure__ | None or a tuple of cells that contain bindings for the function's free variables. A cell object has the attribute cell_contents, which can be used to get, as well as set, the value of the cell. |

3.2.8.1.2. Special writable attributes¶
Most of these attributes check the type of the assigned value:

Attribute | Meaning
|---|---|
| function.__doc__ | The function's documentation string, or None if unavailable. |
| function.__name__ | The function's name. |
| function.__qualname__ | The function's qualified name. Added in version 3.3. |
| function.__module__ | The name of the module the function was defined in, or None if unavailable. |
| function.__defaults__ | A tuple containing default parameter values for those parameters that have defaults, or None if no parameters have a default value. |
| function.__code__ | The code object representing the compiled function body. |
| function.__dict__ | The namespace supporting arbitrary function attributes. |
| function.__annotations__ | A dictionary containing annotations of parameters and the return annotation. For best practices on working with __annotations__, see annotationlib. Changed in version 3.14: Annotations are now lazily evaluated. See PEP 649. |
| function.__annotate__ | The annotate function for this function, or None if the function has no annotations. Added in version 3.14. |
| function.__kwdefaults__ | A dictionary containing defaults for keyword-only parameters. |
| function.__type_params__ | A tuple containing the type parameters of a generic function. Added in version 3.12. 
|
Function objects also support getting and setting arbitrary attributes, which can be used, for example, to attach metadata to functions. Regular attribute dot-notation is used to get and set such attributes.
CPython implementation detail: CPython's current implementation only supports function attributes on user-defined functions. Function attributes on built-in functions may be supported in the future.
Additional information about a function's definition can be retrieved from its code object (accessible via the __code__ attribute).

3.2.8.2. Instance methods¶
An instance method object combines a class, a class instance and any callable object (normally a user-defined function).
Special read-only attributes:

| method.__self__ | Refers to the class instance object to which the method is bound |
| method.__func__ | Refers to the original function object |
| method.__doc__ | The method's documentation (same as __func__.__doc__) |
| method.__name__ | The name of the method (same as __func__.__name__) |
| method.__module__ | The name of the module the method was defined in, or None if unavailable |

Methods also support accessing (but not setting) the arbitrary function attributes on the underlying function object.
User-defined method objects may be created when getting an attribute of a class (perhaps via an instance of that class), if that attribute is a user-defined function object or a classmethod object.
When an instance method object is created by retrieving a user-defined function object from a class via one of its instances, its __self__ attribute is the instance, and the method object is said to be bound. 
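This binding can be observed directly; a minimal sketch (the class C and method f are made-up names):

```python
class C:
    def f(self, x):
        return x + 1

c = C()
m = c.f                     # retrieving f from an instance creates a bound method

assert m.__self__ is c      # the instance the method is bound to
assert m.__func__ is C.f    # the original, underlying function object
assert m(1) == C.f(c, 1) == 2
```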
The new method's __func__ attribute is the original function object.
When an instance method object is created by retrieving a classmethod object from a class or instance, its __self__ attribute is the class itself, and its __func__ attribute is the function object underlying the class method.
When an instance method object is called, the underlying function (__func__) is called, inserting the class instance (__self__) in front of the argument list. For instance, when C is a class which contains a definition for a function f(), and x is an instance of C, calling x.f(1) is equivalent to calling C.f(x, 1).
When an instance method object is derived from a classmethod object, the "class instance" stored in __self__ will actually be the class itself, so that calling either x.f(1) or C.f(1) is equivalent to calling f(C, 1) where f is the underlying function.
It is important to note that user-defined functions which are attributes of a class instance are not converted to bound methods; this only happens when the function is an attribute of the class.

3.2.8.3. Generator functions¶
A function or method which uses the yield statement (see section The yield statement) is called a generator function. Such a function, when called, always returns an iterator object which can be used to execute the body of the function: calling the iterator's iterator.__next__() method will cause the function to execute until it provides a value using the yield statement. When the function executes a return statement or falls off the end, a StopIteration exception is raised and the iterator will have reached the end of the set of values to be returned.

3.2.8.4. Coroutine functions¶
A function or method which is defined using async def is called a coroutine function. Such a function, when called, returns a coroutine object. It may contain await expressions, as well as async with and async for statements. 
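A minimal sketch of a coroutine function (the greet function is a made-up example):

```python
import asyncio

async def greet(name):
    await asyncio.sleep(0)          # an await expression
    return f"hello, {name}"

coro = greet("world")               # calling it returns a coroutine object, not a result
assert type(coro).__name__ == "coroutine"
assert asyncio.run(coro) == "hello, world"   # running the coroutine produces the value
```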
See also the Coroutine Objects section.

3.2.8.5. Asynchronous generator functions¶
A function or method which is defined using async def and which uses the yield statement is called an asynchronous generator function. Such a function, when called, returns an asynchronous iterator object which can be used in an async for statement to execute the body of the function. Calling the asynchronous iterator's aiterator.__anext__() method will return an awaitable which when awaited will execute until it provides a value using the yield expression. When the function executes an empty return statement or falls off the end, a StopAsyncIteration exception is raised and the asynchronous iterator will have reached the end of the set of values to be yielded.

3.2.8.6. Built-in functions¶
A built-in function object is a wrapper around a C function. Examples of built-in functions are len() and math.sin() (math is a standard built-in module). The number and type of the arguments are determined by the C function. Special read-only attributes:
- __doc__ is the function's documentation string, or None if unavailable. See function.__doc__.
- __name__ is the function's name. See function.__name__.
- __self__ is set to None (but see the next item).
- __module__ is the name of the module the function was defined in, or None if unavailable. See function.__module__.

3.2.8.7. Built-in methods¶
This is really a different disguise of a built-in function, this time containing an object passed to the C function as an implicit extra argument. An example of a built-in method is alist.append(), assuming alist is a list object. In this case, the special read-only attribute __self__ is set to the object denoted by alist. (The attribute has the same semantics as it does with other instance methods.)

3.2.8.8. Classes¶
Classes are callable. 
These objects normally act as factories for new instances of themselves, but variations are possible for class types that override __new__(). The arguments of the call are passed to __new__() and, in the typical case, to __init__() to initialize the new instance.

3.2.8.9. Class Instances¶
Instances of arbitrary classes can be made callable by defining a __call__() method in their class.

3.2.9. Modules¶
Modules are a basic organizational unit of Python code, and are created by the import system as invoked either by the import statement, or by calling functions such as importlib.import_module() and built-in __import__(). A module object has a namespace implemented by a dictionary object (this is the dictionary referenced by the __globals__ attribute of functions defined in the module). Attribute references are translated to lookups in this dictionary, e.g., m.x is equivalent to m.__dict__["x"]. A module object does not contain the code object used to initialize the module (since it isn't needed once the initialization is done).
Attribute assignment updates the module's namespace dictionary, e.g., m.x = 1 is equivalent to m.__dict__["x"] = 1.

3.2.9.2. Other writable attributes on module objects¶
As well as the import-related attributes listed above, module objects also have the following writable attributes:

- module.__doc__¶
The module's documentation string, or None if unavailable. See also: __doc__ attributes.

- module.__annotations__¶
A dictionary containing variable annotations collected during module body execution. For best practices on working with __annotations__, see annotationlib. Changed in version 3.14: Annotations are now lazily evaluated. See PEP 649.

- module.__annotate__¶
The annotate function for this module, or None if the module has no annotations. See also: __annotate__ attributes. Added in version 3.14.

3.2.9.3. 
Module dictionaries¶
Module objects also have the following special read-only attribute:

- module.__dict__¶
The module's namespace as a dictionary object. Uniquely among the attributes listed here, __dict__ cannot be accessed as a global variable from within a module; it can only be accessed as an attribute on module objects.
CPython implementation detail: Because of the way CPython clears module dictionaries, the module dictionary will be cleared when the module falls out of scope even if the dictionary still has live references. To avoid this, copy the dictionary or keep the module around while using its dictionary directly.

3.2.10. Custom classes¶
Custom class types are typically created by class definitions (see section Class definitions). A class has a namespace implemented by a dictionary object. Class attribute references are translated to lookups in this dictionary, e.g., C.x is translated to C.__dict__["x"] (although there are a number of hooks which allow for other means of locating attributes). When the attribute name is not found there, the attribute search continues in the base classes. This search of the base classes uses the C3 method resolution order which behaves correctly even in the presence of 'diamond' inheritance structures where there are multiple inheritance paths leading back to a common ancestor. Additional details on the C3 MRO used by Python can be found at The Python 2.3 Method Resolution Order.
When a class attribute reference (for class C, say) would yield a class method object, it is transformed into an instance method object whose __self__ attribute is C. When it would yield a staticmethod object, it is transformed into the object wrapped by the static method object. 
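These retrieval transformations can be seen by comparing what is stored in the class __dict__ with what attribute access returns; an illustrative sketch (class C and its methods are made-up names):

```python
class C:
    @staticmethod
    def s():
        return "static"

    @classmethod
    def cm(cls):
        return cls.__name__

# The class dictionary stores the wrapper objects...
assert isinstance(C.__dict__["s"], staticmethod)
assert isinstance(C.__dict__["cm"], classmethod)

# ...but attribute access returns the transformed objects:
assert C.s() == "static"        # the wrapped function, no further transformation
assert C.cm.__self__ is C       # an instance method bound to the class itself
assert C.cm() == "C"
```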
See section Implementing Descriptors for another way in which attributes retrieved from a class may differ from those actually contained in its __dict__.
Class attribute assignments update the class's dictionary, never the dictionary of a base class.
A class object can be called (see above) to yield a class instance (see below).

3.2.10.1. Special attributes¶

Attribute | Meaning
|---|---|
| type.__name__ | The class's name. |
| type.__qualname__ | The class's qualified name. |
| type.__module__ | The name of the module in which the class was defined. |
| type.__dict__ | A mapping proxy providing a read-only view of the class's namespace. |
| type.__bases__ | A tuple containing the class's bases. |
| type.__base__ | CPython implementation detail: The single base class in the inheritance chain that is responsible for the memory layout of instances. This attribute corresponds to the tp_base slot in the C API. |
| type.__doc__ | The class's documentation string, or None if undefined. |
| type.__annotations__ | A dictionary containing variable annotations collected during class body execution. For best practices on working with __annotations__, see annotationlib. Warning: Accessing the __annotations__ attribute of a class directly may yield incorrect results in the presence of metaclasses; use annotationlib.get_annotations() to retrieve class annotations safely. This attribute does not exist on certain builtin classes. On user-defined classes without __annotations__, it is an empty dictionary. Changed in version 3.14: Annotations are now lazily evaluated. See PEP 649. |
| type.__annotate__ | The annotate function for this class, or None if the class has no annotations. Added in version 3.14. |
| type.__type_params__ | A tuple containing the type parameters of a generic class. Added in version 3.12. |
| type.__static_attributes__ | A tuple containing names of attributes of this class which are assigned through self.X from any function in its body. Added in version 3.13. |
| type.__firstlineno__ | The line number of the first line of the class definition, including decorators. Setting the __module__ attribute removes the __firstlineno__ item from the type's dictionary. Added in version 3.13. |
| type.__mro__ | The tuple of classes that are considered when looking for base classes during method resolution. |

3.2.10.2. Special methods¶
In addition to the special attributes described above, all Python classes also have the following two methods available:

- type.mro()¶
This method can be overridden by a metaclass to customize the method resolution order for its instances. It is called at class instantiation, and its result is stored in __mro__.

- type.__subclasses__()¶
Each class keeps a list of weak references to its immediate subclasses. This method returns a list of all those references still alive. The list is in definition order. 
Example:
>>> class A: pass
>>> class B(A): pass
>>> A.__subclasses__()
[<class 'B'>]

3.2.11. Class instances¶
A class instance is created by calling a class object (see above). A class instance has a namespace implemented as a dictionary which is the first place in which attribute references are searched. When an attribute is not found there, and the instance's class has an attribute by that name, the search continues with the class attributes. If a class attribute is found that is a user-defined function object, it is transformed into an instance method object whose __self__ attribute is the instance. Static method and class method objects are also transformed; see above under "Classes". See section Implementing Descriptors for another way in which attributes of a class retrieved via its instances may differ from the objects actually stored in the class's __dict__. If no class attribute is found, and the object's class has a __getattr__() method, that is called to satisfy the lookup.
Attribute assignments and deletions update the instance's dictionary, never a class's dictionary. If the class has a __setattr__() or __delattr__() method, this is called instead of updating the instance dictionary directly.
Class instances can pretend to be numbers, sequences, or mappings if they have methods with certain special names. See section Special method names.

3.2.11.1. Special attributes¶
- object.__class__¶
The class to which a class instance belongs.

3.2.12. I/O objects (also known as file objects)¶
A file object represents an open file. 
Various shortcuts are available to create file objects: the open() built-in function, and also os.popen(), os.fdopen(), and the makefile() method of socket objects (and perhaps by other functions or methods provided by extension modules).
The objects sys.stdin, sys.stdout and sys.stderr are initialized to file objects corresponding to the interpreter's standard input, output and error streams; they are all open in text mode and therefore follow the interface defined by the io.TextIOBase abstract class.

3.2.13. Internal types¶
A few types used internally by the interpreter are exposed to the user. Their definitions may change with future versions of the interpreter, but they are mentioned here for completeness.

3.2.13.1. Code objects¶
Code objects represent byte-compiled executable Python code, or bytecode. The difference between a code object and a function object is that the function object contains an explicit reference to the function's globals (the module in which it was defined), while a code object contains no context; also the default argument values are stored in the function object, not in the code object (because they represent values calculated at run-time). Unlike function objects, code objects are immutable and contain no references (directly or indirectly) to mutable objects.

3.2.13.1.1. Special read-only attributes¶

| codeobject.co_name | The function name |
| codeobject.co_qualname | The fully qualified function name. Added in version 3.11. 
|\n|\nThe total number of positional parameters (including positional-only parameters and parameters with default values) that the function has |\n|\nThe number of positional-only parameters (including arguments with default values) that the function has |\n|\nThe number of keyword-only parameters (including arguments with default values) that the function has |\n|\nThe number of local variables used by the function (including parameters) |\n|\nA |\n|\nA |\n|\nA Note: references to global and builtin names are not included. |\n|\nA string representing the sequence of bytecode instructions in the function |\n|\nA |\n|\nA |\n|\nThe name of the file from which the code was compiled |\n|\nThe line number of the first line of the function |\n|\nA string encoding the mapping from bytecode offsets to line numbers. For details, see the source code of the interpreter. Deprecated since version 3.12: This attribute of code objects is deprecated, and may be removed in Python 3.15. |\n|\nThe required stack size of the code object |\n|\nAn |\nThe following flag bits are defined for co_flags\n:\nbit 0x04\nis set if\nthe function uses the *arguments\nsyntax to accept an arbitrary number of\npositional arguments; bit 0x08\nis set if the function uses the\n**keywords\nsyntax to accept arbitrary keyword arguments; bit 0x20\nis set\nif the function is a generator. See Code Objects Bit Flags for details\non the semantics of each flags that might be present.\nFuture feature declarations (for example, from __future__ import division\n) also use bits\nin co_flags\nto indicate whether a code object was compiled with a\nparticular feature enabled. See compiler_flag\n.\nOther bits in co_flags\nare reserved for internal use.\nIf a code object represents a function and has a docstring,\nthe CO_HAS_DOCSTRING\nbit is set in co_flags\nand the first item in co_consts\nis\nthe docstring of the function.\n3.2.13.1.2. 
Methods on code objects¶

- codeobject.co_positions()¶
Returns an iterable over the source code positions of each bytecode instruction in the code object.
The iterator returns tuples containing (start_line, end_line, start_column, end_column). The i-th tuple corresponds to the position of the source code that compiled to the i-th code unit. Column information is 0-indexed utf-8 byte offsets on the given source line.
This positional information can be missing. A non-exhaustive list of cases where this may happen:
- Running the interpreter with -X no_debug_ranges.
- Loading a pyc file compiled while using -X no_debug_ranges.
- Position tuples corresponding to artificial instructions.
- Line and column numbers that can't be represented due to implementation specific limitations.
When this occurs, some or all of the tuple elements can be None.
Added in version 3.11.
Note: This feature requires storing column positions in code objects, which may result in a small increase of disk usage of compiled Python files or interpreter memory usage. To avoid storing the extra information and/or deactivate printing the extra traceback information, the -X no_debug_ranges command line flag or the PYTHONNODEBUGRANGES environment variable can be used.

- codeobject.co_lines()¶
Returns an iterator that yields information about successive ranges of bytecodes. Each item yielded is a (start, end, lineno) tuple:
- start (an int) represents the offset (inclusive) of the start of the bytecode range
- end (an int) represents the offset (exclusive) of the end of the bytecode range
- lineno is an int representing the line number of the bytecode range, or None if the bytecodes in the given range have no line number
The items yielded will have the following properties:
- The first range yielded will have a start of 0.
- The (start, end) ranges will be non-decreasing and consecutive. 
That is, for any pair of tuples, the start of the second will be equal to the end of the first.
- No range will be backwards: end >= start for all triples.
- The last tuple yielded will have end equal to the size of the bytecode.
Zero-width ranges, where start == end, are allowed. Zero-width ranges are used for lines that are present in the source code, but have been eliminated by the bytecode compiler.
Added in version 3.10.
See also: PEP 626 - Precise line numbers for debugging and other tools. The PEP that introduced the co_lines() method.

- codeobject.replace(**kwargs)¶
Return a copy of the code object with new values for the specified fields. Code objects are also supported by the generic function copy.replace().
Added in version 3.8.

3.2.13.2. Frame objects¶
Frame objects represent execution frames. They may occur in traceback objects, and are also passed to registered trace functions.

3.2.13.2.1. Special read-only attributes¶

| frame.f_back | Points to the previous stack frame (towards the caller), or None if this is the bottom stack frame |
| frame.f_code | The code object being executed in this frame. Accessing this attribute raises an auditing event object.__getattr__ with arguments obj and "f_code". |
| frame.f_locals | The mapping used by the frame to look up local variables. If the frame refers to an optimized scope, this may return a write-through proxy object. Changed in version 3.13: Return a proxy for optimized scopes. |
| frame.f_globals | The dictionary used by the frame to look up global variables |
| frame.f_builtins | The dictionary used by the frame to look up built-in (intrinsic) names |
| frame.f_lasti | The "precise instruction" of the frame object (this is an index into the bytecode string of the code object) |
| frame.f_generator | The generator or coroutine object that owns this frame, or None if the frame does not belong to a generator or coroutine. Added in version 3.14. |

3.2.13.2.2. Special writable attributes¶

| frame.f_trace | If not None, this is a function called for various events during code execution (this is used by debuggers). |
| frame.f_trace_lines | Set this attribute to False to disable triggering a tracing event for each source line. |
| frame.f_trace_opcodes | Set this attribute to True to allow per-opcode events to be requested. |
| frame.f_lineno | The current line number of the frame – writing to this from within a trace function jumps to the given line (only for the bottom-most frame). 
A debugger can implement a Jump command (aka Set Next Statement) by writing to this attribute. |

3.2.13.2.3. Frame object methods¶
Frame objects support one method:

- frame.clear()¶
This method clears all references to local variables held by the frame. Also, if the frame belonged to a generator, the generator is finalized. This helps break reference cycles involving frame objects (for example when catching an exception and storing its traceback for later use). RuntimeError is raised if the frame is currently executing or suspended.
Added in version 3.4.
Changed in version 3.13: Attempting to clear a suspended frame raises RuntimeError (as has always been the case for executing frames).

3.2.13.3. Traceback objects¶
Traceback objects represent the stack trace of an exception. A traceback object is implicitly created when an exception occurs, and may also be explicitly created by calling types.TracebackType.
Changed in version 3.7: Traceback objects can now be explicitly instantiated from Python code.
For implicitly created tracebacks, when the search for an exception handler unwinds the execution stack, at each unwound level a traceback object is inserted in front of the current traceback. When an exception handler is entered, the stack trace is made available to the program. (See section The try statement.) It is accessible as the third item of the tuple returned by sys.exc_info(), and as the __traceback__ attribute of the caught exception.
When the program contains no suitable handler, the stack trace is written (nicely formatted) to the standard error stream; if the interpreter is interactive, it is also made available to the user as sys.last_traceback.
For explicitly created tracebacks, it is up to the creator of the traceback to determine how the tb_next attributes should be linked to form a full stack trace.
Special read-only attributes:

| traceback.tb_frame | Points to the execution frame of the current level. 
Accessing this attribute raises an auditing event object.__getattr__ with arguments obj and "tb_frame". |
| traceback.tb_lineno | Gives the line number where the exception occurred |
| traceback.tb_lasti | Indicates the "precise instruction". |

The line number and last instruction in the traceback may differ from the line number of its frame object if the exception occurred in a try statement with no matching except clause or with a finally clause.

- traceback.tb_next¶
The special writable attribute tb_next is the next level in the stack trace (towards the frame where the exception occurred), or None if there is no next level.
Changed in version 3.7: This attribute is now writable.

3.2.13.4. Slice objects¶
Slice objects are used to represent slices for __getitem__() methods. They are also created by the built-in slice() function.
Special read-only attributes: start is the lower bound; stop is the upper bound; step is the step value; each is None if omitted. These attributes can have any type.
Slice objects support one method:

- slice.indices(self, length)¶
This method takes a single integer argument length and computes information about the slice that the slice object would describe if applied to a sequence of length items. It returns a tuple of three integers; respectively these are the start and stop indices and the step or stride length of the slice. Missing or out-of-bounds indices are handled in a manner consistent with regular slices.

3.2.13.5. Static method objects¶
Static method objects provide a way of defeating the transformation of function objects to method objects described above. A static method object is a wrapper around any other object, usually a user-defined method object. When a static method object is retrieved from a class or a class instance, the object actually returned is the wrapped object, which is not subject to any further transformation. Static method objects are also callable. Static method objects are created by the built-in staticmethod() constructor.

3.2.13.6. 
Class method objects\u00b6\nA class method object, like a static method object, is a wrapper around another\nobject that alters the way in which that object is retrieved from classes and\nclass instances. The behaviour of class method objects upon such retrieval is\ndescribed above, under \u201cinstance methods\u201d. Class method objects are created\nby the built-in classmethod()\nconstructor.\n3.3. Special method names\u00b6\nA class can implement certain operations that are invoked by special syntax\n(such as arithmetic operations or subscripting and slicing) by defining methods\nwith special names. This is Python\u2019s approach to operator overloading,\nallowing classes to define their own behavior with respect to language\noperators. For instance, if a class defines a method named\n__getitem__()\n,\nand x\nis an instance of this class, then x[i]\nis roughly equivalent\nto type(x).__getitem__(x, i)\n. Except where mentioned, attempts to execute an\noperation raise an exception when no appropriate method is defined (typically\nAttributeError\nor TypeError\n).\nSetting a special method to None\nindicates that the corresponding\noperation is not available. For example, if a class sets\n__iter__()\nto None\n, the class is not iterable, so calling\niter()\non its instances will raise a TypeError\n(without\nfalling back to __getitem__()\n). [2]\nWhen implementing a class that emulates any built-in type, it is important that the emulation only be implemented to the degree that it makes sense for the object being modelled. For example, some sequences may work well with retrieval of individual elements, but extracting a slice may not make sense. (One example of this is the NodeList interface in the W3C\u2019s Document Object Model.)\n3.3.1. 
Basic customization\u00b6\n- object.__new__(cls[, ...])\u00b6\nCalled to create a new instance of class cls.\n__new__()\nis a static method (special-cased so you need not declare it as such) that takes the class of which an instance was requested as its first argument. The remaining arguments are those passed to the object constructor expression (the call to the class). The return value of__new__()\nshould be the new object instance (usually an instance of cls).Typical implementations create a new instance of the class by invoking the superclass\u2019s\n__new__()\nmethod usingsuper().__new__(cls[, ...])\nwith appropriate arguments and then modifying the newly created instance as necessary before returning it.If\n__new__()\nis invoked during object construction and it returns an instance of cls, then the new instance\u2019s__init__()\nmethod will be invoked like__init__(self[, ...])\n, where self is the new instance and the remaining arguments are the same as were passed to the object constructor.If\n__new__()\ndoes not return an instance of cls, then the new instance\u2019s__init__()\nmethod will not be invoked.__new__()\nis intended mainly to allow subclasses of immutable types (like int, str, or tuple) to customize instance creation. It is also commonly overridden in custom metaclasses in order to customize class creation.\n- object.__init__(self[, ...])\u00b6\nCalled after the instance has been created (by\n__new__()\n), but before it is returned to the caller. The arguments are those passed to the class constructor expression. 
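A minimal sketch of the __new__()/__init__() interplay for immutable types described above (the Inches class and the unit conversion are illustrative, not from the reference):

```python
class Inches(float):
    """Hypothetical float subclass: because float instances are
    immutable, the stored value must be chosen in __new__."""
    def __new__(cls, value):
        # super().__new__ creates the immutable instance with the
        # converted value; __init__ could not change it afterwards.
        return super().__new__(cls, value * 25.4)  # store as millimetres

x = Inches(2)
print(isinstance(x, float), float(x))  # True 50.8
```

Since __new__() here returns an instance of cls, a defined __init__() would still be invoked afterwards, but it could only attach extra attributes, not alter the float value.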
If a base class has an__init__()\nmethod, the derived class\u2019s__init__()\nmethod, if any, must explicitly call it to ensure proper initialization of the base class part of the instance; for example:super().__init__([args...])\n.Because\n__new__()\nand__init__()\nwork together in constructing objects (__new__()\nto create it, and__init__()\nto customize it), no non-None\nvalue may be returned by__init__()\n; doing so will cause aTypeError\nto be raised at runtime.\n- object.__del__(self)\u00b6\nCalled when the instance is about to be destroyed. This is also called a finalizer or (improperly) a destructor. If a base class has a\n__del__()\nmethod, the derived class\u2019s__del__()\nmethod, if any, must explicitly call it to ensure proper deletion of the base class part of the instance.It is possible (though not recommended!) for the\n__del__()\nmethod to postpone destruction of the instance by creating a new reference to it. This is called object resurrection. It is implementation-dependent whether__del__()\nis called a second time when a resurrected object is about to be destroyed; the current CPython implementation only calls it once.It is not guaranteed that\n__del__()\nmethods are called for objects that still exist when the interpreter exits.weakref.finalize\nprovides a straightforward way to register a cleanup function to be called when an object is garbage collected.Note\ndel x\ndoesn\u2019t directly callx.__del__()\n\u2014 the former decrements the reference count forx\nby one, and the latter is only called whenx\n\u2019s reference count reaches zero.CPython implementation detail: It is possible for a reference cycle to prevent the reference count of an object from going to zero. In this case, the cycle will be later detected and deleted by the cyclic garbage collector. A common cause of reference cycles is when an exception has been caught in a local variable. 
The frame\u2019s locals then reference the exception, which references its own traceback, which references the locals of all frames caught in the traceback.\nSee also\nDocumentation for the\ngc\nmodule.Warning\nDue to the precarious circumstances under which\n__del__()\nmethods are invoked, exceptions that occur during their execution are ignored, and a warning is printed tosys.stderr\ninstead. In particular:__del__()\ncan be invoked when arbitrary code is being executed, including from any arbitrary thread. If__del__()\nneeds to take a lock or invoke any other blocking resource, it may deadlock as the resource may already be taken by the code that gets interrupted to execute__del__()\n.__del__()\ncan be executed during interpreter shutdown. As a consequence, the global variables it needs to access (including other modules) may already have been deleted or set toNone\n. Python guarantees that globals whose name begins with a single underscore are deleted from their module before other globals are deleted; if no other references to such globals exist, this may help in assuring that imported modules are still available at the time when the__del__()\nmethod is called.\n- object.__repr__(self)\u00b6\nCalled by the\nrepr()\nbuilt-in function to compute the \u201cofficial\u201d string representation of an object. If at all possible, this should look like a valid Python expression that could be used to recreate an object with the same value (given an appropriate environment). If this is not possible, a string of the form<...some useful description...>\nshould be returned. The return value must be a string object. If a class defines__repr__()\nbut not__str__()\n, then__repr__()\nis also used when an \u201cinformal\u201d string representation of instances of that class is required.This is typically used for debugging, so it is important that the representation is information-rich and unambiguous. 
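The __repr__()/__str__() division of labour can be sketched as follows (the Point class is illustrative):

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __repr__(self):
        # Unambiguous: looks like a Python expression that could
        # recreate an object with the same value.
        return f"Point({self.x!r}, {self.y!r})"

    def __str__(self):
        # Informal: concise and readable, for display purposes.
        return f"({self.x}, {self.y})"

p = Point(1, 2)
print(repr(p))  # Point(1, 2)
print(str(p))   # (1, 2)
```

If __str__() were omitted here, str(p) and print(p) would fall back to __repr__().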
A default implementation is provided by the\nobject\nclass itself.\n- object.__str__(self)\u00b6\nCalled by\nstr(object)\n, the default__format__()\nimplementation, and the built-in functionprint()\n, to compute the \u201cinformal\u201d or nicely printable string representation of an object. The return value must be a str object.This method differs from\nobject.__repr__()\nin that there is no expectation that__str__()\nreturn a valid Python expression: a more convenient or concise representation can be used.The default implementation defined by the built-in type\nobject\ncallsobject.__repr__()\n.\n- object.__bytes__(self)\u00b6\nCalled by bytes to compute a byte-string representation of an object. This should return a\nbytes\nobject. Theobject\nclass itself does not provide this method.\n- object.__format__(self, format_spec)\u00b6\nCalled by the\nformat()\nbuilt-in function, and by extension, evaluation of formatted string literals and thestr.format()\nmethod, to produce a \u201cformatted\u201d string representation of an object. The format_spec argument is a string that contains a description of the formatting options desired. The interpretation of the format_spec argument is up to the type implementing__format__()\n, however most classes will either delegate formatting to one of the built-in types, or use a similar formatting option syntax.See Format Specification Mini-Language for a description of the standard formatting syntax.\nThe return value must be a string object.\nThe default implementation by the\nobject\nclass should be given an empty format_spec string. 
It delegates to __str__().
Changed in version 3.4: The __format__ method of object itself raises a TypeError if passed any non-empty string.
Changed in version 3.7: object.__format__(x, '') is now equivalent to str(x) rather than format(str(x), '').
- object.__lt__(self, other)¶
- object.__le__(self, other)¶
- object.__eq__(self, other)¶
- object.__ne__(self, other)¶
- object.__gt__(self, other)¶
- object.__ge__(self, other)¶
These are the so-called "rich comparison" methods. The correspondence between operator symbols and method names is as follows: x<y calls x.__lt__(y), x<=y calls x.__le__(y), x==y calls x.__eq__(y), x!=y calls x.__ne__(y), x>y calls x.__gt__(y), and x>=y calls x.__ge__(y).
A rich comparison method may return the singleton NotImplemented if it does not implement the operation for a given pair of arguments. By convention, False and True are returned for a successful comparison. However, these methods can return any value, so if the comparison operator is used in a Boolean context (e.g., in the condition of an if statement), Python will call bool() on the value to determine if the result is true or false.
By default, object implements __eq__() by using is, returning NotImplemented in the case of a false comparison: True if x is y else NotImplemented. For __ne__(), by default it delegates to __eq__() and inverts the result unless it is NotImplemented. There are no other implied relationships among the comparison operators or default implementations; for example, the truth of (x<y or x==y) does not imply x<=y.
- object.__hash__(self)¶
Called by built-in function hash() and for operations on members of hashed collections including set, frozenset, and dict. The __hash__() method should return an integer. The only required property is that objects which compare equal have the same hash value.
If a class that does not override __eq__() wishes to suppress hash support, it should include __hash__ = None in the class definition. A class which defines its own __hash__() that explicitly raises a TypeError would be incorrectly identified as hashable by an isinstance(obj, collections.abc.Hashable) call.
Note
By default, the __hash__() values of str and bytes objects are "salted" with an unpredictable random value. 
Although they remain constant within an individual Python process, they are not predictable between repeated invocations of Python.
This is intended to provide protection against a denial-of-service caused by carefully chosen inputs that exploit the worst-case performance of a dict insertion, O(n²) complexity. See http://ocert.org/advisories/ocert-2011-003.html for details.
Changing hash values affects the iteration order of sets. Python has never made guarantees about this ordering (and it typically varies between 32-bit and 64-bit builds).
See also PYTHONHASHSEED.
Changed in version 3.3: Hash randomization is enabled by default.
- object.__bool__(self)¶
Called to implement truth value testing and the built-in operation bool(); should return False or True. When this method is not defined, __len__() is called, if it is defined, and the object is considered true if its result is nonzero. If a class defines neither __len__() nor __bool__() (which is true of the object class itself), all its instances are considered true.
3.3.2. Customizing attribute access¶
The following methods can be defined to customize the meaning of attribute access (use of, assignment to, or deletion of x.name) for class instances.
- object.__getattr__(self, name)¶
Called when the default attribute access fails with an AttributeError (either __getattribute__() raises an AttributeError because name is not an instance attribute or an attribute in the class tree for self; or __get__() of a name property raises AttributeError). This method should either return the (computed) attribute value or raise an AttributeError exception. The object class itself does not provide this method.
Note that if the attribute is found through the normal mechanism, __getattr__() is not called. (This is an intentional asymmetry between __getattr__() and __setattr__().) 
This is done both for efficiency reasons and because otherwise__getattr__()\nwould have no way to access other attributes of the instance. Note that at least for instance variables, you can take total control by not inserting any values in the instance attribute dictionary (but instead inserting them in another object). See the__getattribute__()\nmethod below for a way to actually get total control over attribute access.\n- object.__getattribute__(self, name)\u00b6\nCalled unconditionally to implement attribute accesses for instances of the class. If the class also defines\n__getattr__()\n, the latter will not be called unless__getattribute__()\neither calls it explicitly or raises anAttributeError\n. This method should return the (computed) attribute value or raise anAttributeError\nexception. In order to avoid infinite recursion in this method, its implementation should always call the base class method with the same name to access any attributes it needs, for example,object.__getattribute__(self, name)\n.Note\nThis method may still be bypassed when looking up special methods as the result of implicit invocation via language syntax or built-in functions. See Special method lookup.\nFor certain sensitive attribute accesses, raises an auditing event\nobject.__getattr__\nwith argumentsobj\nandname\n.\n- object.__setattr__(self, name, value)\u00b6\nCalled when an attribute assignment is attempted. This is called instead of the normal mechanism (i.e. store the value in the instance dictionary). name is the attribute name, value is the value to be assigned to it.\nIf\n__setattr__()\nwants to assign to an instance attribute, it should call the base class method with the same name, for example,object.__setattr__(self, name, value)\n.For certain sensitive attribute assignments, raises an auditing event\nobject.__setattr__\nwith argumentsobj\n,name\n,value\n.\n- object.__delattr__(self, name)\u00b6\nLike\n__setattr__()\nbut for attribute deletion instead of assignment. 
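The __setattr__()/__delattr__() delegation pattern described above can be sketched as follows (the Audited class is illustrative; the essential point is the call to the base class method with the same name):

```python
class Audited:
    def __setattr__(self, name, value):
        print(f"setting {name}")
        # Delegate to the normal mechanism (store in the instance dict).
        super().__setattr__(name, value)

    def __delattr__(self, name):
        print(f"deleting {name}")
        super().__delattr__(name)

a = Audited()
a.x = 1   # prints "setting x"
del a.x   # prints "deleting x"
```

Assigning directly to self.__dict__ inside __setattr__() would also work, but delegating to object.__setattr__() stays correct when slots or descriptors are involved.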
This should only be implemented ifdel obj.name\nis meaningful for the object.For certain sensitive attribute deletions, raises an auditing event\nobject.__delattr__\nwith argumentsobj\nandname\n.\n- object.__dir__(self)\u00b6\nCalled when\ndir()\nis called on the object. An iterable must be returned.dir()\nconverts the returned iterable to a list and sorts it.\n3.3.2.1. Customizing module attribute access\u00b6\nSpecial names __getattr__\nand __dir__\ncan be also used to customize\naccess to module attributes. The __getattr__\nfunction at the module level\nshould accept one argument which is the name of an attribute and return the\ncomputed value or raise an AttributeError\n. If an attribute is\nnot found on a module object through the normal lookup, i.e.\nobject.__getattribute__()\n, then __getattr__\nis searched in\nthe module __dict__\nbefore raising an AttributeError\n. If found,\nit is called with the attribute name and the result is returned.\nThe __dir__\nfunction should accept no arguments, and return an iterable of\nstrings that represents the names accessible on module. If present, this\nfunction overrides the standard dir()\nsearch on a module.\n- module.__class__\u00b6\nFor a more fine grained customization of the module behavior (setting\nattributes, properties, etc.), one can set the __class__\nattribute of\na module object to a subclass of types.ModuleType\n. 
For example:
import sys
from types import ModuleType

class VerboseModule(ModuleType):
    def __repr__(self):
        return f'Verbose {self.__name__}'

    def __setattr__(self, attr, value):
        print(f'Setting {attr}...')
        super().__setattr__(attr, value)

sys.modules[__name__].__class__ = VerboseModule
Note
Defining module __getattr__ and setting module __class__ only affect lookups made using the attribute access syntax – directly accessing the module globals (whether by code within the module, or via a reference to the module's globals dictionary) is unaffected.
Changed in version 3.5: __class__ module attribute is now writable.
Added in version 3.7: __getattr__ and __dir__ module attributes.
See also
- PEP 562 - Module __getattr__ and __dir__
Describes the __getattr__ and __dir__ functions on modules.
3.3.2.2. Implementing Descriptors¶
The following methods only apply when an instance of the class containing the method (a so-called descriptor class) appears in an owner class (the descriptor must be in either the owner's class dictionary or in the class dictionary for one of its parents). In the examples below, "the attribute" refers to the attribute whose name is the key of the property in the owner class' __dict__. The object class itself does not implement any of these protocols.
- object.__get__(self, instance, owner=None)¶
Called to get the attribute of the owner class (class attribute access) or of an instance of that class (instance attribute access). The optional owner argument is the owner class, while instance is the instance that the attribute was accessed through, or None when the attribute is accessed through the owner.
This method should return the computed attribute value or raise an AttributeError exception.
PEP 252 specifies that __get__() is callable with one or two arguments. 
Python\u2019s own built-in descriptors support this specification; however, it is likely that some third-party tools have descriptors that require both arguments. Python\u2019s own__getattribute__()\nimplementation always passes in both arguments whether they are required or not.\n- object.__set__(self, instance, value)\u00b6\nCalled to set the attribute on an instance instance of the owner class to a new value, value.\nNote, adding\n__set__()\nor__delete__()\nchanges the kind of descriptor to a \u201cdata descriptor\u201d. See Invoking Descriptors for more details.\n- object.__delete__(self, instance)\u00b6\nCalled to delete the attribute on an instance instance of the owner class.\nInstances of descriptors may also have the __objclass__\nattribute\npresent:\n- object.__objclass__\u00b6\nThe attribute\n__objclass__\nis interpreted by theinspect\nmodule as specifying the class where this object was defined (setting this appropriately can assist in runtime introspection of dynamic class attributes). For callables, it may indicate that an instance of the given type (or a subclass) is expected or required as the first positional argument (for example, CPython sets this attribute for unbound methods that are implemented in C).\n3.3.2.3. Invoking Descriptors\u00b6\nIn general, a descriptor is an object attribute with \u201cbinding behavior\u201d, one\nwhose attribute access has been overridden by methods in the descriptor\nprotocol: __get__()\n, __set__()\n, and\n__delete__()\n. If any of\nthose methods are defined for an object, it is said to be a descriptor.\nThe default behavior for attribute access is to get, set, or delete the\nattribute from an object\u2019s dictionary. 
For instance, a.x\nhas a lookup chain\nstarting with a.__dict__['x']\n, then type(a).__dict__['x']\n, and\ncontinuing through the base classes of type(a)\nexcluding metaclasses.\nHowever, if the looked-up value is an object defining one of the descriptor methods, then Python may override the default behavior and invoke the descriptor method instead. Where this occurs in the precedence chain depends on which descriptor methods were defined and how they were called.\nThe starting point for descriptor invocation is a binding, a.x\n. How the\narguments are assembled depends on a\n:\n- Direct Call\nThe simplest and least common call is when user code directly invokes a descriptor method:\nx.__get__(a)\n.- Instance Binding\nIf binding to an object instance,\na.x\nis transformed into the call:type(a).__dict__['x'].__get__(a, type(a))\n.- Class Binding\nIf binding to a class,\nA.x\nis transformed into the call:A.__dict__['x'].__get__(None, A)\n.- Super Binding\nA dotted lookup such as\nsuper(A, a).x\nsearchesa.__class__.__mro__\nfor a base classB\nfollowingA\nand then returnsB.__dict__['x'].__get__(a, A)\n. If not a descriptor,x\nis returned unchanged.\nFor instance bindings, the precedence of descriptor invocation depends on\nwhich descriptor methods are defined. A descriptor can define any combination\nof __get__()\n, __set__()\nand\n__delete__()\n. If it does not\ndefine __get__()\n, then accessing the attribute will return the descriptor\nobject itself unless there is a value in the object\u2019s instance dictionary. If\nthe descriptor defines __set__()\nand/or __delete__()\n, it is a data\ndescriptor; if it defines neither, it is a non-data descriptor. Normally, data\ndescriptors define both __get__()\nand __set__()\n, while non-data\ndescriptors have just the __get__()\nmethod. Data descriptors with\n__get__()\nand __set__()\n(and/or __delete__()\n) defined\nalways override a redefinition in an\ninstance dictionary. 
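The data-descriptor precedence just described can be sketched as follows (the Upper descriptor and Config class are illustrative):

```python
class Upper:
    """Illustrative data descriptor: defines both __get__ and __set__."""
    def __set_name__(self, owner, name):
        self.name = "_" + name  # private storage name on the instance

    def __get__(self, instance, owner=None):
        if instance is None:
            return self  # accessed on the class: return the descriptor
        return getattr(instance, self.name)

    def __set__(self, instance, value):
        setattr(instance, self.name, value.upper())

class Config:
    env = Upper()

c = Config()
c.env = "prod"
print(c.env)                  # PROD
c.__dict__["env"] = "shadow"  # ignored on lookup: the data descriptor wins
print(c.env)                  # PROD
```

Because Upper defines __set__(), the instance dictionary entry "env" never shadows it; a descriptor with only __get__() (non-data) would be overridden by that entry.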
In contrast, non-data descriptors can be overridden by\ninstances.\nPython methods (including those decorated with\n@staticmethod\nand @classmethod\n) are\nimplemented as non-data descriptors. Accordingly, instances can redefine and\noverride methods. This allows individual instances to acquire behaviors that\ndiffer from other instances of the same class.\nThe property()\nfunction is implemented as a data descriptor. Accordingly,\ninstances cannot override the behavior of a property.\n3.3.2.4. __slots__\u00b6\n__slots__ allow us to explicitly declare data members (like\nproperties) and deny the creation of __dict__\nand __weakref__\n(unless explicitly declared in __slots__ or available in a parent.)\nThe space saved over using __dict__\ncan be significant.\nAttribute lookup speed can be significantly improved as well.\n- object.__slots__\u00b6\nThis class variable can be assigned a string, iterable, or sequence of strings with variable names used by instances. __slots__ reserves space for the declared variables and prevents the automatic creation of\n__dict__\nand __weakref__ for each instance.\nNotes on using __slots__:\nWhen inheriting from a class without __slots__, the\n__dict__\nand __weakref__ attribute of the instances will always be accessible.Without a\n__dict__\nvariable, instances cannot be assigned new variables not listed in the __slots__ definition. Attempts to assign to an unlisted variable name raisesAttributeError\n. If dynamic assignment of new variables is desired, then add'__dict__'\nto the sequence of strings in the __slots__ declaration.Without a __weakref__ variable for each instance, classes defining __slots__ do not support\nweak references\nto its instances. If weak reference support is needed, then add'__weakref__'\nto the sequence of strings in the __slots__ declaration.__slots__ are implemented at the class level by creating descriptors for each variable name. 
As a result, class attributes cannot be used to set default values for instance variables defined by __slots__; otherwise, the class attribute would overwrite the descriptor assignment.\nThe action of a __slots__ declaration is not limited to the class where it is defined. __slots__ declared in parents are available in child classes. However, instances of a child subclass will get a\n__dict__\nand __weakref__ unless the subclass also defines __slots__ (which should only contain names of any additional slots).If a class defines a slot also defined in a base class, the instance variable defined by the base class slot is inaccessible (except by retrieving its descriptor directly from the base class). This renders the meaning of the program undefined. In the future, a check may be added to prevent this.\nTypeError\nwill be raised if nonempty __slots__ are defined for a class derived from a\"variable-length\" built-in type\nsuch asint\n,bytes\n, andtuple\n.Any non-string iterable may be assigned to __slots__.\nIf a\ndictionary\nis used to assign __slots__, the dictionary keys will be used as the slot names. The values of the dictionary can be used to provide per-attribute docstrings that will be recognised byinspect.getdoc()\nand displayed in the output ofhelp()\n.__class__\nassignment works only if both classes have the same __slots__.Multiple inheritance with multiple slotted parent classes can be used, but only one parent is allowed to have attributes created by slots (the other bases must have empty slot layouts) - violations raise\nTypeError\n.If an iterator is used for __slots__ then a descriptor is created for each of the iterator\u2019s values. However, the __slots__ attribute will be an empty iterator.\n3.3.3. Customizing class creation\u00b6\nWhenever a class inherits from another class, __init_subclass__()\nis\ncalled on the parent class. This way, it is possible to write classes which\nchange the behavior of subclasses. 
This is closely related to class decorators, but where class decorators only affect the specific class they're applied to, __init_subclass__ solely applies to future subclasses of the class defining the method.
- classmethod object.__init_subclass__(cls)¶
This method is called whenever the containing class is subclassed. cls is then the new subclass. If defined as a normal instance method, this method is implicitly converted to a class method.
Keyword arguments which are given to a new class are passed to the parent class's __init_subclass__. For compatibility with other classes using __init_subclass__, one should take out the needed keyword arguments and pass the others over to the base class, as in:
class Philosopher:
    def __init_subclass__(cls, /, default_name, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.default_name = default_name

class AustralianPhilosopher(Philosopher, default_name="Bruce"):
    pass
The default implementation object.__init_subclass__ does nothing, but raises an error if it is called with any arguments.
Note
The metaclass hint metaclass is consumed by the rest of the type machinery, and is never passed to __init_subclass__ implementations. The actual metaclass (rather than the explicit hint) can be accessed as type(cls).
Added in version 3.6.
When a class is created, type.__new__() scans the class variables and makes callbacks to those with a __set_name__() hook.
- object.__set_name__(self, owner, name)¶
Automatically called at the time the owning class owner is created. The object has been assigned to name in that class:
class A:
    x = C()  # Automatically calls: x.__set_name__(A, 'x')
If the class variable is assigned after the class is created, __set_name__() will not be called automatically. 
If needed, __set_name__() can be called directly:
class A:
    pass

c = C()
A.x = c                 # The hook is not called
c.__set_name__(A, 'x')  # Manually invoke the hook
See Creating the class object for more details.
Added in version 3.6.
3.3.3.1. Metaclasses¶
By default, classes are constructed using type(). The class body is executed in a new namespace and the class name is bound locally to the result of type(name, bases, namespace).
The class creation process can be customized by passing the metaclass keyword argument in the class definition line, or by inheriting from an existing class that included such an argument. In the following example, both MyClass and MySubclass are instances of Meta:
class Meta(type):
    pass

class MyClass(metaclass=Meta):
    pass

class MySubclass(MyClass):
    pass
Any other keyword arguments that are specified in the class definition are passed through to all metaclass operations described below.
When a class definition is executed, the following steps occur:
MRO entries are resolved;
the appropriate metaclass is determined;
the class namespace is prepared;
the class body is executed;
the class object is created.
3.3.3.2. Resolving MRO entries¶
- object.__mro_entries__(self, bases)¶
If a base that appears in a class definition is not an instance of type, then an __mro_entries__() method is searched on the base. If an __mro_entries__() method is found, the base is substituted with the result of a call to __mro_entries__() when creating the class. The method is called with the original bases tuple passed to the bases parameter, and must return a tuple of classes that will be used instead of the base. 
The returned tuple may be empty: in these cases, the original base is ignored.\nSee also\ntypes.resolve_bases()\nDynamically resolve bases that are not instances of\ntype\n.types.get_original_bases()\nRetrieve a class\u2019s \u201coriginal bases\u201d prior to modifications by\n__mro_entries__()\n.- PEP 560\nCore support for typing module and generic types.\n3.3.3.3. Determining the appropriate metaclass\u00b6\nThe appropriate metaclass for a class definition is determined as follows:\nif no bases and no explicit metaclass are given, then\ntype()\nis used;if an explicit metaclass is given and it is not an instance of\ntype()\n, then it is used directly as the metaclass;if an instance of\ntype()\nis given as the explicit metaclass, or bases are defined, then the most derived metaclass is used.\nThe most derived metaclass is selected from the explicitly specified\nmetaclass (if any) and the metaclasses (i.e. type(cls)\n) of all specified\nbase classes. The most derived metaclass is one which is a subtype of all\nof these candidate metaclasses. If none of the candidate metaclasses meets\nthat criterion, then the class definition will fail with TypeError\n.\n3.3.3.4. Preparing the class namespace\u00b6\nOnce the appropriate metaclass has been identified, then the class namespace\nis prepared. If the metaclass has a __prepare__\nattribute, it is called\nas namespace = metaclass.__prepare__(name, bases, **kwds)\n(where the\nadditional keyword arguments, if any, come from the class definition). The\n__prepare__\nmethod should be implemented as a\nclassmethod\n. The\nnamespace returned by __prepare__\nis passed in to __new__\n, but when\nthe final class object is created the namespace is copied into a new dict\n.\nIf the metaclass has no __prepare__\nattribute, then the class namespace\nis initialised as an empty ordered mapping.\nSee also\n- PEP 3115 - Metaclasses in Python 3000\nIntroduced the\n__prepare__\nnamespace hook\n3.3.3.5. 
Executing the class body\u00b6\nThe class body is executed (approximately) as\nexec(body, globals(), namespace)\n. The key difference from a normal\ncall to exec()\nis that lexical scoping allows the class body (including\nany methods) to reference names from the current and outer scopes when the\nclass definition occurs inside a function.\nHowever, even when the class definition occurs inside the function, methods\ndefined inside the class still cannot see names defined at the class scope.\nClass variables must be accessed through the first parameter of instance or\nclass methods, or through the implicit lexically scoped __class__\nreference\ndescribed in the next section.\n3.3.3.6. Creating the class object\u00b6\nOnce the class namespace has been populated by executing the class body,\nthe class object is created by calling\nmetaclass(name, bases, namespace, **kwds)\n(the additional keywords\npassed here are the same as those passed to __prepare__\n).\nThis class object is the one that will be referenced by the zero-argument\nform of super()\n. __class__\nis an implicit closure reference\ncreated by the compiler if any methods in a class body refer to either\n__class__\nor super\n. This allows the zero argument form of\nsuper()\nto correctly identify the class being defined based on\nlexical scoping, while the class or instance that was used to make the\ncurrent call is identified based on the first argument passed to the method.\nCPython implementation detail: In CPython 3.6 and later, the __class__\ncell is passed to the metaclass\nas a __classcell__\nentry in the class namespace. 
If present, this must\nbe propagated up to the type.__new__\ncall in order for the class to be\ninitialised correctly.\nFailing to do so will result in a RuntimeError\nin Python 3.8.\nWhen using the default metaclass type\n, or any metaclass that ultimately\ncalls type.__new__\n, the following additional customization steps are\ninvoked after creating the class object:\nThe\ntype.__new__\nmethod collects all of the attributes in the class namespace that define a__set_name__()\nmethod;Those\n__set_name__\nmethods are called with the class being defined and the assigned name of that particular attribute;The\n__init_subclass__()\nhook is called on the immediate parent of the new class in its method resolution order.\nAfter the class object is created, it is passed to the class decorators included in the class definition (if any) and the resulting object is bound in the local namespace as the defined class.\nWhen a new class is created by type.__new__\n, the object provided as the\nnamespace parameter is copied to a new ordered mapping and the original\nobject is discarded. The new copy is wrapped in a read-only proxy, which\nbecomes the __dict__\nattribute of the class object.\nSee also\n- PEP 3135 - New super\nDescribes the implicit\n__class__\nclosure reference\n3.3.3.7. Uses for metaclasses\u00b6\nThe potential uses for metaclasses are boundless. Some ideas that have been explored include enum, logging, interface checking, automatic delegation, automatic property creation, proxies, frameworks, and automatic resource locking/synchronization.\n3.3.4. 
Customizing instance and subclass checks\u00b6\nThe following methods are used to override the default behavior of the\nisinstance()\nand issubclass()\nbuilt-in functions.\nIn particular, the metaclass abc.ABCMeta\nimplements these methods in\norder to allow the addition of Abstract Base Classes (ABCs) as \u201cvirtual base\nclasses\u201d to any class or type (including built-in types), including other\nABCs.\n- type.__instancecheck__(self, instance)\u00b6\nReturn true if instance should be considered a (direct or indirect) instance of class. If defined, called to implement\nisinstance(instance, class)\n.\n- type.__subclasscheck__(self, subclass)\u00b6\nReturn true if subclass should be considered a (direct or indirect) subclass of class. If defined, called to implement\nissubclass(subclass, class)\n.\nNote that these methods are looked up on the type (metaclass) of a class. They cannot be defined as class methods in the actual class. This is consistent with the lookup of special methods that are called on instances, only in this case the instance is itself a class.\nSee also\n- PEP 3119 - Introducing Abstract Base Classes\nIncludes the specification for customizing\nisinstance()\nandissubclass()\nbehavior through__instancecheck__()\nand__subclasscheck__()\n, with motivation for this functionality in the context of adding Abstract Base Classes (see theabc\nmodule) to the language.\n3.3.5. 
Emulating generic types\u00b6\nWhen using type annotations, it is often useful to\nparameterize a generic type using Python\u2019s square-brackets notation.\nFor example, the annotation list[int]\nmight be used to signify a\nlist\nin which all the elements are of type int\n.\nSee also\n- PEP 484 - Type Hints\nIntroducing Python\u2019s framework for type annotations\n- Generic Alias Types\nDocumentation for objects representing parameterized generic classes\n- Generics, user-defined generics and\ntyping.Generic\nDocumentation on how to implement generic classes that can be parameterized at runtime and understood by static type-checkers.\nA class can generally only be parameterized if it defines the special\nclass method __class_getitem__()\n.\n- classmethod object.__class_getitem__(cls, key)\u00b6\nReturn an object representing the specialization of a generic class by type arguments found in key.\nWhen defined on a class,\n__class_getitem__()\nis automatically a class method. As such, there is no need for it to be decorated with@classmethod\nwhen it is defined.\n3.3.5.1. The purpose of __class_getitem__\u00b6\nThe purpose of __class_getitem__()\nis to allow runtime\nparameterization of standard-library generic classes in order to more easily\napply type hints to these classes.\nTo implement custom generic classes that can be parameterized at runtime and\nunderstood by static type-checkers, users should either inherit from a standard\nlibrary class that already implements __class_getitem__()\n, or\ninherit from typing.Generic\n, which has its own implementation of\n__class_getitem__()\n.\nCustom implementations of __class_getitem__()\non classes defined\noutside of the standard library may not be understood by third-party\ntype-checkers such as mypy. Using __class_getitem__()\non any class for\npurposes other than type hinting is discouraged.\n3.3.5.2. 
__class_getitem__ versus __getitem__\u00b6\nUsually, the subscription of an object using square\nbrackets will call the __getitem__()\ninstance method defined on\nthe object\u2019s class. However, if the object being subscribed is itself a class,\nthe class method __class_getitem__()\nmay be called instead.\n__class_getitem__()\nshould return a GenericAlias\nobject if it is properly defined.\nPresented with the expression obj[x]\n, the Python interpreter\nfollows something like the following process to decide whether\n__getitem__()\nor __class_getitem__()\nshould be\ncalled:\nfrom inspect import isclass\n\ndef subscribe(obj, x):\n    \"\"\"Return the result of the expression 'obj[x]'\"\"\"\n    class_of_obj = type(obj)\n    # If the class of obj defines __getitem__,\n    # call class_of_obj.__getitem__(obj, x)\n    if hasattr(class_of_obj, '__getitem__'):\n        return class_of_obj.__getitem__(obj, x)\n    # Else, if obj is a class and defines __class_getitem__,\n    # call obj.__class_getitem__(x)\n    elif isclass(obj) and hasattr(obj, '__class_getitem__'):\n        return obj.__class_getitem__(x)\n    # Else, raise an exception\n    else:\n        raise TypeError(\n            f\"'{class_of_obj.__name__}' object is not subscriptable\"\n        )\nIn Python, all classes are themselves instances of other classes. The class of\na class is known as that class\u2019s metaclass, and most classes have the\ntype\nclass as their metaclass. 
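As a concrete sketch of the class-method branch of that process (the Pair class is invented for illustration), a user-defined class can make itself subscriptable by returning a types.GenericAlias:

```python
import types

class Pair:
    """Illustrative two-item container."""
    def __init__(self, first, second):
        self.first, self.second = first, second

    def __class_getitem__(cls, item):
        # Mirror what list, dict, etc. do: return a GenericAlias
        # recording the origin class and the type arguments.
        return types.GenericAlias(cls, item)
```

Pair[int, str] then evaluates to a GenericAlias whose __origin__ is Pair and whose __args__ are (int, str).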
type\ndoes not define\n__getitem__()\n, meaning that expressions such as list[int]\n,\ndict[str, float]\nand tuple[str, bytes]\nall result in\n__class_getitem__()\nbeing called:\n>>> # list has class \"type\" as its metaclass, like most classes:\n>>> type(list)\n<class 'type'>\n>>> type(dict) == type(list) == type(tuple) == type(str) == type(bytes)\nTrue\n>>> # \"list[int]\" calls \"list.__class_getitem__(int)\"\n>>> list[int]\nlist[int]\n>>> # list.__class_getitem__ returns a GenericAlias object:\n>>> type(list[int])\n<class 'types.GenericAlias'>\nHowever, if a class has a custom metaclass that defines\n__getitem__()\n, subscribing the class may result in different\nbehaviour. An example of this can be found in the enum\nmodule:\n>>> from enum import Enum\n>>> class Menu(Enum):\n...     \"\"\"A breakfast menu\"\"\"\n...     SPAM = 'spam'\n...     BACON = 'bacon'\n...\n>>> # Enum classes have a custom metaclass:\n>>> type(Menu)\n<class 'enum.EnumMeta'>\n>>> # EnumMeta defines __getitem__,\n>>> # so __class_getitem__ is not called,\n>>> # and the result is not a GenericAlias object:\n>>> Menu['SPAM']\n<Menu.SPAM: 'spam'>\n>>> type(Menu['SPAM'])\n<enum 'Menu'>\nSee also\n- PEP 560 - Core Support for typing module and generic types\nIntroducing\n__class_getitem__()\n, and outlining when a subscription results in __class_getitem__()\nbeing called instead of __getitem__()\n3.3.6. Emulating callable objects\u00b6\n3.3.7. Emulating container types\u00b6\nThe following methods can be defined to implement container objects. None of them\nare provided by the object\nclass itself. Containers usually are\nsequences (such as lists\nor\ntuples\n) or mappings (like\ndictionaries),\nbut can represent other containers as well. The first set of methods is used\neither to emulate a sequence or to emulate a mapping; the difference is that for\na sequence, the allowable keys should be the integers k for which 0 <= k <\nN\nwhere N is the length of the sequence, or slice\nobjects, which define a\nrange of items. 
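For example, a minimal sequence type following these rules might look like this (Deck is an invented name; this is a sketch, not an exhaustive implementation):

```python
class Deck:
    """Illustrative sequence: integer and slice subscripts,
    a length, and a membership test."""
    def __init__(self, items):
        self._items = list(items)

    def __len__(self):
        return len(self._items)

    def __getitem__(self, subscript):
        if isinstance(subscript, slice):
            # Slices return a new Deck covering the requested range.
            return type(self)(self._items[subscript])
        # Integer indices delegate to the underlying list, which
        # raises IndexError for out-of-range values -- this is what
        # lets the old iteration protocol detect the end.
        return self._items[subscript]

    def __contains__(self, item):
        return item in self._items
```

Because __getitem__ raises IndexError past the end, for loops over a Deck work even though it defines no __iter__.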
It is also recommended that mappings provide the methods\nkeys()\n, values()\n, items()\n, get()\n, clear()\n,\nsetdefault()\n, pop()\n, popitem()\n, copy()\n, and\nupdate()\nbehaving similar to those for Python\u2019s standard dictionary\nobjects. The collections.abc\nmodule provides a\nMutableMapping\nabstract base class to help create those methods from a base set of\n__getitem__()\n, __setitem__()\n,\n__delitem__()\n, and keys()\n.\nMutable sequences should provide methods\nappend()\n, clear()\n, count()\n,\nextend()\n, index()\n, insert()\n,\npop()\n, remove()\n, and reverse()\n,\nlike Python standard list\nobjects.\nFinally, sequence types should implement addition (meaning concatenation) and\nmultiplication (meaning repetition) by defining the methods\n__add__()\n, __radd__()\n, __iadd__()\n,\n__mul__()\n, __rmul__()\nand __imul__()\ndescribed below; they should not define other numerical\noperators.\nIt is recommended that both mappings and sequences implement the\n__contains__()\nmethod to allow efficient use of the in\noperator; for\nmappings, in\nshould search the mapping\u2019s keys; for sequences, it should\nsearch through the values. It is further recommended that both mappings and\nsequences implement the __iter__()\nmethod to allow efficient iteration\nthrough the container; for mappings, __iter__()\nshould iterate\nthrough the object\u2019s keys; for sequences, it should iterate through the values.\n- object.__len__(self)\u00b6\nCalled to implement the built-in function\nlen()\n. Should return the length of the object, an integer>=\n0. Also, an object that doesn\u2019t define a__bool__()\nmethod and whose__len__()\nmethod returns zero is considered to be false in a Boolean context.CPython implementation detail: In CPython, the length is required to be at most\nsys.maxsize\n. If the length is larger thansys.maxsize\nsome features (such aslen()\n) may raiseOverflowError\n. 
To prevent raisingOverflowError\nby truth value testing, an object must define a__bool__()\nmethod.\n- object.__length_hint__(self)\u00b6\nCalled to implement\noperator.length_hint()\n. Should return an estimated length for the object (which may be greater or less than the actual length). The length must be an integer>=\n0. The return value may also beNotImplemented\n, which is treated the same as if the__length_hint__\nmethod didn\u2019t exist at all. This method is purely an optimization and is never required for correctness.Added in version 3.4.\nNote\nSlicing is done exclusively with the following three methods. A call like\na[1:2] = b\nis translated to\na[slice(1, 2, None)] = b\nand so forth. Missing slice items are always filled in with None\n.\n- object.__getitem__(self, subscript)\u00b6\nCalled to implement subscription, that is,\nself[subscript]\n. See Subscriptions and slicings for details on the syntax.There are two types of built-in objects that support subscription via\n__getitem__()\n:sequences, where subscript (also called index) should be an integer or a\nslice\nobject. See the sequence documentation for the expected behavior, including handlingslice\nobjects and negative indices.mappings, where subscript is also called the key. See mapping documentation for the expected behavior.\nIf subscript is of an inappropriate type,\n__getitem__()\nshould raiseTypeError\n. If subscript has an inappropriate value,__getitem__()\nshould raise anLookupError\nor one of its subclasses (IndexError\nfor sequences;KeyError\nfor mappings).Note\nThe sequence iteration protocol (used, for example, in\nfor\nloops), expects that anIndexError\nwill be raised for illegal indexes to allow proper detection of the end of a sequence.Note\nWhen subscripting a class, the special class method\n__class_getitem__()\nmay be called instead of__getitem__()\n. 
See __class_getitem__ versus __getitem__ for more details.\n- object.__setitem__(self, key, value)\u00b6\nCalled to implement assignment to\nself[key]\n. Same note as for__getitem__()\n. This should only be implemented for mappings if the objects support changes to the values for keys, or if new keys can be added, or for sequences if elements can be replaced. The same exceptions should be raised for improper key values as for the__getitem__()\nmethod.\n- object.__delitem__(self, key)\u00b6\nCalled to implement deletion of\nself[key]\n. Same note as for__getitem__()\n. This should only be implemented for mappings if the objects support removal of keys, or for sequences if elements can be removed from the sequence. The same exceptions should be raised for improper key values as for the__getitem__()\nmethod.\n- object.__missing__(self, key)\u00b6\nCalled by\ndict\n.__getitem__()\nto implementself[key]\nfor dict subclasses when key is not in the dictionary.\n- object.__iter__(self)\u00b6\nThis method is called when an iterator is required for a container. This method should return a new iterator object that can iterate over all the objects in the container. For mappings, it should iterate over the keys of the container.\n- object.__reversed__(self)\u00b6\nCalled (if present) by the\nreversed()\nbuilt-in to implement reverse iteration. It should return a new iterator object that iterates over all the objects in the container in reverse order.If the\n__reversed__()\nmethod is not provided, thereversed()\nbuilt-in will fall back to using the sequence protocol (__len__()\nand__getitem__()\n). Objects that support the sequence protocol should only provide__reversed__()\nif they can provide an implementation that is more efficient than the one provided byreversed()\n.\nThe membership test operators (in\nand not in\n) are normally\nimplemented as an iteration through a container. 
However, container objects can\nsupply the following special method with a more efficient implementation, which\nalso does not require the object be iterable.\n- object.__contains__(self, item)\u00b6\nCalled to implement membership test operators. Should return true if item is in self, false otherwise. For mapping objects, this should consider the keys of the mapping rather than the values or the key-item pairs.\nFor objects that don\u2019t define\n__contains__()\n, the membership test first tries iteration via__iter__()\n, then the old sequence iteration protocol via__getitem__()\n, see this section in the language reference.\n3.3.8. Emulating numeric types\u00b6\nThe following methods can be defined to emulate numeric objects. Methods corresponding to operations that are not supported by the particular kind of number implemented (e.g., bitwise operations for non-integral numbers) should be left undefined.\n- object.__add__(self, other)\u00b6\n- object.__sub__(self, other)\u00b6\n- object.__mul__(self, other)\u00b6\n- object.__matmul__(self, other)\u00b6\n- object.__truediv__(self, other)\u00b6\n- object.__floordiv__(self, other)\u00b6\n- object.__mod__(self, other)\u00b6\n- object.__divmod__(self, other)\u00b6\n- object.__pow__(self, other[, modulo])\u00b6\n- object.__lshift__(self, other)\u00b6\n- object.__rshift__(self, other)\u00b6\n- object.__and__(self, other)\u00b6\n- object.__xor__(self, other)\u00b6\n- object.__or__(self, other)\u00b6\nThese methods are called to implement the binary arithmetic operations (\n+\n,-\n,*\n,@\n,/\n,//\n,%\n,divmod()\n,pow()\n,**\n,<<\n,>>\n,&\n,^\n,|\n). For instance, to evaluate the expressionx + y\n, where x is an instance of a class that has an__add__()\nmethod,type(x).__add__(x, y)\nis called. The__divmod__()\nmethod should be the equivalent to using__floordiv__()\nand__mod__()\n; it should not be related to__truediv__()\n. 
Note that__pow__()\nshould be defined to accept an optional third argument if the three-argument version of the built-inpow()\nfunction is to be supported.If one of those methods does not support the operation with the supplied arguments, it should return\nNotImplemented\n.\n- object.__radd__(self, other)\u00b6\n- object.__rsub__(self, other)\u00b6\n- object.__rmul__(self, other)\u00b6\n- object.__rmatmul__(self, other)\u00b6\n- object.__rtruediv__(self, other)\u00b6\n- object.__rfloordiv__(self, other)\u00b6\n- object.__rmod__(self, other)\u00b6\n- object.__rdivmod__(self, other)\u00b6\n- object.__rpow__(self, other[, modulo])\u00b6\n- object.__rlshift__(self, other)\u00b6\n- object.__rrshift__(self, other)\u00b6\n- object.__rand__(self, other)\u00b6\n- object.__rxor__(self, other)\u00b6\n- object.__ror__(self, other)\u00b6\nThese methods are called to implement the binary arithmetic operations (\n+\n,-\n,*\n,@\n,/\n,//\n,%\n,divmod()\n,pow()\n,**\n,<<\n,>>\n,&\n,^\n,|\n) with reflected (swapped) operands. These functions are only called if the operands are of different types, when the left operand does not support the corresponding operation [3], or the right operand\u2019s class is derived from the left operand\u2019s class. [4] For instance, to evaluate the expressionx - y\n, where y is an instance of a class that has an__rsub__()\nmethod,type(y).__rsub__(y, x)\nis called iftype(x).__sub__(x, y)\nreturnsNotImplemented\nortype(y)\nis a subclass oftype(x)\n. [5]Note that\n__rpow__()\nshould be defined to accept an optional third argument if the three-argument version of the built-inpow()\nfunction is to be supported.Changed in version 3.14: Three-argument\npow()\nnow try calling__rpow__()\nif necessary. 
Previously it was only called in two-argumentpow()\nand the binary power operator.Note\nIf the right operand\u2019s type is a subclass of the left operand\u2019s type and that subclass provides a different implementation of the reflected method for the operation, this method will be called before the left operand\u2019s non-reflected method. This behavior allows subclasses to override their ancestors\u2019 operations.\n- object.__iadd__(self, other)\u00b6\n- object.__isub__(self, other)\u00b6\n- object.__imul__(self, other)\u00b6\n- object.__imatmul__(self, other)\u00b6\n- object.__itruediv__(self, other)\u00b6\n- object.__ifloordiv__(self, other)\u00b6\n- object.__imod__(self, other)\u00b6\n- object.__ipow__(self, other[, modulo])\u00b6\n- object.__ilshift__(self, other)\u00b6\n- object.__irshift__(self, other)\u00b6\n- object.__iand__(self, other)\u00b6\n- object.__ixor__(self, other)\u00b6\n- object.__ior__(self, other)\u00b6\nThese methods are called to implement the augmented arithmetic assignments (\n+=\n,-=\n,*=\n,@=\n,/=\n,//=\n,%=\n,**=\n,<<=\n,>>=\n,&=\n,^=\n,|=\n). These methods should attempt to do the operation in-place (modifying self) and return the result (which could be, but does not have to be, self). If a specific method is not defined, or if that method returnsNotImplemented\n, the augmented assignment falls back to the normal methods. For instance, if x is an instance of a class with an__iadd__()\nmethod,x += y\nis equivalent tox = x.__iadd__(y)\n. If__iadd__()\ndoes not exist, or ifx.__iadd__(y)\nreturnsNotImplemented\n,x.__add__(y)\nandy.__radd__(x)\nare considered, as with the evaluation ofx + y\n. 
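The forward, reflected, and in-place dispatch described above can be traced with a small invented class (a sketch, assuming only mixing with int is wanted):

```python
class Money:
    """Illustrative numeric type: forward, reflected, and in-place
    addition, using NotImplemented to decline unknown operands."""
    def __init__(self, cents):
        self.cents = cents

    def __add__(self, other):
        if isinstance(other, Money):
            return Money(self.cents + other.cents)
        if isinstance(other, int):
            return Money(self.cents + other)
        return NotImplemented  # let the other operand's method try

    def __radd__(self, other):
        # Reached for e.g. 3 + Money(10), after int.__add__ declines.
        return self.__add__(other)

    def __iadd__(self, other):
        if isinstance(other, Money):
            self.cents += other.cents  # true in-place update
            return self
        # Declining here makes `m += 5` fall back to __add__,
        # rebinding the name to a fresh Money object.
        return NotImplemented
```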
In certain situations, augmented assignment can result in unexpected errors (see Why does a_tuple[i] += [\u2018item\u2019] raise an exception when the addition works?), but this behavior is in fact part of the data model.\n- object.__neg__(self)\u00b6\n- object.__pos__(self)\u00b6\n- object.__abs__(self)\u00b6\n- object.__invert__(self)\u00b6\nCalled to implement the unary arithmetic operations (\n-\n,+\n,abs()\nand~\n).\n- object.__complex__(self)\u00b6\n- object.__int__(self)\u00b6\n- object.__float__(self)\u00b6\nCalled to implement the built-in functions\ncomplex()\n,int()\nandfloat()\n. Should return a value of the appropriate type.\n- object.__index__(self)\u00b6\nCalled to implement\noperator.index()\n, and whenever Python needs to losslessly convert the numeric object to an integer object (such as in slicing, or in the built-inbin()\n,hex()\nandoct()\nfunctions). Presence of this method indicates that the numeric object is an integer type. Must return an integer.If\n__int__()\n,__float__()\nand__complex__()\nare not defined then corresponding built-in functionsint()\n,float()\nandcomplex()\nfall back to__index__()\n.\n- object.__round__(self[, ndigits])\u00b6\n- object.__trunc__(self)\u00b6\n- object.__floor__(self)\u00b6\n- object.__ceil__(self)\u00b6\nCalled to implement the built-in function\nround()\nandmath\nfunctionstrunc()\n,floor()\nandceil()\n. Unless ndigits is passed to__round__()\nall these methods should return the value of the object truncated to anIntegral\n(typically anint\n).Changed in version 3.14:\nint()\nno longer delegates to the__trunc__()\nmethod.\n3.3.9. With Statement Context Managers\u00b6\nA context manager is an object that defines the runtime context to be\nestablished when executing a with\nstatement. The context manager\nhandles the entry into, and the exit from, the desired runtime context for the\nexecution of the block of code. 
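As a sketch of the protocol (the temporary_attr class is invented): a manager that swaps an attribute on entry and restores it on exit, letting exceptions propagate normally.

```python
class temporary_attr:
    """Illustrative context manager: set an attribute on obj to a new
    value for the duration of the with block, then restore it."""
    def __init__(self, obj, name, value):
        self.obj, self.name, self.value = obj, name, value

    def __enter__(self):
        self.saved = getattr(self.obj, self.name)
        setattr(self.obj, self.name, self.value)
        return self.obj  # bound by `with ... as target`

    def __exit__(self, exc_type, exc_value, traceback):
        setattr(self.obj, self.name, self.saved)
        # Returning None (falsy) means exceptions are not suppressed.
```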
Context managers are normally invoked using the\nwith\nstatement (described in section The with statement), but can also be\nused by directly invoking their methods.\nTypical uses of context managers include saving and restoring various kinds of global state, locking and unlocking resources, closing opened files, etc.\nFor more information on context managers, see Context Manager Types.\nThe object\nclass itself does not provide the context manager methods.\n- object.__enter__(self)\u00b6\nEnter the runtime context related to this object. The\nwith\nstatement will bind this method\u2019s return value to the target(s) specified in theas\nclause of the statement, if any.\n- object.__exit__(self, exc_type, exc_value, traceback)\u00b6\nExit the runtime context related to this object. The parameters describe the exception that caused the context to be exited. If the context was exited without an exception, all three arguments will be\nNone\n.If an exception is supplied, and the method wishes to suppress the exception (i.e., prevent it from being propagated), it should return a true value. Otherwise, the exception will be processed normally upon exit from this method.\nNote that\n__exit__()\nmethods should not reraise the passed-in exception; this is the caller\u2019s responsibility.\n3.3.10. Customizing positional arguments in class pattern matching\u00b6\nWhen using a class name in a pattern, positional arguments in the pattern are not\nallowed by default, i.e. case MyClass(x, y)\nis typically invalid without special\nsupport in MyClass\n. To be able to use that kind of pattern, the class needs to\ndefine a __match_args__ attribute.\n- object.__match_args__\u00b6\nThis class variable can be assigned a tuple of strings. When this class is used in a class pattern with positional arguments, each positional argument will be converted into a keyword argument, using the corresponding value in __match_args__ as the keyword. 
The absence of this attribute is equivalent to setting it to\n()\n.\nFor example, if MyClass.__match_args__\nis (\"left\", \"center\", \"right\")\nthat means\nthat case MyClass(x, y)\nis equivalent to case MyClass(left=x, center=y)\n. Note\nthat the number of arguments in the pattern must be smaller than or equal to the number\nof elements in __match_args__; if it is larger, the pattern match attempt will raise\na TypeError\n.\nAdded in version 3.10.\nSee also\n- PEP 634 - Structural Pattern Matching\nThe specification for the Python\nmatch\nstatement.\n3.3.11. Emulating buffer types\u00b6\nThe buffer protocol provides a way for Python\nobjects to expose efficient access to a low-level memory array. This protocol\nis implemented by builtin types such as bytes\nand memoryview\n,\nand third-party libraries may define additional buffer types.\nWhile buffer types are usually implemented in C, it is also possible to implement the protocol in Python.\n- object.__buffer__(self, flags)\u00b6\nCalled when a buffer is requested from self (for example, by the\nmemoryview\nconstructor). The flags argument is an integer representing the kind of buffer requested, affecting for example whether the returned buffer is read-only or writable.inspect.BufferFlags\nprovides a convenient way to interpret the flags. The method must return amemoryview\nobject.\n- object.__release_buffer__(self, buffer)\u00b6\nCalled when a buffer is no longer needed. The buffer argument is a\nmemoryview\nobject that was previously returned by__buffer__()\n. The method must release any resources associated with the buffer. This method should returnNone\n. Buffer objects that do not need to perform any cleanup are not required to implement this method.\nAdded in version 3.12.\nSee also\n- PEP 688 - Making the buffer protocol accessible in Python\nIntroduces the Python\n__buffer__\nand__release_buffer__\nmethods.collections.abc.Buffer\nABC for buffer types.\n3.3.12. 
Annotations\u00b6\nFunctions, classes, and modules may contain annotations, which are a way to associate information (usually type hints) with a symbol.\n- object.__annotations__\u00b6\nThis attribute contains the annotations for an object. It is lazily evaluated, so accessing the attribute may execute arbitrary code and raise exceptions. If evaluation is successful, the attribute is set to a dictionary mapping from variable names to annotations.\nChanged in version 3.14: Annotations are now lazily evaluated.\n- object.__annotate__(format)\u00b6\nAn annotate function. Returns a new dictionary object mapping attribute/parameter names to their annotation values.\nTakes a format parameter specifying the format in which annotations values should be provided. It must be a member of the\nannotationlib.Format\nenum, or an integer with a value corresponding to a member of the enum.If an annotate function doesn\u2019t support the requested format, it must raise\nNotImplementedError\n. Annotate functions must always supportVALUE\nformat; they must not raiseNotImplementedError()\nwhen called with this format.When called with\nVALUE\nformat, an annotate function may raiseNameError\n; it must not raiseNameError\nwhen called requesting any other format.If an object does not have any annotations,\n__annotate__\nshould preferably be set toNone\n(it can\u2019t be deleted), rather than set to a function that returns an empty dict.Added in version 3.14.\nSee also\n- PEP 649 \u2014 Deferred evaluation of annotation using descriptors\nIntroduces lazy evaluation of annotations and the\n__annotate__\nfunction.\n3.3.13. Special method lookup\u00b6\nFor custom classes, implicit invocations of special methods are only guaranteed to work correctly if defined on an object\u2019s type, not in the object\u2019s instance dictionary. That behaviour is the reason why the following code raises an exception:\n>>> class C:\n... 
pass\n...\n>>> c = C()\n>>> c.__len__ = lambda: 5\n>>> len(c)\nTraceback (most recent call last):\n  File \"<stdin>\", line 1, in <module>\nTypeError: object of type 'C' has no len()\nThe rationale behind this behaviour lies with a number of special methods such\nas __hash__()\nand __repr__()\nthat are implemented\nby all objects,\nincluding type objects. If the implicit lookup of these methods used the\nconventional lookup process, they would fail when invoked on the type object\nitself:\n>>> 1 .__hash__() == hash(1)\nTrue\n>>> int.__hash__() == hash(int)\nTraceback (most recent call last):\n  File \"<stdin>\", line 1, in <module>\nTypeError: descriptor '__hash__' of 'int' object needs an argument\nIncorrectly attempting to invoke an unbound method of a class in this way is sometimes referred to as \u2018metaclass confusion\u2019, and is avoided by bypassing the instance when looking up special methods:\n>>> type(1).__hash__(1) == hash(1)\nTrue\n>>> type(int).__hash__(int) == hash(int)\nTrue\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses the\n__getattribute__()\nmethod even of the object\u2019s metaclass:\n>>> class Meta(type):\n...     def __getattribute__(*args):\n...         print(\"Metaclass getattribute invoked\")\n...         return type.__getattribute__(*args)\n...\n>>> class C(object, metaclass=Meta):\n...     def __len__(self):\n...         return 10\n...     def __getattribute__(*args):\n...         print(\"Class getattribute invoked\")\n... 
return object.__getattribute__(*args)\n...\n>>> c = C()\n>>> c.__len__() # Explicit lookup via instance\nClass getattribute invoked\n10\n>>> type(c).__len__(c) # Explicit lookup via type\nMetaclass getattribute invoked\n10\n>>> len(c) # Implicit lookup\n10\nBypassing the __getattribute__()\nmachinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method must be set on the class\nobject itself in order to be consistently invoked by the interpreter).\n3.4. Coroutines\u00b6\n3.4.1. Awaitable Objects\u00b6\nAn awaitable object generally implements an __await__()\nmethod.\nCoroutine objects returned from async def\nfunctions\nare awaitable.\nNote\nThe generator iterator objects returned from generators\ndecorated with types.coroutine()\nare also awaitable, but they do not implement __await__()\n.\n- object.__await__(self)\u00b6\nMust return an iterator. Should be used to implement awaitable objects. For instance,\nasyncio.Future\nimplements this method to be compatible with theawait\nexpression. Theobject\nclass itself is not awaitable and does not provide this method.\nAdded in version 3.5.\nSee also\nPEP 492 for additional information about awaitable objects.\n3.4.2. Coroutine Objects\u00b6\nCoroutine objects are awaitable objects.\nA coroutine\u2019s execution can be controlled by calling __await__()\nand\niterating over the result. When the coroutine has finished executing and\nreturns, the iterator raises StopIteration\n, and the exception\u2019s\nvalue\nattribute holds the return value. If the\ncoroutine raises an exception, it is propagated by the iterator. Coroutines\nshould not directly raise unhandled StopIteration\nexceptions.\nCoroutines also have the methods listed below, which are analogous to those of generators (see Generator-iterator methods). 
However, unlike generators, coroutines do not directly support iteration.\nChanged in version 3.5.2: It is a RuntimeError\nto await on a coroutine more than once.\n- coroutine.send(value)\u00b6\nStarts or resumes execution of the coroutine. If value is\nNone\n, this is equivalent to advancing the iterator returned by__await__()\n. If value is notNone\n, this method delegates to thesend()\nmethod of the iterator that caused the coroutine to suspend. The result (return value,StopIteration\n, or other exception) is the same as when iterating over the__await__()\nreturn value, described above.\n- coroutine.throw(value)\u00b6\n- coroutine.throw(type[, value[, traceback]])\nRaises the specified exception in the coroutine. This method delegates to the\nthrow()\nmethod of the iterator that caused the coroutine to suspend, if it has such a method. Otherwise, the exception is raised at the suspension point. The result (return value,StopIteration\n, or other exception) is the same as when iterating over the__await__()\nreturn value, described above. If the exception is not caught in the coroutine, it propagates back to the caller.Changed in version 3.12: The second signature (type[, value[, traceback]]) is deprecated and may be removed in a future version of Python.\n- coroutine.close()\u00b6\nCauses the coroutine to clean itself up and exit. If the coroutine is suspended, this method first delegates to the\nclose()\nmethod of the iterator that caused the coroutine to suspend, if it has such a method. Then it raisesGeneratorExit\nat the suspension point, causing the coroutine to immediately clean itself up. Finally, the coroutine is marked as having finished executing, even if it was never started.Coroutine objects are automatically closed using the above process when they are about to be destroyed.\n3.4.3. 
Asynchronous Iterators\u00b6\nAn asynchronous iterator can call asynchronous code in\nits __anext__\nmethod.\nAsynchronous iterators can be used in an async for\nstatement.\nThe object\nclass itself does not provide these methods.\n- object.__aiter__(self)\u00b6\nMust return an asynchronous iterator object.\n- object.__anext__(self)\u00b6\nMust return an awaitable resulting in a next value of the iterator. Should raise a\nStopAsyncIteration\nerror when the iteration is over.\nAn example of an asynchronous iterable object:\nclass Reader:\nasync def readline(self):\n...\ndef __aiter__(self):\nreturn self\nasync def __anext__(self):\nval = await self.readline()\nif val == b'':\nraise StopAsyncIteration\nreturn val\nAdded in version 3.5.\nChanged in version 3.7: Prior to Python 3.7, __aiter__()\ncould return an awaitable\nthat would resolve to an\nasynchronous iterator.\nStarting with Python 3.7, __aiter__()\nmust return an\nasynchronous iterator object. Returning anything else\nwill result in a TypeError\nerror.\n3.4.4. 
Asynchronous Context Managers\u00b6\nAn asynchronous context manager is a context manager that is able to\nsuspend execution in its __aenter__\nand __aexit__\nmethods.\nAsynchronous context managers can be used in an async with\nstatement.\nThe object\nclass itself does not provide these methods.\n- object.__aenter__(self)\u00b6\nSemantically similar to\n__enter__()\n, the only difference being that it must return an awaitable.\n- object.__aexit__(self, exc_type, exc_value, traceback)\u00b6\nSemantically similar to\n__exit__()\n, the only difference being that it must return an awaitable.\nAn example of an asynchronous context manager class:\nclass AsyncContextManager:\nasync def __aenter__(self):\nawait log('entering context')\nasync def __aexit__(self, exc_type, exc, tb):\nawait log('exiting context')\nAdded in version 3.5.\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 27042}
{"url": "https://docs.python.org/3/library/windows.html", "title": "MS Windows Specific Services", "content": "msvcrt \u2014 Useful routines from the MS VC++ runtime\nmsvcrt\nThis chapter describes modules that are only available on MS Windows platforms.\nwinreg\nwinsound", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 51}
{"url": "https://docs.python.org/3/c-api/memoryview.html", "title": "MemoryView objects", "content": "MemoryView objects\u00b6\nA memoryview\nobject exposes the C level buffer interface as a Python object which can then be passed around like\nany other object.\n-\nPyTypeObject PyMemoryView_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python memoryview type. This is the same object asmemoryview\nin the Python layer.\n-\nPyObject *PyMemoryView_FromObject(PyObject *obj)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a memoryview object from an object that provides the buffer interface. If obj supports writable buffer exports, the memoryview object will be read/write, otherwise it may be either read-only or read/write at the discretion of the exporter.\n-\nPyBUF_READ\u00b6\n- Part of the Stable ABI since version 3.11.\nFlag to request a readonly buffer.\n-\nPyBUF_WRITE\u00b6\n- Part of the Stable ABI since version 3.11.\nFlag to request a writable buffer.\n-\nPyObject *PyMemoryView_FromMemory(char *mem, Py_ssize_t size, int flags)\u00b6\n- Return value: New reference. 
Part of the Stable ABI since version 3.7.\nCreate a memoryview object using mem as the underlying buffer. flags can be one of\nPyBUF_READ\norPyBUF_WRITE\n.Added in version 3.3.\n-\nPyObject *PyMemoryView_FromBuffer(const Py_buffer *view)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.11.\nCreate a memoryview object wrapping the given buffer structure view. For simple byte buffers,\nPyMemoryView_FromMemory()\nis the preferred function.\n-\nPyObject *PyMemoryView_GetContiguous(PyObject *obj, int buffertype, char order)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a memoryview object to a contiguous chunk of memory (in either \u2018C\u2019 or \u2018F\u2019ortran order) from an object that defines the buffer interface. If memory is contiguous, the memoryview object points to the original memory. Otherwise, a copy is made and the memoryview points to a new bytes object.\nbuffertype can be one of\nPyBUF_READ\norPyBUF_WRITE\n.\n-\nint PyMemoryView_Check(PyObject *obj)\u00b6\nReturn true if the object obj is a memoryview object. It is not currently allowed to create subclasses of\nmemoryview\n. This function always succeeds.\n-\nPy_buffer *PyMemoryView_GET_BUFFER(PyObject *mview)\u00b6\nReturn a pointer to the memoryview\u2019s private copy of the exporter\u2019s buffer. mview must be a memoryview instance; this macro doesn\u2019t check its type, you must do it yourself or you will risk crashes.\n-\nPyObject *PyMemoryView_GET_BASE(PyObject *mview)\u00b6\nReturn either a pointer to the exporting object that the memoryview is based on or\nNULL\nif the memoryview has been created by one of the functionsPyMemoryView_FromMemory()\norPyMemoryView_FromBuffer()\n. 
mview must be a memoryview instance.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 666} +{"url": "https://docs.python.org/3/library/asyncio-policy.html", "title": "Policies", "content": "Policies\u00b6\nWarning\nPolicies are deprecated and will be removed in Python 3.16.\nUsers are encouraged to use the asyncio.run()\nfunction\nor the asyncio.Runner\nwith loop_factory to use\nthe desired loop implementation.\nAn event loop policy is a global object used to get and set the current event loop, as well as create new event loops. The default policy can be replaced with built-in alternatives to use different event loop implementations, or substituted by a custom policy that can override these behaviors.\nThe policy object gets and sets a separate event loop per context. This is per-thread by default, though custom policies could define context differently.\nCustom event loop policies can control the behavior of\nget_event_loop()\n, set_event_loop()\n, and new_event_loop()\n.\nPolicy objects should implement the APIs defined\nin the AbstractEventLoopPolicy\nabstract base class.\nGetting and Setting the Policy\u00b6\nThe following functions can be used to get and set the policy for the current process:\n- asyncio.get_event_loop_policy()\u00b6\nReturn the current process-wide policy.\nDeprecated since version 3.14: The\nget_event_loop_policy()\nfunction is deprecated and will be removed in Python 3.16.\n- asyncio.set_event_loop_policy(policy)\u00b6\nSet the current process-wide policy to policy.\nIf policy is set to\nNone\n, the default policy is restored.Deprecated since version 3.14: The\nset_event_loop_policy()\nfunction is deprecated and will be removed in Python 3.16.\nPolicy Objects\u00b6\nThe abstract event loop policy base class is defined as follows:\n- class asyncio.AbstractEventLoopPolicy\u00b6\nAn abstract base class for asyncio policies.\n- get_event_loop()\u00b6\nGet the event loop for the current context.\nReturn an 
event loop object implementing the\nAbstractEventLoop\ninterface.This method should never return\nNone\n.Changed in version 3.6.\n- set_event_loop(loop)\u00b6\nSet the event loop for the current context to loop.\n- new_event_loop()\u00b6\nCreate and return a new event loop object.\nThis method should never return\nNone\n.\nDeprecated since version 3.14: The\nAbstractEventLoopPolicy\nclass is deprecated and will be removed in Python 3.16.\nasyncio ships with the following built-in policies:\n- class asyncio.DefaultEventLoopPolicy\u00b6\nThe default asyncio policy. Uses\nSelectorEventLoop\non Unix andProactorEventLoop\non Windows.There is no need to install the default policy manually. asyncio is configured to use the default policy automatically.\nChanged in version 3.8: On Windows,\nProactorEventLoop\nis now used by default.Changed in version 3.14: The\nget_event_loop()\nmethod of the default asyncio policy now raises aRuntimeError\nif there is no set event loop.Deprecated since version 3.14: The\nDefaultEventLoopPolicy\nclass is deprecated and will be removed in Python 3.16.\n- class asyncio.WindowsSelectorEventLoopPolicy\u00b6\nAn alternative event loop policy that uses the\nSelectorEventLoop\nevent loop implementation.Availability: Windows.\nDeprecated since version 3.14: The\nWindowsSelectorEventLoopPolicy\nclass is deprecated and will be removed in Python 3.16.\n- class asyncio.WindowsProactorEventLoopPolicy\u00b6\nAn alternative event loop policy that uses the\nProactorEventLoop\nevent loop implementation.Availability: Windows.\nDeprecated since version 3.14: The\nWindowsProactorEventLoopPolicy\nclass is deprecated and will be removed in Python 3.16.\nCustom Policies\u00b6\nTo implement a new event loop policy, it is recommended to subclass\nDefaultEventLoopPolicy\nand override the methods for which\ncustom behavior is wanted, e.g.:\nclass MyEventLoopPolicy(asyncio.DefaultEventLoopPolicy):\ndef get_event_loop(self):\n\"\"\"Get the event loop.\nThis may be 
None or an instance of EventLoop.\n\"\"\"\nloop = super().get_event_loop()\n# Do something with loop ...\nreturn loop\nasyncio.set_event_loop_policy(MyEventLoopPolicy())", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 942}
{"url": "https://docs.python.org/3/library/imghdr.html", "title": "imghdr \u2014 Determine the type of an image", "content": "imghdr\n\u2014 Determine the type of an image\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nPossible replacements are third-party libraries from PyPI: filetype, puremagic, or python-magic. These are not supported or maintained by the Python core team.\nThe last version of Python that provided the imghdr\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 125}
{"url": "https://docs.python.org/3/c-api/contextvars.html", "title": "Context Variables Objects", "content": "Context Variables Objects\u00b6\nAdded in version 3.7.\nChanged in version 3.7.1:\nNote\nIn Python 3.7.1 the signatures of all context variables\nC APIs were changed to use PyObject\npointers instead\nof PyContext\n, PyContextVar\n, and\nPyContextToken\n, e.g.:\n// in 3.7.0:\nPyContext *PyContext_New(void);\n// in 3.7.1+:\nPyObject *PyContext_New(void);\nSee bpo-34762 for more details.\nThis section details the public C API for the contextvars\nmodule.\n-\ntype PyContext\u00b6\nThe C structure used to represent a\ncontextvars.Context\nobject.\n-\ntype PyContextVar\u00b6\nThe C structure used to represent a\ncontextvars.ContextVar\nobject.\n-\ntype PyContextToken\u00b6\nThe C structure used to represent a\ncontextvars.Token\nobject.\n-\nPyTypeObject PyContext_Type\u00b6\nThe type object representing the context 
type.\n-\nPyTypeObject PyContextVar_Type\u00b6\nThe type object representing the context variable type.\n-\nPyTypeObject PyContextToken_Type\u00b6\nThe type object representing the context variable token type.\nType-check macros:\n-\nint PyContext_CheckExact(PyObject *o)\u00b6\nReturn true if o is of type\nPyContext_Type\n. o must not beNULL\n. This function always succeeds.\n-\nint PyContextVar_CheckExact(PyObject *o)\u00b6\nReturn true if o is of type\nPyContextVar_Type\n. o must not beNULL\n. This function always succeeds.\n-\nint PyContextToken_CheckExact(PyObject *o)\u00b6\nReturn true if o is of type\nPyContextToken_Type\n. o must not beNULL\n. This function always succeeds.\nContext object management functions:\n-\nPyObject *PyContext_New(void)\u00b6\n- Return value: New reference.\nCreate a new empty context object. Returns\nNULL\nif an error has occurred.\n-\nPyObject *PyContext_Copy(PyObject *ctx)\u00b6\n- Return value: New reference.\nCreate a shallow copy of the passed ctx context object. Returns\nNULL\nif an error has occurred.\n-\nPyObject *PyContext_CopyCurrent(void)\u00b6\n- Return value: New reference.\nCreate a shallow copy of the current thread context. Returns\nNULL\nif an error has occurred.\n-\nint PyContext_Enter(PyObject *ctx)\u00b6\nSet ctx as the current context for the current thread. Returns\n0\non success, and-1\non error.\n-\nint PyContext_Exit(PyObject *ctx)\u00b6\nDeactivate the ctx context and restore the previous context as the current context for the current thread. Returns\n0\non success, and-1\non error.\n-\nint PyContext_AddWatcher(PyContext_WatchCallback callback)\u00b6\nRegister callback as a context object watcher for the current interpreter. Return an ID which may be passed to\nPyContext_ClearWatcher()\n. In case of error (e.g. 
no more watcher IDs available), return-1\nand set an exception.Added in version 3.14.\n-\nint PyContext_ClearWatcher(int watcher_id)\u00b6\nClear watcher identified by watcher_id previously returned from\nPyContext_AddWatcher()\nfor the current interpreter. Return0\non success, or-1\nand set an exception on error (e.g. if the given watcher_id was never registered.)Added in version 3.14.\n-\ntype PyContextEvent\u00b6\nEnumeration of possible context object watcher events:\nPy_CONTEXT_SWITCHED\n: The current context has switched to a different context. The object passed to the watch callback is the now-currentcontextvars.Context\nobject, or None if no context is current.\nAdded in version 3.14.\n-\ntypedef int (*PyContext_WatchCallback)(PyContextEvent event, PyObject *obj)\u00b6\nContext object watcher callback function. The object passed to the callback is event-specific; see\nPyContextEvent\nfor details.If the callback returns with an exception set, it must return\n-1\n; this exception will be printed as an unraisable exception usingPyErr_FormatUnraisable()\n. Otherwise it should return0\n.There may already be a pending exception set on entry to the callback. In this case, the callback should return\n0\nwith the same exception still set. This means the callback may not call any other API that can set an exception unless it saves and clears the exception state first, and restores it before returning.Added in version 3.14.\nContext variable functions:\n-\nPyObject *PyContextVar_New(const char *name, PyObject *def)\u00b6\n- Return value: New reference.\nCreate a new\nContextVar\nobject. The name parameter is used for introspection and debug purposes. The def parameter specifies a default value for the context variable, orNULL\nfor no default. If an error has occurred, this function returnsNULL\n.\n-\nint PyContextVar_Get(PyObject *var, PyObject *default_value, PyObject **value)\u00b6\nGet the value of a context variable. 
Returns\n-1\nif an error has occurred during lookup, and0\nif no error occurred, whether or not a value was found.If the context variable was found, value will be a pointer to it. If the context variable was not found, value will point to:\ndefault_value, if not\nNULL\n;the default value of var, if not\nNULL\n;NULL\nExcept for\nNULL\n, the function returns a new reference.\n-\nPyObject *PyContextVar_Set(PyObject *var, PyObject *value)\u00b6\n- Return value: New reference.\nSet the value of var to value in the current context. Returns a new token object for this change, or\nNULL\nif an error has occurred.\n-\nint PyContextVar_Reset(PyObject *var, PyObject *token)\u00b6\nReset the state of the var context variable to that it was in before\nPyContextVar_Set()\nthat returned the token was called. This function returns0\non success and-1\non error.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1289} +{"url": "https://docs.python.org/3/tutorial/modules.html", "title": "Modules", "content": "6. Modules\u00b6\nIf you quit from the Python interpreter and enter it again, the definitions you have made (functions and variables) are lost. Therefore, if you want to write a somewhat longer program, you are better off using a text editor to prepare the input for the interpreter and running it with that file as input instead. This is known as creating a script. As your program gets longer, you may want to split it into several files for easier maintenance. You may also want to use a handy function that you\u2019ve written in several programs without copying its definition into each program.\nTo support this, Python has a way to put definitions in a file and use them in a script or in an interactive instance of the interpreter. 
Such a file is called a module; definitions from a module can be imported into other modules or into the main module (the collection of variables that you have access to in a script executed at the top level and in calculator mode).\nA module is a file containing Python definitions and statements. The file name\nis the module name with the suffix .py\nappended. Within a module, the\nmodule\u2019s name (as a string) is available as the value of the global variable\n__name__\n. For instance, use your favorite text editor to create a file\ncalled fibo.py\nin the current directory with the following contents:\n# Fibonacci numbers module\ndef fib(n):\n\"\"\"Write Fibonacci series up to n.\"\"\"\na, b = 0, 1\nwhile a < n:\nprint(a, end=' ')\na, b = b, a+b\nprint()\ndef fib2(n):\n\"\"\"Return Fibonacci series up to n.\"\"\"\nresult = []\na, b = 0, 1\nwhile a < n:\nresult.append(a)\na, b = b, a+b\nreturn result\nNow enter the Python interpreter and import this module with the following command:\n>>> import fibo\nThis does not add the names of the functions defined in fibo\ndirectly to\nthe current namespace (see Python Scopes and Namespaces for more details);\nit only adds the module name fibo\nthere. Using\nthe module name you can access the functions:\n>>> fibo.fib(1000)\n0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987\n>>> fibo.fib2(100)\n[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]\n>>> fibo.__name__\n'fibo'\nIf you intend to use a function often you can assign it to a local name:\n>>> fib = fibo.fib\n>>> fib(500)\n0 1 1 2 3 5 8 13 21 34 55 89 144 233 377\n6.1. More on Modules\u00b6\nA module can contain executable statements as well as function definitions. These statements are intended to initialize the module. They are executed only the first time the module name is encountered in an import statement. 
[1] (They are also run if the file is executed as a script.)\nEach module has its own private namespace, which is used as the global namespace\nby all functions defined in the module. Thus, the author of a module can\nuse global variables in the module without worrying about accidental clashes\nwith a user\u2019s global variables. On the other hand, if you know what you are\ndoing you can touch a module\u2019s global variables with the same notation used to\nrefer to its functions, modname.itemname\n.\nModules can import other modules. It is customary but not required to place all\nimport\nstatements at the beginning of a module (or script, for that\nmatter). The imported module names, if placed at the top level of a module\n(outside any functions or classes), are added to the module\u2019s global namespace.\nThere is a variant of the import\nstatement that imports names from a\nmodule directly into the importing module\u2019s namespace. For example:\n>>> from fibo import fib, fib2\n>>> fib(500)\n0 1 1 2 3 5 8 13 21 34 55 89 144 233 377\nThis does not introduce the module name from which the imports are taken in the\nlocal namespace (so in the example, fibo\nis not defined).\nThere is even a variant to import all names that a module defines:\n>>> from fibo import *\n>>> fib(500)\n0 1 1 2 3 5 8 13 21 34 55 89 144 233 377\nThis imports all names except those beginning with an underscore (_\n).\nIn most cases Python programmers do not use this facility since it introduces\nan unknown set of names into the interpreter, possibly hiding some things\nyou have already defined.\nNote that in general the practice of importing *\nfrom a module or package is\nfrowned upon, since it often causes poorly readable code. 
However, it is okay to\nuse it to save typing in interactive sessions.\nIf the module name is followed by as\n, then the name\nfollowing as\nis bound directly to the imported module.\n>>> import fibo as fib\n>>> fib.fib(500)\n0 1 1 2 3 5 8 13 21 34 55 89 144 233 377\nThis is effectively importing the module in the same way that import fibo\nwill do, with the only difference of it being available as fib\n.\nIt can also be used when utilising from\nwith similar effects:\n>>> from fibo import fib as fibonacci\n>>> fibonacci(500)\n0 1 1 2 3 5 8 13 21 34 55 89 144 233 377\nNote\nFor efficiency reasons, each module is only imported once per interpreter\nsession. Therefore, if you change your modules, you must restart the\ninterpreter \u2013 or, if it\u2019s just one module you want to test interactively,\nuse importlib.reload()\n, e.g. import importlib;\nimportlib.reload(modulename)\n.\n6.1.1. Executing modules as scripts\u00b6\nWhen you run a Python module with\npython fibo.py \nthe code in the module will be executed, just as if you imported it, but with\nthe __name__\nset to \"__main__\"\n. That means that by adding this code at\nthe end of your module:\nif __name__ == \"__main__\":\nimport sys\nfib(int(sys.argv[1]))\nyou can make the file usable as a script as well as an importable module, because the code that parses the command line only runs if the module is executed as the \u201cmain\u201d file:\n$ python fibo.py 50\n0 1 1 2 3 5 8 13 21 34\nIf the module is imported, the code is not run:\n>>> import fibo\n>>>\nThis is often used either to provide a convenient user interface to a module, or for testing purposes (running the module as a script executes a test suite).\n6.1.2. The Module Search Path\u00b6\nWhen a module named spam\nis imported, the interpreter first searches for\na built-in module with that name. These module names are listed in\nsys.builtin_module_names\n. 
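The list of built-in module names mentioned above can be inspected directly; `sys` itself is always compiled into the interpreter, so it is a safe name to check for:

```python
# Built-in modules (compiled into the interpreter) are listed in
# sys.builtin_module_names; 'sys' is always among them.
import sys

print('sys' in sys.builtin_module_names)  # True
```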
If not found, it then searches for a file\nnamed spam.py\nin a list of directories given by the variable\nsys.path\n. sys.path\nis initialized from these locations:\nThe directory containing the input script (or the current directory when no file is specified).\nPYTHONPATH\n(a list of directory names, with the same syntax as the shell variablePATH\n).The installation-dependent default (by convention including a\nsite-packages\ndirectory, handled by thesite\nmodule).\nMore details are at The initialization of the sys.path module search path.\nNote\nOn file systems which support symlinks, the directory containing the input script is calculated after the symlink is followed. In other words the directory containing the symlink is not added to the module search path.\nAfter initialization, Python programs can modify sys.path\n. The\ndirectory containing the script being run is placed at the beginning of the\nsearch path, ahead of the standard library path. This means that scripts in that\ndirectory will be loaded instead of modules of the same name in the library\ndirectory. This is an error unless the replacement is intended. See section\nStandard Modules for more information.\n6.1.3. \u201cCompiled\u201d Python files\u00b6\nTo speed up loading modules, Python caches the compiled version of each module\nin the __pycache__\ndirectory under the name module.version.pyc\n,\nwhere the version encodes the format of the compiled file; it generally contains\nthe Python version number. For example, in CPython release 3.3 the compiled\nversion of spam.py would be cached as __pycache__/spam.cpython-33.pyc\n. This\nnaming convention allows compiled modules from different releases and different\nversions of Python to coexist.\nPython checks the modification date of the source against the compiled version to see if it\u2019s out of date and needs to be recompiled. This is a completely automatic process. 
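The cache-file naming scheme described above can be computed with `importlib.util.cache_from_source()`. The `spam.py` name follows the example in the text; the exact interpreter tag in the result depends on the running Python:

```python
# Map a source file name to its bytecode cache path under __pycache__.
# The exact tag (e.g. cpython-313) varies with the interpreter version.
import importlib.util

cached = importlib.util.cache_from_source("spam.py")
print(cached)  # e.g. __pycache__/spam.cpython-313.pyc
```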
Also, the compiled modules are platform-independent, so the same library can be shared among systems with different architectures.\nPython does not check the cache in two circumstances. First, it always recompiles and does not store the result for the module that\u2019s loaded directly from the command line. Second, it does not check the cache if there is no source module. To support a non-source (compiled only) distribution, the compiled module must be in the source directory, and there must not be a source module.\nSome tips for experts:\nYou can use the\n-O\nor-OO\nswitches on the Python command to reduce the size of a compiled module. The-O\nswitch removes assert statements, the-OO\nswitch removes both assert statements and __doc__ strings. Since some programs may rely on having these available, you should only use this option if you know what you\u2019re doing. \u201cOptimized\u201d modules have anopt-\ntag and are usually smaller. Future releases may change the effects of optimization.A program doesn\u2019t run any faster when it is read from a\n.pyc\nfile than when it is read from a.py\nfile; the only thing that\u2019s faster about.pyc\nfiles is the speed with which they are loaded.The module\ncompileall\ncan create .pyc files for all modules in a directory.There is more detail on this process, including a flow chart of the decisions, in PEP 3147.\n6.2. Standard Modules\u00b6\nPython comes with a library of standard modules, described in a separate\ndocument, the Python Library Reference (\u201cLibrary Reference\u201d hereafter). Some\nmodules are built into the interpreter; these provide access to operations that\nare not part of the core of the language but are nevertheless built in, either\nfor efficiency or to provide access to operating system primitives such as\nsystem calls. The set of such modules is a configuration option which also\ndepends on the underlying platform. For example, the winreg\nmodule is only\nprovided on Windows systems. 
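The `compileall` tip above can be tried out against a throwaway directory; this sketch uses a temporary directory rather than a real project:

```python
# Sketch: pre-compile every module in a directory with compileall,
# then look for the resulting __pycache__ entries. Uses a temp dir.
import compileall
import os
import tempfile

tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "mod.py"), "w") as f:
    f.write("X = 1\n")

ok = compileall.compile_dir(tmp, quiet=1)  # returns a true value on success
pyc_files = os.listdir(os.path.join(tmp, "__pycache__"))
print(ok, pyc_files)
```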
One particular module deserves some attention:\nsys\n, which is built into every Python interpreter. The variables\nsys.ps1\nand sys.ps2\ndefine the strings used as primary and secondary\nprompts:\n>>> import sys\n>>> sys.ps1\n'>>> '\n>>> sys.ps2\n'... '\n>>> sys.ps1 = 'C> '\nC> print('Yuck!')\nYuck!\nC>\nThese two variables are only defined if the interpreter is in interactive mode.\nThe variable sys.path\nis a list of strings that determines the interpreter\u2019s\nsearch path for modules. It is initialized to a default path taken from the\nenvironment variable PYTHONPATH\n, or from a built-in default if\nPYTHONPATH\nis not set. You can modify it using standard list\noperations:\n>>> import sys\n>>> sys.path.append('/ufs/guido/lib/python')\n6.3. The dir()\nFunction\u00b6\nThe built-in function dir()\nis used to find out which names a module\ndefines. It returns a sorted list of strings:\n>>> import fibo, sys\n>>> dir(fibo)\n['__name__', 'fib', 'fib2']\n>>> dir(sys)\n['__breakpointhook__', '__displayhook__', '__doc__', '__excepthook__',\n'__interactivehook__', '__loader__', '__name__', '__package__', '__spec__',\n'__stderr__', '__stdin__', '__stdout__', '__unraisablehook__',\n'_clear_type_cache', '_current_frames', '_debugmallocstats', '_framework',\n'_getframe', '_git', '_home', '_xoptions', 'abiflags', 'addaudithook',\n'api_version', 'argv', 'audit', 'base_exec_prefix', 'base_prefix',\n'breakpointhook', 'builtin_module_names', 'byteorder', 'call_tracing',\n'callstats', 'copyright', 'displayhook', 'dont_write_bytecode', 'exc_info',\n'excepthook', 'exec_prefix', 'executable', 'exit', 'flags', 'float_info',\n'float_repr_style', 'get_asyncgen_hooks', 'get_coroutine_origin_tracking_depth',\n'getallocatedblocks', 'getdefaultencoding', 'getdlopenflags',\n'getfilesystemencodeerrors', 'getfilesystemencoding', 'getprofile',\n'getrecursionlimit', 'getrefcount', 'getsizeof', 'getswitchinterval',\n'gettrace', 'hash_info', 'hexversion', 'implementation', 
'int_info',\n'intern', 'is_finalizing', 'last_traceback', 'last_type', 'last_value',\n'maxsize', 'maxunicode', 'meta_path', 'modules', 'path', 'path_hooks',\n'path_importer_cache', 'platform', 'prefix', 'ps1', 'ps2', 'pycache_prefix',\n'set_asyncgen_hooks', 'set_coroutine_origin_tracking_depth', 'setdlopenflags',\n'setprofile', 'setrecursionlimit', 'setswitchinterval', 'settrace', 'stderr',\n'stdin', 'stdout', 'thread_info', 'unraisablehook', 'version', 'version_info',\n'warnoptions']\nWithout arguments, dir()\nlists the names you have defined currently:\n>>> a = [1, 2, 3, 4, 5]\n>>> import fibo\n>>> fib = fibo.fib\n>>> dir()\n['__builtins__', '__name__', 'a', 'fib', 'fibo', 'sys']\nNote that it lists all types of names: variables, modules, functions, etc.\ndir()\ndoes not list the names of built-in functions and variables. If you\nwant a list of those, they are defined in the standard module\nbuiltins\n:\n>>> import builtins\n>>> dir(builtins)\n['ArithmeticError', 'AssertionError', 'AttributeError', 'BaseException',\n'BlockingIOError', 'BrokenPipeError', 'BufferError', 'BytesWarning',\n'ChildProcessError', 'ConnectionAbortedError', 'ConnectionError',\n'ConnectionRefusedError', 'ConnectionResetError', 'DeprecationWarning',\n'EOFError', 'Ellipsis', 'EnvironmentError', 'Exception', 'False',\n'FileExistsError', 'FileNotFoundError', 'FloatingPointError',\n'FutureWarning', 'GeneratorExit', 'IOError', 'ImportError',\n'ImportWarning', 'IndentationError', 'IndexError', 'InterruptedError',\n'IsADirectoryError', 'KeyError', 'KeyboardInterrupt', 'LookupError',\n'MemoryError', 'NameError', 'None', 'NotADirectoryError', 'NotImplemented',\n'NotImplementedError', 'OSError', 'OverflowError',\n'PendingDeprecationWarning', 'PermissionError', 'ProcessLookupError',\n'ReferenceError', 'ResourceWarning', 'RuntimeError', 'RuntimeWarning',\n'StopIteration', 'SyntaxError', 'SyntaxWarning', 'SystemError',\n'SystemExit', 'TabError', 'TimeoutError', 'True', 'TypeError',\n'UnboundLocalError', 
'UnicodeDecodeError', 'UnicodeEncodeError',\n'UnicodeError', 'UnicodeTranslateError', 'UnicodeWarning', 'UserWarning',\n'ValueError', 'Warning', 'ZeroDivisionError', '_', '__build_class__',\n'__debug__', '__doc__', '__import__', '__name__', '__package__', 'abs',\n'all', 'any', 'ascii', 'bin', 'bool', 'bytearray', 'bytes', 'callable',\n'chr', 'classmethod', 'compile', 'complex', 'copyright', 'credits',\n'delattr', 'dict', 'dir', 'divmod', 'enumerate', 'eval', 'exec', 'exit',\n'filter', 'float', 'format', 'frozenset', 'getattr', 'globals', 'hasattr',\n'hash', 'help', 'hex', 'id', 'input', 'int', 'isinstance', 'issubclass',\n'iter', 'len', 'license', 'list', 'locals', 'map', 'max', 'memoryview',\n'min', 'next', 'object', 'oct', 'open', 'ord', 'pow', 'print', 'property',\n'quit', 'range', 'repr', 'reversed', 'round', 'set', 'setattr', 'slice',\n'sorted', 'staticmethod', 'str', 'sum', 'super', 'tuple', 'type', 'vars',\n'zip']\n6.4. Packages\u00b6\nPackages are a way of structuring Python\u2019s module namespace by using \u201cdotted\nmodule names\u201d. For example, the module name A.B\ndesignates a submodule\nnamed B\nin a package named A\n. Just like the use of modules saves the\nauthors of different modules from having to worry about each other\u2019s global\nvariable names, the use of dotted module names saves the authors of multi-module\npackages like NumPy or Pillow from having to worry about\neach other\u2019s module names.\nSuppose you want to design a collection of modules (a \u201cpackage\u201d) for the uniform\nhandling of sound files and sound data. 
There are many different sound file\nformats (usually recognized by their extension, for example: .wav\n,\n.aiff\n, .au\n), so you may need to create and maintain a growing\ncollection of modules for the conversion between the various file formats.\nThere are also many different operations you might want to perform on sound data\n(such as mixing, adding echo, applying an equalizer function, creating an\nartificial stereo effect), so in addition you will be writing a never-ending\nstream of modules to perform these operations. Here\u2019s a possible structure for\nyour package (expressed in terms of a hierarchical filesystem):\nsound/ Top-level package\n__init__.py Initialize the sound package\nformats/ Subpackage for file format conversions\n__init__.py\nwavread.py\nwavwrite.py\naiffread.py\naiffwrite.py\nauread.py\nauwrite.py\n...\neffects/ Subpackage for sound effects\n__init__.py\necho.py\nsurround.py\nreverse.py\n...\nfilters/ Subpackage for filters\n__init__.py\nequalizer.py\nvocoder.py\nkaraoke.py\n...\nWhen importing the package, Python searches through the directories on\nsys.path\nlooking for the package subdirectory.\nThe __init__.py\nfiles are required to make Python treat directories\ncontaining the file as packages (unless using a namespace package, a\nrelatively advanced feature). This prevents directories with a common name,\nsuch as string\n, from unintentionally hiding valid modules that occur later\non the module search path. In the simplest case, __init__.py\ncan just be\nan empty file, but it can also execute initialization code for the package or\nset the __all__\nvariable, described later.\nUsers of the package can import individual modules from the package, for example:\nimport sound.effects.echo\nThis loads the submodule sound.effects.echo\n. 
It must be referenced with\nits full name.\nsound.effects.echo.echofilter(input, output, delay=0.7, atten=4)\nAn alternative way of importing the submodule is:\nfrom sound.effects import echo\nThis also loads the submodule echo\n, and makes it available without its\npackage prefix, so it can be used as follows:\necho.echofilter(input, output, delay=0.7, atten=4)\nYet another variation is to import the desired function or variable directly:\nfrom sound.effects.echo import echofilter\nAgain, this loads the submodule echo\n, but this makes its function\nechofilter()\ndirectly available:\nechofilter(input, output, delay=0.7, atten=4)\nNote that when using from package import item\n, the item can be either a\nsubmodule (or subpackage) of the package, or some other name defined in the\npackage, like a function, class or variable. The import\nstatement first\ntests whether the item is defined in the package; if not, it assumes it is a\nmodule and attempts to load it. If it fails to find it, an ImportError\nexception is raised.\nContrarily, when using syntax like import item.subitem.subsubitem\n, each item\nexcept for the last must be a package; the last item can be a module or a\npackage but can\u2019t be a class or function or variable defined in the previous\nitem.\n6.4.1. Importing * From a Package\u00b6\nNow what happens when the user writes from sound.effects import *\n? Ideally,\none would hope that this somehow goes out to the filesystem, finds which\nsubmodules are present in the package, and imports them all. This could take a\nlong time and importing sub-modules might have unwanted side-effects that should\nonly happen when the sub-module is explicitly imported.\nThe only solution is for the package author to provide an explicit index of the\npackage. 
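The three import forms above can be exercised in isolation; this sketch builds a throwaway `sound.effects.echo` package on disk (with a stub `echofilter` that simply returns its arguments, an assumption for demonstration) so each form is runnable:

```python
import os
import sys
import tempfile
import textwrap

# Create a minimal sound/effects/echo.py package layout in a temp dir.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "sound", "effects")
os.makedirs(pkg)
open(os.path.join(root, "sound", "__init__.py"), "w").close()
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "echo.py"), "w") as f:
    f.write(textwrap.dedent("""
        def echofilter(input, output, delay=0.7, atten=4):
            return (input, output, delay, atten)
    """))
sys.path.insert(0, root)

# Form 1: the submodule must be referenced by its full name.
import sound.effects.echo
assert sound.effects.echo.echofilter("in", "out") == ("in", "out", 0.7, 4)

# Form 2: bind the submodule itself, usable without the package prefix.
from sound.effects import echo
assert echo.echofilter("in", "out", delay=0.1) == ("in", "out", 0.1, 4)

# Form 3: bind the desired function directly.
from sound.effects.echo import echofilter
assert echofilter("in", "out", atten=2) == ("in", "out", 0.7, 2)
```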
The import\nstatement uses the following convention: if a package\u2019s\n__init__.py\ncode defines a list named __all__\n, it is taken to be the\nlist of module names that should be imported when from package import *\nis\nencountered. It is up to the package author to keep this list up-to-date when a\nnew version of the package is released. Package authors may also decide not to\nsupport it, if they don\u2019t see a use for importing * from their package. For\nexample, the file sound/effects/__init__.py\ncould contain the following\ncode:\n__all__ = [\"echo\", \"surround\", \"reverse\"]\nThis would mean that from sound.effects import *\nwould import the three\nnamed submodules of the sound.effects\npackage.\nBe aware that submodules might become shadowed by locally defined names. For\nexample, if you added a reverse\nfunction to the\nsound/effects/__init__.py\nfile, the from sound.effects import *\nwould only import the two submodules echo\nand surround\n, but not the\nreverse\nsubmodule, because it is shadowed by the locally defined\nreverse\nfunction:\n__all__ = [\n\"echo\", # refers to the 'echo.py' file\n\"surround\", # refers to the 'surround.py' file\n\"reverse\", # !!! refers to the 'reverse' function now !!!\n]\ndef reverse(msg: str): # <-- this name shadows the 'reverse.py' submodule\nreturn msg[::-1] # in the case of a 'from sound.effects import *'\nIf __all__\nis not defined, the statement from sound.effects import *\ndoes not import all submodules from the package sound.effects\ninto the\ncurrent namespace; it only ensures that the package sound.effects\nhas\nbeen imported (possibly running any initialization code in __init__.py\n)\nand then imports whatever names are defined in the package. This includes any\nnames defined (and submodules explicitly loaded) by __init__.py\n. It\nalso includes any submodules of the package that were explicitly loaded by\nprevious import\nstatements. 
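The `__all__` convention above can be verified with a scratch package; `fxdemo` is a hypothetical stand-in for `sound.effects`, and the `exec` with an explicit namespace lets the script observe exactly which names `from fxdemo import *` binds:

```python
import os
import sys
import tempfile

# Build a package whose __init__.py lists only two of its three submodules.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "fxdemo")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write('__all__ = ["echo", "surround"]\n')
for name in ("echo", "surround", "reverse"):
    with open(os.path.join(pkg, name + ".py"), "w") as f:
        f.write("WHO = %r\n" % name)
sys.path.insert(0, root)

# 'from fxdemo import *' is only legal at module level, so run it in a
# fresh namespace to inspect the result.
ns = {}
exec("from fxdemo import *", ns)

# Only the submodules named in __all__ were imported...
assert ns["echo"].WHO == "echo"
assert ns["surround"].WHO == "surround"
# ...while 'reverse', though present on disk, was not.
assert "reverse" not in ns
```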
Consider this code:\nimport sound.effects.echo\nimport sound.effects.surround\nfrom sound.effects import *\nIn this example, the echo\nand surround\nmodules are imported in the\ncurrent namespace because they are defined in the sound.effects\npackage\nwhen the from...import\nstatement is executed. (This also works when\n__all__\nis defined.)\nAlthough certain modules are designed to export only names that follow certain\npatterns when you use import *\n, it is still considered bad practice in\nproduction code.\nRemember, there is nothing wrong with using from package import\nspecific_submodule\n! In fact, this is the recommended notation unless the\nimporting module needs to use submodules with the same name from different\npackages.\n6.4.2. Intra-package References\u00b6\nWhen packages are structured into subpackages (as with the sound\npackage\nin the example), you can use absolute imports to refer to submodules of siblings\npackages. For example, if the module sound.filters.vocoder\nneeds to use\nthe echo\nmodule in the sound.effects\npackage, it can use from\nsound.effects import echo\n.\nYou can also write relative imports, with the from module import name\nform\nof import statement. These imports use leading dots to indicate the current and\nparent packages involved in the relative import. From the surround\nmodule for example, you might use:\nfrom . import echo\nfrom .. import formats\nfrom ..filters import equalizer\nNote that relative imports are based on the name of the current module\u2019s package. Since the main module does not have a package, modules intended for use as the main module of a Python application must always use absolute imports.\n6.4.3. Packages in Multiple Directories\u00b6\nPackages support one more special attribute, __path__\n. This is\ninitialized to be a sequence of strings containing the name of the\ndirectory holding the\npackage\u2019s __init__.py\nbefore the code in that file is executed. 
This\nvariable can be modified; doing so affects future searches for modules and\nsubpackages contained in the package.\nWhile this feature is not often needed, it can be used to extend the set of modules found in a package.\nFootnotes", "code_snippets": ["\n\n", "\n", "\n ", " ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", "\n ", " ", " ", " ", " ", "\n ", "\n\n", "\n", "\n ", " ", " ", "\n ", " ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", "\n ", " ", " ", " ", " ", "\n ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", " ", "\n", "\n", "\n", " ", " ", "\n", " ", " ", " ", "\n ", "\n ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", " ", " ", " ", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", " ", "\n", " ", "\n", " ", " ", " ", "\n", " ", "\n", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n", "\n\n", " ", " ", "\n ", " ", " ", "\n", "\n", "\n", " ", "\n", " ", "\n", " ", "\n", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 5661} +{"url": "https://docs.python.org/3/library/py_compile.html", "title": " \u2014 Compile Python source files", "content": "py_compile\n\u2014 Compile Python source files\u00b6\nSource code: Lib/py_compile.py\nThe py_compile\nmodule provides a function to generate a byte-code file\nfrom a source file, and another function used when the module source file is\ninvoked as a script.\nThough not often needed, this function can be useful when installing 
modules for shared use, especially if some of the users may not have permission to write the byte-code cache files in the directory containing the source code.\n- exception py_compile.PyCompileError\u00b6\nException raised when an error occurs while attempting to compile the file.\n- py_compile.compile(file, cfile=None, dfile=None, doraise=False, optimize=-1, invalidation_mode=PycInvalidationMode.TIMESTAMP, quiet=0)\u00b6\nCompile a source file to byte-code and write out the byte-code cache file. The source code is loaded from the file named file. The byte-code is written to cfile, which defaults to the PEP 3147/PEP 488 path, ending in\n.pyc\n. For example, if file is/foo/bar/baz.py\ncfile will default to/foo/bar/__pycache__/baz.cpython-32.pyc\nfor Python 3.2. If dfile is specified, it is used instead of file as the name of the source file from which source lines are obtained for display in exception tracebacks. If doraise is true, aPyCompileError\nis raised when an error is encountered while compiling file. If doraise is false (the default), an error string is written tosys.stderr\n, but no exception is raised. This function returns the path to byte-compiled file, i.e. whatever cfile value was used.The doraise and quiet arguments determine how errors are handled while compiling file. If quiet is 0 or 1, and doraise is false, the default behaviour is enabled: an error string is written to\nsys.stderr\n, and the function returnsNone\ninstead of a path. If doraise is true, aPyCompileError\nis raised instead. However if quiet is 2, no message is written, and doraise has no effect.If the path that cfile becomes (either explicitly specified or computed) is a symlink or non-regular file,\nFileExistsError\nwill be raised. This is to act as a warning that import will turn those paths into regular files if it is allowed to write byte-compiled files to those paths. 
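The default-`cfile` and `doraise` behaviour described above can be checked with a small scratch source file (the file names here are illustrative only):

```python
import os
import py_compile
import tempfile

# Compile a valid source file; cfile defaults to the PEP 3147
# __pycache__ location, and the path is returned.
src_dir = tempfile.mkdtemp()
src = os.path.join(src_dir, "mod.py")
with open(src, "w") as f:
    f.write("x = 1 + 1\n")

cache = py_compile.compile(src, doraise=True)
assert cache is not None and os.path.exists(cache)
assert "__pycache__" in cache

# With doraise=True, a syntax error raises PyCompileError instead of
# printing to sys.stderr.
bad = os.path.join(src_dir, "bad.py")
with open(bad, "w") as f:
    f.write("def broken(:\n")
try:
    py_compile.compile(bad, doraise=True)
except py_compile.PyCompileError:
    pass
else:
    raise AssertionError("expected PyCompileError")
```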
This is a side-effect of import using file renaming to place the final byte-compiled file into place to prevent concurrent file writing issues.optimize controls the optimization level and is passed to the built-in\ncompile()\nfunction. The default of-1\nselects the optimization level of the current interpreter.invalidation_mode should be a member of the\nPycInvalidationMode\nenum and controls how the generated bytecode cache is invalidated at runtime. The default isPycInvalidationMode.CHECKED_HASH\nif theSOURCE_DATE_EPOCH\nenvironment variable is set, otherwise the default isPycInvalidationMode.TIMESTAMP\n.Changed in version 3.2: Changed default value of cfile to be PEP 3147-compliant. Previous default was file +\n'c'\n('o'\nif optimization was enabled). Also added the optimize parameter.Changed in version 3.4: Changed code to use\nimportlib\nfor the byte-code cache file writing. This means file creation/writing semantics now match whatimportlib\ndoes, e.g. permissions, write-and-move semantics, etc. Also added the caveat thatFileExistsError\nis raised if cfile is a symlink or non-regular file.Changed in version 3.7: The invalidation_mode parameter was added as specified in PEP 552. If the\nSOURCE_DATE_EPOCH\nenvironment variable is set, invalidation_mode will be forced toPycInvalidationMode.CHECKED_HASH\n.Changed in version 3.7.2: The\nSOURCE_DATE_EPOCH\nenvironment variable no longer overrides the value of the invalidation_mode argument, and determines its default value instead.Changed in version 3.8: The quiet parameter was added.\n- class py_compile.PycInvalidationMode\u00b6\nAn enumeration of possible methods the interpreter can use to determine whether a bytecode file is up to date with a source file. The\n.pyc\nfile indicates the desired invalidation mode in its header. 
See Cached bytecode invalidation for more information on how Python invalidates.pyc\nfiles at runtime.Added in version 3.7.\n- TIMESTAMP\u00b6\nThe\n.pyc\nfile includes the timestamp and size of the source file, which Python will compare against the metadata of the source file at runtime to determine if the.pyc\nfile needs to be regenerated.\n- CHECKED_HASH\u00b6\nThe\n.pyc\nfile includes a hash of the source file content, which Python will compare against the source at runtime to determine if the.pyc\nfile needs to be regenerated.\n- UNCHECKED_HASH\u00b6\nLike\nCHECKED_HASH\n, the.pyc\nfile includes a hash of the source file content. However, Python will at runtime assume the.pyc\nfile is up to date and not validate the.pyc\nagainst the source file at all.This option is useful when the\n.pycs\nare kept up to date by some system external to Python like a build system.\nCommand-Line Interface\u00b6\nThis module can be invoked as a script to compile several source files. The files named in filenames are compiled and the resulting bytecode is cached in the normal manner. This program does not search a directory structure to locate source files; it only compiles files named explicitly. The exit status is nonzero if one of the files could not be compiled.\n- ... \u00b6\n- -\u00b6\nPositional arguments are files to compile. 
If\n-\nis the only parameter, the list of files is taken from standard input.\n- -q, --quiet\u00b6\nSuppress errors output.\nChanged in version 3.2: Added support for -\n.\nChanged in version 3.10: Added support for -q\n.\nSee also\n- Module\ncompileall\nUtilities to compile all Python source files in a directory tree.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1388} +{"url": "https://docs.python.org/3/howto/logging-cookbook.html", "title": "Logging Cookbook", "content": "Logging Cookbook\u00b6\n- Author:\nVinay Sajip \nThis page contains a number of recipes related to logging, which have been found useful in the past. For links to tutorial and reference information, please see Other resources.\nUsing logging in multiple modules\u00b6\nMultiple calls to logging.getLogger('someLogger')\nreturn a reference to the\nsame logger object. This is true not only within the same module, but also\nacross modules as long as it is in the same Python interpreter process. It is\ntrue for references to the same object; additionally, application code can\ndefine and configure a parent logger in one module and create (but not\nconfigure) a child logger in a separate module, and all logger calls to the\nchild will pass up to the parent. 
Here is a main module:\nimport logging\nimport auxiliary_module\n# create logger with 'spam_application'\nlogger = logging.getLogger('spam_application')\nlogger.setLevel(logging.DEBUG)\n# create file handler which logs even debug messages\nfh = logging.FileHandler('spam.log')\nfh.setLevel(logging.DEBUG)\n# create console handler with a higher log level\nch = logging.StreamHandler()\nch.setLevel(logging.ERROR)\n# create formatter and add it to the handlers\nformatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')\nfh.setFormatter(formatter)\nch.setFormatter(formatter)\n# add the handlers to the logger\nlogger.addHandler(fh)\nlogger.addHandler(ch)\nlogger.info('creating an instance of auxiliary_module.Auxiliary')\na = auxiliary_module.Auxiliary()\nlogger.info('created an instance of auxiliary_module.Auxiliary')\nlogger.info('calling auxiliary_module.Auxiliary.do_something')\na.do_something()\nlogger.info('finished auxiliary_module.Auxiliary.do_something')\nlogger.info('calling auxiliary_module.some_function()')\nauxiliary_module.some_function()\nlogger.info('done with auxiliary_module.some_function()')\nHere is the auxiliary module:\nimport logging\n# create logger\nmodule_logger = logging.getLogger('spam_application.auxiliary')\nclass Auxiliary:\ndef __init__(self):\nself.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')\nself.logger.info('creating an instance of Auxiliary')\ndef do_something(self):\nself.logger.info('doing something')\na = 1 + 1\nself.logger.info('done doing something')\ndef some_function():\nmodule_logger.info('received a call to \"some_function\"')\nThe output looks like this:\n2005-03-23 23:47:11,663 - spam_application - INFO -\ncreating an instance of auxiliary_module.Auxiliary\n2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO -\ncreating an instance of Auxiliary\n2005-03-23 23:47:11,665 - spam_application - INFO -\ncreated an instance of auxiliary_module.Auxiliary\n2005-03-23 
23:47:11,668 - spam_application - INFO -\ncalling auxiliary_module.Auxiliary.do_something\n2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO -\ndoing something\n2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO -\ndone doing something\n2005-03-23 23:47:11,670 - spam_application - INFO -\nfinished auxiliary_module.Auxiliary.do_something\n2005-03-23 23:47:11,671 - spam_application - INFO -\ncalling auxiliary_module.some_function()\n2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO -\nreceived a call to 'some_function'\n2005-03-23 23:47:11,673 - spam_application - INFO -\ndone with auxiliary_module.some_function()\nLogging from multiple threads\u00b6\nLogging from multiple threads requires no special effort. The following example shows logging from the main (initial) thread and another thread:\nimport logging\nimport threading\nimport time\ndef worker(arg):\nwhile not arg['stop']:\nlogging.debug('Hi from myfunc')\ntime.sleep(0.5)\ndef main():\nlogging.basicConfig(level=logging.DEBUG, format='%(relativeCreated)6d %(threadName)s %(message)s')\ninfo = {'stop': False}\nthread = threading.Thread(target=worker, args=(info,))\nthread.start()\nwhile True:\ntry:\nlogging.debug('Hello from main')\ntime.sleep(0.75)\nexcept KeyboardInterrupt:\ninfo['stop'] = True\nbreak\nthread.join()\nif __name__ == '__main__':\nmain()\nWhen run, the script should print something like the following:\n0 Thread-1 Hi from myfunc\n3 MainThread Hello from main\n505 Thread-1 Hi from myfunc\n755 MainThread Hello from main\n1007 Thread-1 Hi from myfunc\n1507 MainThread Hello from main\n1508 Thread-1 Hi from myfunc\n2010 Thread-1 Hi from myfunc\n2258 MainThread Hello from main\n2512 Thread-1 Hi from myfunc\n3009 MainThread Hello from main\n3013 Thread-1 Hi from myfunc\n3515 Thread-1 Hi from myfunc\n3761 MainThread Hello from main\n4017 Thread-1 Hi from myfunc\n4513 MainThread Hello from main\n4518 Thread-1 Hi from myfunc\nThis shows the logging output 
interspersed as one might expect. This approach works for more threads than shown here, of course.\nMultiple handlers and formatters\u00b6\nLoggers are plain Python objects. The addHandler()\nmethod has no\nminimum or maximum quota for the number of handlers you may add. Sometimes it\nwill be beneficial for an application to log all messages of all severities to a\ntext file while simultaneously logging errors or above to the console. To set\nthis up, simply configure the appropriate handlers. The logging calls in the\napplication code will remain unchanged. Here is a slight modification to the\nprevious simple module-based configuration example:\nimport logging\nlogger = logging.getLogger('simple_example')\nlogger.setLevel(logging.DEBUG)\n# create file handler which logs even debug messages\nfh = logging.FileHandler('spam.log')\nfh.setLevel(logging.DEBUG)\n# create console handler with a higher log level\nch = logging.StreamHandler()\nch.setLevel(logging.ERROR)\n# create formatter and add it to the handlers\nformatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')\nch.setFormatter(formatter)\nfh.setFormatter(formatter)\n# add the handlers to logger\nlogger.addHandler(ch)\nlogger.addHandler(fh)\n# 'application' code\nlogger.debug('debug message')\nlogger.info('info message')\nlogger.warning('warn message')\nlogger.error('error message')\nlogger.critical('critical message')\nNotice that the \u2018application\u2019 code does not care about multiple handlers. All that changed was the addition and configuration of a new handler named fh.\nThe ability to create new handlers with higher- or lower-severity filters can be\nvery helpful when writing and testing an application. Instead of using many\nprint\nstatements for debugging, use logger.debug\n: Unlike the print\nstatements, which you will have to delete or comment out later, the logger.debug\nstatements can remain intact in the source code and remain dormant until you\nneed them again. 
At that time, the only change that needs to happen is to\nmodify the severity level of the logger and/or handler to debug.\nLogging to multiple destinations\u00b6\nLet\u2019s say you want to log to console and file with different message formats and in differing circumstances. Say you want to log messages with levels of DEBUG and higher to file, and those messages at level INFO and higher to the console. Let\u2019s also assume that the file should contain timestamps, but the console messages should not. Here\u2019s how you can achieve this:\nimport logging\n# set up logging to file - see previous section for more details\nlogging.basicConfig(level=logging.DEBUG,\nformat='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',\ndatefmt='%m-%d %H:%M',\nfilename='/tmp/myapp.log',\nfilemode='w')\n# define a Handler which writes INFO messages or higher to the sys.stderr\nconsole = logging.StreamHandler()\nconsole.setLevel(logging.INFO)\n# set a format which is simpler for console use\nformatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')\n# tell the handler to use this format\nconsole.setFormatter(formatter)\n# add the handler to the root logger\nlogging.getLogger().addHandler(console)\n# Now, we can log to the root logger, or any other logger. 
First the root...\nlogging.info('Jackdaws love my big sphinx of quartz.')\n# Now, define a couple of other loggers which might represent areas in your\n# application:\nlogger1 = logging.getLogger('myapp.area1')\nlogger2 = logging.getLogger('myapp.area2')\nlogger1.debug('Quick zephyrs blow, vexing daft Jim.')\nlogger1.info('How quickly daft jumping zebras vex.')\nlogger2.warning('Jail zesty vixen who grabbed pay from quack.')\nlogger2.error('The five boxing wizards jump quickly.')\nWhen you run this, on the console you will see\nroot : INFO Jackdaws love my big sphinx of quartz.\nmyapp.area1 : INFO How quickly daft jumping zebras vex.\nmyapp.area2 : WARNING Jail zesty vixen who grabbed pay from quack.\nmyapp.area2 : ERROR The five boxing wizards jump quickly.\nand in the file you will see something like\n10-22 22:19 root INFO Jackdaws love my big sphinx of quartz.\n10-22 22:19 myapp.area1 DEBUG Quick zephyrs blow, vexing daft Jim.\n10-22 22:19 myapp.area1 INFO How quickly daft jumping zebras vex.\n10-22 22:19 myapp.area2 WARNING Jail zesty vixen who grabbed pay from quack.\n10-22 22:19 myapp.area2 ERROR The five boxing wizards jump quickly.\nAs you can see, the DEBUG message only shows up in the file. The other messages are sent to both destinations.\nThis example uses console and file handlers, but you can use any number and combination of handlers you choose.\nNote that the above choice of log filename /tmp/myapp.log\nimplies use of a\nstandard location for temporary files on POSIX systems. On Windows, you may need to\nchoose a different directory name for the log - just ensure that the directory exists\nand that you have the permissions to create and update files in it.\nCustom handling of levels\u00b6\nSometimes, you might want to do something slightly different from the standard handling of levels in handlers, where all levels above a threshold get processed by a handler. To do this, you need to use filters. 
Let\u2019s look at a scenario where you want to arrange things as follows:\nSend messages of severity\nINFO\nandWARNING\ntosys.stdout\nSend messages of severity\nERROR\nand above tosys.stderr\nSend messages of severity\nDEBUG\nand above to fileapp.log\nSuppose you configure logging with the following JSON:\n{\n\"version\": 1,\n\"disable_existing_loggers\": false,\n\"formatters\": {\n\"simple\": {\n\"format\": \"%(levelname)-8s - %(message)s\"\n}\n},\n\"handlers\": {\n\"stdout\": {\n\"class\": \"logging.StreamHandler\",\n\"level\": \"INFO\",\n\"formatter\": \"simple\",\n\"stream\": \"ext://sys.stdout\"\n},\n\"stderr\": {\n\"class\": \"logging.StreamHandler\",\n\"level\": \"ERROR\",\n\"formatter\": \"simple\",\n\"stream\": \"ext://sys.stderr\"\n},\n\"file\": {\n\"class\": \"logging.FileHandler\",\n\"formatter\": \"simple\",\n\"filename\": \"app.log\",\n\"mode\": \"w\"\n}\n},\n\"root\": {\n\"level\": \"DEBUG\",\n\"handlers\": [\n\"stderr\",\n\"stdout\",\n\"file\"\n]\n}\n}\nThis configuration does almost what we want, except that sys.stdout\nwould show messages\nof severity ERROR\nand only events of this severity and higher will be tracked\nas well as INFO\nand WARNING\nmessages. To prevent this, we can set up a filter which\nexcludes those messages and add it to the relevant handler. 
This can be configured by\nadding a filters\nsection parallel to formatters\nand handlers\n:\n{\n\"filters\": {\n\"warnings_and_below\": {\n\"()\" : \"__main__.filter_maker\",\n\"level\": \"WARNING\"\n}\n}\n}\nand changing the section on the stdout\nhandler to add it:\n{\n\"stdout\": {\n\"class\": \"logging.StreamHandler\",\n\"level\": \"INFO\",\n\"formatter\": \"simple\",\n\"stream\": \"ext://sys.stdout\",\n\"filters\": [\"warnings_and_below\"]\n}\n}\nA filter is just a function, so we can define the filter_maker\n(a factory\nfunction) as follows:\ndef filter_maker(level):\nlevel = getattr(logging, level)\ndef filter(record):\nreturn record.levelno <= level\nreturn filter\nThis converts the string argument passed in to a numeric level, and returns a\nfunction which only returns True\nif the level of the passed in record is\nat or below the specified level. Note that in this example I have defined the\nfilter_maker\nin a test script main.py\nthat I run from the command line,\nso its module will be __main__\n- hence the __main__.filter_maker\nin the\nfilter configuration. 
You will need to change that if you define it in a\ndifferent module.\nWith the filter added, we can run main.py\n, which in full is:\nimport json\nimport logging\nimport logging.config\nCONFIG = '''\n{\n\"version\": 1,\n\"disable_existing_loggers\": false,\n\"formatters\": {\n\"simple\": {\n\"format\": \"%(levelname)-8s - %(message)s\"\n}\n},\n\"filters\": {\n\"warnings_and_below\": {\n\"()\" : \"__main__.filter_maker\",\n\"level\": \"WARNING\"\n}\n},\n\"handlers\": {\n\"stdout\": {\n\"class\": \"logging.StreamHandler\",\n\"level\": \"INFO\",\n\"formatter\": \"simple\",\n\"stream\": \"ext://sys.stdout\",\n\"filters\": [\"warnings_and_below\"]\n},\n\"stderr\": {\n\"class\": \"logging.StreamHandler\",\n\"level\": \"ERROR\",\n\"formatter\": \"simple\",\n\"stream\": \"ext://sys.stderr\"\n},\n\"file\": {\n\"class\": \"logging.FileHandler\",\n\"formatter\": \"simple\",\n\"filename\": \"app.log\",\n\"mode\": \"w\"\n}\n},\n\"root\": {\n\"level\": \"DEBUG\",\n\"handlers\": [\n\"stderr\",\n\"stdout\",\n\"file\"\n]\n}\n}\n'''\ndef filter_maker(level):\nlevel = getattr(logging, level)\ndef filter(record):\nreturn record.levelno <= level\nreturn filter\nlogging.config.dictConfig(json.loads(CONFIG))\nlogging.debug('A DEBUG message')\nlogging.info('An INFO message')\nlogging.warning('A WARNING message')\nlogging.error('An ERROR message')\nlogging.critical('A CRITICAL message')\nAnd after running it like this:\npython main.py 2>stderr.log >stdout.log\nWe can see the results are as expected:\n$ more *.log\n::::::::::::::\napp.log\n::::::::::::::\nDEBUG - A DEBUG message\nINFO - An INFO message\nWARNING - A WARNING message\nERROR - An ERROR message\nCRITICAL - A CRITICAL message\n::::::::::::::\nstderr.log\n::::::::::::::\nERROR - An ERROR message\nCRITICAL - A CRITICAL message\n::::::::::::::\nstdout.log\n::::::::::::::\nINFO - An INFO message\nWARNING - A WARNING message\nConfiguration server example\u00b6\nHere is an example of a module using the logging configuration 
server:\nimport logging\nimport logging.config\nimport time\nimport os\n# read initial config file\nlogging.config.fileConfig('logging.conf')\n# create and start listener on port 9999\nt = logging.config.listen(9999)\nt.start()\nlogger = logging.getLogger('simpleExample')\ntry:\n# loop through logging calls to see the difference\n# new configurations make, until Ctrl+C is pressed\nwhile True:\nlogger.debug('debug message')\nlogger.info('info message')\nlogger.warning('warn message')\nlogger.error('error message')\nlogger.critical('critical message')\ntime.sleep(5)\nexcept KeyboardInterrupt:\n# cleanup\nlogging.config.stopListening()\nt.join()\nAnd here is a script that takes a filename and sends that file to the server, properly preceded with the binary-encoded length, as the new logging configuration:\n#!/usr/bin/env python\nimport socket, sys, struct\nwith open(sys.argv[1], 'rb') as f:\ndata_to_send = f.read()\nHOST = 'localhost'\nPORT = 9999\ns = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\nprint('connecting...')\ns.connect((HOST, PORT))\nprint('sending config...')\ns.send(struct.pack('>L', len(data_to_send)))\ns.send(data_to_send)\ns.close()\nprint('complete')\nDealing with handlers that block\u00b6\nSometimes you have to get your logging handlers to do their work without blocking the thread you\u2019re logging from. This is common in web applications, though of course it also occurs in other scenarios.\nA common culprit which demonstrates sluggish behaviour is the\nSMTPHandler\n: sending emails can take a long time, for a\nnumber of reasons outside the developer\u2019s control (for example, a poorly\nperforming mail or network infrastructure). But almost any network-based\nhandler can block: Even a SocketHandler\noperation may do a\nDNS query under the hood which is too slow (and this query can be deep in the\nsocket library code, below the Python layer, and outside your control).\nOne solution is to use a two-part approach. 
For the first part, attach only a\nQueueHandler\nto those loggers which are accessed from\nperformance-critical threads. They simply write to their queue, which can be\nsized to a large enough capacity or initialized with no upper bound to their\nsize. The write to the queue will typically be accepted quickly, though you\nwill probably need to catch the queue.Full\nexception as a precaution\nin your code. If you are a library developer who has performance-critical\nthreads in their code, be sure to document this (together with a suggestion to\nattach only QueueHandlers\nto your loggers) for the benefit of other\ndevelopers who will use your code.\nThe second part of the solution is QueueListener\n, which has been\ndesigned as the counterpart to QueueHandler\n. A\nQueueListener\nis very simple: it\u2019s passed a queue and some handlers,\nand it fires up an internal thread which listens to its queue for LogRecords\nsent from QueueHandlers\n(or any other source of LogRecords\n, for that\nmatter). The LogRecords\nare removed from the queue and passed to the\nhandlers for processing.\nThe advantage of having a separate QueueListener\nclass is that you\ncan use the same instance to service multiple QueueHandlers\n. This is more\nresource-friendly than, say, having threaded versions of the existing handler\nclasses, which would eat up one thread per handler for no particular benefit.\nAn example of using these two classes follows (imports omitted):\nque = queue.Queue(-1) # no limit on size\nqueue_handler = QueueHandler(que)\nhandler = logging.StreamHandler()\nlistener = QueueListener(que, handler)\nroot = logging.getLogger()\nroot.addHandler(queue_handler)\nformatter = logging.Formatter('%(threadName)s: %(message)s')\nhandler.setFormatter(formatter)\nlistener.start()\n# The log output will display the thread which generated\n# the event (the main thread) rather than the internal\n# thread which monitors the internal queue. 
This is what\n# you want to happen.\nroot.warning('Look out!')\nlistener.stop()\nwhich, when run, will produce:\nMainThread: Look out!\nNote\nAlthough the earlier discussion wasn\u2019t specifically talking about\nasync code, but rather about slow logging handlers, it should be noted that\nwhen logging from async code, network and even file handlers could lead to\nproblems (blocking the event loop) because some logging is done from\nasyncio\ninternals. It might be best, if any async code is used in an\napplication, to use the above approach for logging, so that any blocking code\nruns only in the QueueListener\nthread.\nChanged in version 3.5: Prior to Python 3.5, the QueueListener\nalways passed every message\nreceived from the queue to every handler it was initialized with. (This was\nbecause it was assumed that level filtering was all done on the other side,\nwhere the queue is filled.) From 3.5 onwards, this behaviour can be changed\nby passing a keyword argument respect_handler_level=True\nto the\nlistener\u2019s constructor. When this is done, the listener compares the level\nof each message with the handler\u2019s level, and only passes a message to a\nhandler if it\u2019s appropriate to do so.\nChanged in version 3.14: The QueueListener\ncan be started (and stopped) via the\nwith\nstatement. For example:\nwith QueueListener(que, handler) as listener:\n# The queue listener automatically starts\n# when the 'with' block is entered.\npass\n# The queue listener automatically stops once\n# the 'with' block is exited.\nSending and receiving logging events across a network\u00b6\nLet\u2019s say you want to send logging events across a network, and handle them at\nthe receiving end. 
A simple way of doing this is attaching a\nSocketHandler\ninstance to the root logger at the sending end:\nimport logging, logging.handlers\nrootLogger = logging.getLogger()\nrootLogger.setLevel(logging.DEBUG)\nsocketHandler = logging.handlers.SocketHandler('localhost',\nlogging.handlers.DEFAULT_TCP_LOGGING_PORT)\n# don't bother with a formatter, since a socket handler sends the event as\n# an unformatted pickle\nrootLogger.addHandler(socketHandler)\n# Now, we can log to the root logger, or any other logger. First the root...\nlogging.info('Jackdaws love my big sphinx of quartz.')\n# Now, define a couple of other loggers which might represent areas in your\n# application:\nlogger1 = logging.getLogger('myapp.area1')\nlogger2 = logging.getLogger('myapp.area2')\nlogger1.debug('Quick zephyrs blow, vexing daft Jim.')\nlogger1.info('How quickly daft jumping zebras vex.')\nlogger2.warning('Jail zesty vixen who grabbed pay from quack.')\nlogger2.error('The five boxing wizards jump quickly.')\nAt the receiving end, you can set up a receiver using the socketserver\nmodule. Here is a basic working example:\nimport pickle\nimport logging\nimport logging.handlers\nimport socketserver\nimport struct\nclass LogRecordStreamHandler(socketserver.StreamRequestHandler):\n\"\"\"Handler for a streaming logging request.\nThis basically logs the record using whatever logging policy is\nconfigured locally.\n\"\"\"\ndef handle(self):\n\"\"\"\nHandle multiple requests - each expected to be a 4-byte length,\nfollowed by the LogRecord in pickle format. 
Logs the record\naccording to whatever policy is configured locally.\n\"\"\"\nwhile True:\nchunk = self.connection.recv(4)\nif len(chunk) < 4:\nbreak\nslen = struct.unpack('>L', chunk)[0]\nchunk = self.connection.recv(slen)\nwhile len(chunk) < slen:\nchunk = chunk + self.connection.recv(slen - len(chunk))\nobj = self.unPickle(chunk)\nrecord = logging.makeLogRecord(obj)\nself.handleLogRecord(record)\ndef unPickle(self, data):\nreturn pickle.loads(data)\ndef handleLogRecord(self, record):\n# if a name is specified, we use the named logger rather than the one\n# implied by the record.\nif self.server.logname is not None:\nname = self.server.logname\nelse:\nname = record.name\nlogger = logging.getLogger(name)\n# N.B. EVERY record gets logged. This is because Logger.handle\n# is normally called AFTER logger-level filtering. If you want\n# to do filtering, do it at the client end to save wasting\n# cycles and network bandwidth!\nlogger.handle(record)\nclass LogRecordSocketReceiver(socketserver.ThreadingTCPServer):\n\"\"\"\nSimple TCP socket-based logging receiver suitable for testing.\n\"\"\"\nallow_reuse_address = True\ndef __init__(self, host='localhost',\nport=logging.handlers.DEFAULT_TCP_LOGGING_PORT,\nhandler=LogRecordStreamHandler):\nsocketserver.ThreadingTCPServer.__init__(self, (host, port), handler)\nself.abort = 0\nself.timeout = 1\nself.logname = None\ndef serve_until_stopped(self):\nimport select\nabort = 0\nwhile not abort:\nrd, wr, ex = select.select([self.socket.fileno()],\n[], [],\nself.timeout)\nif rd:\nself.handle_request()\nabort = self.abort\ndef main():\nlogging.basicConfig(\nformat='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')\ntcpserver = LogRecordSocketReceiver()\nprint('About to start TCP server...')\ntcpserver.serve_until_stopped()\nif __name__ == '__main__':\nmain()\nFirst run the server, and then the client. 
On the client side, nothing is printed on the console; on the server side, you should see something like:\nAbout to start TCP server...\n59 root INFO Jackdaws love my big sphinx of quartz.\n59 myapp.area1 DEBUG Quick zephyrs blow, vexing daft Jim.\n69 myapp.area1 INFO How quickly daft jumping zebras vex.\n69 myapp.area2 WARNING Jail zesty vixen who grabbed pay from quack.\n69 myapp.area2 ERROR The five boxing wizards jump quickly.\nNote that there are some security issues with pickle in some scenarios. If\nthese affect you, you can use an alternative serialization scheme by overriding\nthe makePickle()\nmethod and implementing your\nalternative there, as well as adapting the above script to use your alternative\nserialization.\nRunning a logging socket listener in production¶\nTo run a logging listener in production, you may need to use a process-management tool such as Supervisor. Here is a Gist which provides the bare-bones files to run the above functionality using Supervisor. It consists of the following files:\n- A Bash script to prepare the environment for testing\n- The Supervisor configuration file, which has entries for the listener and a multi-process web application\n- A Bash script to ensure that Supervisor is running with the above configuration\n- The socket listener program which receives log events and records them to a file\n- A simple web application which performs logging via a socket connected to the listener\n- A JSON configuration file for the web application\n- A Python script to exercise the web application\nThe web application uses Gunicorn, which is a popular web application server that starts multiple worker processes to handle requests. 
This example setup shows how the workers can write to the same log file without conflicting with one another \u2014 they all go through the socket listener.\nTo test these files, do the following in a POSIX environment:\nDownload the Gist as a ZIP archive using the Download ZIP button.\nUnzip the above files from the archive into a scratch directory.\nIn the scratch directory, run\nbash prepare.sh\nto get things ready. This creates arun\nsubdirectory to contain Supervisor-related and log files, and avenv\nsubdirectory to contain a virtual environment into whichbottle\n,gunicorn\nandsupervisor\nare installed.Run\nbash ensure_app.sh\nto ensure that Supervisor is running with the above configuration.Run\nvenv/bin/python client.py\nto exercise the web application, which will lead to records being written to the log.Inspect the log files in the\nrun\nsubdirectory. You should see the most recent log lines in files matching the patternapp.log*\n. They won\u2019t be in any particular order, since they have been handled concurrently by different worker processes in a non-deterministic way.You can shut down the listener and the web application by running\nvenv/bin/supervisorctl -c supervisor.conf shutdown\n.\nYou may need to tweak the configuration files in the unlikely event that the configured ports clash with something else in your test environment.\nThe default configuration uses a TCP socket on port 9020. You can use a Unix Domain socket instead of a TCP socket by doing the following:\nIn\nlistener.json\n, add asocket\nkey with the path to the domain socket you want to use. 
If this key is present, the listener listens on the corresponding domain socket and not on a TCP socket (theport\nkey is ignored).In\nwebapp.json\n, change the socket handler configuration dictionary so that thehost\nvalue is the path to the domain socket, and set theport\nvalue tonull\n.\nAdding contextual information to your logging output\u00b6\nSometimes you want logging output to contain contextual information in\naddition to the parameters passed to the logging call. For example, in a\nnetworked application, it may be desirable to log client-specific information\nin the log (e.g. remote client\u2019s username, or IP address). Although you could\nuse the extra parameter to achieve this, it\u2019s not always convenient to pass\nthe information in this way. While it might be tempting to create\nLogger\ninstances on a per-connection basis, this is not a good idea\nbecause these instances are not garbage collected. While this is not a problem\nin practice, when the number of Logger\ninstances is dependent on the\nlevel of granularity you want to use in logging an application, it could\nbe hard to manage if the number of Logger\ninstances becomes\neffectively unbounded.\nUsing LoggerAdapters to impart contextual information\u00b6\nAn easy way in which you can pass contextual information to be output along\nwith logging event information is to use the LoggerAdapter\nclass.\nThis class is designed to look like a Logger\n, so that you can call\ndebug()\n, info()\n, warning()\n, error()\n,\nexception()\n, critical()\nand log()\n. These methods have the\nsame signatures as their counterparts in Logger\n, so you can use the\ntwo types of instances interchangeably.\nWhen you create an instance of LoggerAdapter\n, you pass it a\nLogger\ninstance and a dict-like object which contains your contextual\ninformation. 
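For instance, constructing and using an adapter might look like this (a minimal sketch; the `connid` key and its value are purely illustrative, not anything special to the logging package):

```python
import logging

logger = logging.getLogger(__name__)

# The second argument is the dict-like object carrying the contextual
# information; 'connid' is just an illustrative key.
adapter = logging.LoggerAdapter(logger, {'connid': '1234'})

# The adapter is used exactly as a Logger would be.
adapter.info('Connection opened')
```
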
When you call one of the logging methods on an instance of\nLoggerAdapter\n, it delegates the call to the underlying instance of\nLogger\npassed to its constructor, and arranges to pass the contextual\ninformation in the delegated call. Here\u2019s a snippet from the code of\nLoggerAdapter\n:\ndef debug(self, msg, /, *args, **kwargs):\n\"\"\"\nDelegate a debug call to the underlying logger, after adding\ncontextual information from this adapter instance.\n\"\"\"\nmsg, kwargs = self.process(msg, kwargs)\nself.logger.debug(msg, *args, **kwargs)\nThe process()\nmethod of LoggerAdapter\nis where the\ncontextual information is added to the logging output. It\u2019s passed the message\nand keyword arguments of the logging call, and it passes back (potentially)\nmodified versions of these to use in the call to the underlying logger. The\ndefault implementation of this method leaves the message alone, but inserts\nan \u2018extra\u2019 key in the keyword argument whose value is the dict-like object\npassed to the constructor. Of course, if you had passed an \u2018extra\u2019 keyword\nargument in the call to the adapter, it will be silently overwritten.\nThe advantage of using \u2018extra\u2019 is that the values in the dict-like object are\nmerged into the LogRecord\ninstance\u2019s __dict__, allowing you to use\ncustomized strings with your Formatter\ninstances which know about\nthe keys of the dict-like object. If you need a different method, e.g. if you\nwant to prepend or append the contextual information to the message string,\nyou just need to subclass LoggerAdapter\nand override\nprocess()\nto do what you need. 
Here is a simple example:\nclass CustomAdapter(logging.LoggerAdapter):\n\"\"\"\nThis example adapter expects the passed in dict-like object to have a\n'connid' key, whose value in brackets is prepended to the log message.\n\"\"\"\ndef process(self, msg, kwargs):\nreturn '[%s] %s' % (self.extra['connid'], msg), kwargs\nwhich you can use like this:\nlogger = logging.getLogger(__name__)\nadapter = CustomAdapter(logger, {'connid': some_conn_id})\nThen any events that you log to the adapter will have the value of\nsome_conn_id\nprepended to the log messages.\nUsing objects other than dicts to pass contextual information\u00b6\nYou don\u2019t need to pass an actual dict to a LoggerAdapter\n- you could\npass an instance of a class which implements __getitem__\nand __iter__\nso\nthat it looks like a dict to logging. This would be useful if you want to\ngenerate values dynamically (whereas the values in a dict would be constant).\nUsing Filters to impart contextual information\u00b6\nYou can also add contextual information to log output using a user-defined\nFilter\n. Filter\ninstances are allowed to modify the LogRecords\npassed to them, including adding additional attributes which can then be output\nusing a suitable format string, or if needed a custom Formatter\n.\nFor example in a web application, the request being processed (or at least,\nthe interesting parts of it) can be stored in a threadlocal\n(threading.local\n) variable, and then accessed from a Filter\nto\nadd, say, information from the request - say, the remote IP address and remote\nuser\u2019s username - to the LogRecord\n, using the attribute names \u2018ip\u2019 and\n\u2018user\u2019 as in the LoggerAdapter\nexample above. In that case, the same format\nstring can be used to get similar output to that shown above. 
Here\u2019s an example\nscript:\nimport logging\nfrom random import choice\nclass ContextFilter(logging.Filter):\n\"\"\"\nThis is a filter which injects contextual information into the log.\nRather than use actual contextual information, we just use random\ndata in this demo.\n\"\"\"\nUSERS = ['jim', 'fred', 'sheila']\nIPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1']\ndef filter(self, record):\nrecord.ip = choice(ContextFilter.IPS)\nrecord.user = choice(ContextFilter.USERS)\nreturn True\nif __name__ == '__main__':\nlevels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)\nlogging.basicConfig(level=logging.DEBUG,\nformat='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')\na1 = logging.getLogger('a.b.c')\na2 = logging.getLogger('d.e.f')\nf = ContextFilter()\na1.addFilter(f)\na2.addFilter(f)\na1.debug('A debug message')\na1.info('An info message with %s', 'some parameters')\nfor x in range(10):\nlvl = choice(levels)\nlvlname = logging.getLevelName(lvl)\na2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')\nwhich, when run, produces something like:\n2010-09-06 22:38:15,292 a.b.c DEBUG IP: 123.231.231.123 User: fred A debug message\n2010-09-06 22:38:15,300 a.b.c INFO IP: 192.168.0.1 User: sheila An info message with some parameters\n2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1 User: sheila A message at CRITICAL level with 2 parameters\n2010-09-06 22:38:15,300 d.e.f ERROR IP: 127.0.0.1 User: jim A message at ERROR level with 2 parameters\n2010-09-06 22:38:15,300 d.e.f DEBUG IP: 127.0.0.1 User: sheila A message at DEBUG level with 2 parameters\n2010-09-06 22:38:15,300 d.e.f ERROR IP: 123.231.231.123 User: fred A message at ERROR level with 2 parameters\n2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 192.168.0.1 User: jim A message at CRITICAL level with 2 parameters\n2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1 User: sheila A message at CRITICAL level with 2 
parameters\n2010-09-06 22:38:15,300 d.e.f DEBUG IP: 192.168.0.1 User: jim A message at DEBUG level with 2 parameters\n2010-09-06 22:38:15,301 d.e.f ERROR IP: 127.0.0.1 User: sheila A message at ERROR level with 2 parameters\n2010-09-06 22:38:15,301 d.e.f DEBUG IP: 123.231.231.123 User: fred A message at DEBUG level with 2 parameters\n2010-09-06 22:38:15,301 d.e.f INFO IP: 123.231.231.123 User: fred A message at INFO level with 2 parameters\nUse of contextvars¶\nSince Python 3.7, the contextvars\nmodule has provided context-local storage\nwhich works for both threading\nand asyncio\nprocessing needs. This type\nof storage may thus be generally preferable to thread-locals. The following example\nshows how, in a multi-threaded environment, logs can be populated with contextual\ninformation such as, for example, request attributes handled by web applications.\nFor the purposes of illustration, say that you have different web applications, each independent of the other but running in the same Python process and using a library common to them. How can each of these applications have their own log, where all logging messages from the library (and other request processing code) are directed to the appropriate application’s log file, while including in the log additional contextual information such as client IP, HTTP request method and client username?\nLet’s assume that the library can be simulated by the following code:\n# webapplib.py\nimport logging\nimport time\nlogger = logging.getLogger(__name__)\ndef useful():\n# Just a representative event logged from the library\nlogger.debug('Hello from webapplib!')\n# Just sleep for a bit so other threads get to run\ntime.sleep(0.01)\nWe can simulate the multiple web applications by means of two simple classes,\nRequest\nand WebApp\n. 
These simulate how real threaded web applications work -\neach request is handled by a thread:\n# main.py\nimport argparse\nfrom contextvars import ContextVar\nimport logging\nimport os\nfrom random import choice\nimport threading\nimport webapplib\nlogger = logging.getLogger(__name__)\nroot = logging.getLogger()\nroot.setLevel(logging.DEBUG)\nclass Request:\n\"\"\"\nA simple dummy request class which just holds dummy HTTP request method,\nclient IP address and client username\n\"\"\"\ndef __init__(self, method, ip, user):\nself.method = method\nself.ip = ip\nself.user = user\n# A dummy set of requests which will be used in the simulation - we'll just pick\n# from this list randomly. Note that all GET requests are from 192.168.2.XXX\n# addresses, whereas POST requests are from 192.16.3.XXX addresses. Three users\n# are represented in the sample requests.\nREQUESTS = [\nRequest('GET', '192.168.2.20', 'jim'),\nRequest('POST', '192.168.3.20', 'fred'),\nRequest('GET', '192.168.2.21', 'sheila'),\nRequest('POST', '192.168.3.21', 'jim'),\nRequest('GET', '192.168.2.22', 'fred'),\nRequest('POST', '192.168.3.22', 'sheila'),\n]\n# Note that the format string includes references to request context information\n# such as HTTP method, client IP and username\nformatter = logging.Formatter('%(threadName)-11s %(appName)s %(name)-9s %(user)-6s %(ip)s %(method)-4s %(message)s')\n# Create our context variables. 
These will be filled at the start of request\n# processing, and used in the logging that happens during that processing\nctx_request = ContextVar('request')\nctx_appname = ContextVar('appname')\nclass InjectingFilter(logging.Filter):\n\"\"\"\nA filter which injects context-specific information into logs and ensures\nthat only information for a specific webapp is included in its log\n\"\"\"\ndef __init__(self, app):\nself.app = app\ndef filter(self, record):\nrequest = ctx_request.get()\nrecord.method = request.method\nrecord.ip = request.ip\nrecord.user = request.user\nrecord.appName = appName = ctx_appname.get()\nreturn appName == self.app.name\nclass WebApp:\n\"\"\"\nA dummy web application class which has its own handler and filter for a\nwebapp-specific log.\n\"\"\"\ndef __init__(self, name):\nself.name = name\nhandler = logging.FileHandler(name + '.log', 'w')\nf = InjectingFilter(self)\nhandler.setFormatter(formatter)\nhandler.addFilter(f)\nroot.addHandler(handler)\nself.num_requests = 0\ndef process_request(self, request):\n\"\"\"\nThis is the dummy method for processing a request. It's called on a\ndifferent thread for every request. 
We store the context information into\nthe context vars before doing anything else.\n\"\"\"\nctx_request.set(request)\nctx_appname.set(self.name)\nself.num_requests += 1\nlogger.debug('Request processing started')\nwebapplib.useful()\nlogger.debug('Request processing finished')\ndef main():\nfn = os.path.splitext(os.path.basename(__file__))[0]\nadhf = argparse.ArgumentDefaultsHelpFormatter\nap = argparse.ArgumentParser(formatter_class=adhf, prog=fn,\ndescription='Simulate a couple of web '\n'applications handling some '\n'requests, showing how request '\n'context can be used to '\n'populate logs')\naa = ap.add_argument\naa('--count', '-c', type=int, default=100, help='How many requests to simulate')\noptions = ap.parse_args()\n# Create the dummy webapps and put them in a list which we can use to select\n# from randomly\napp1 = WebApp('app1')\napp2 = WebApp('app2')\napps = [app1, app2]\nthreads = []\n# Add a common handler which will capture all events\nhandler = logging.FileHandler('app.log', 'w')\nhandler.setFormatter(formatter)\nroot.addHandler(handler)\n# Generate calls to process requests\nfor i in range(options.count):\ntry:\n# Pick an app at random and a request for it to process\napp = choice(apps)\nrequest = choice(REQUESTS)\n# Process the request in its own thread\nt = threading.Thread(target=app.process_request, args=(request,))\nthreads.append(t)\nt.start()\nexcept KeyboardInterrupt:\nbreak\n# Wait for the threads to terminate\nfor t in threads:\nt.join()\nfor app in apps:\nprint('%s processed %s requests' % (app.name, app.num_requests))\nif __name__ == '__main__':\nmain()\nIf you run the above, you should find that roughly half the requests go\ninto app1.log\nand the rest into app2.log\n, and all the requests are\nlogged to app.log\n. Each webapp-specific log will contain log entries for\nonly that webapp, and the request information will be displayed consistently in the\nlog (i.e. 
the information in each dummy request will always appear together in a log\nline). This is illustrated by the following shell output:\n~/logging-contextual-webapp$ python main.py\napp1 processed 51 requests\napp2 processed 49 requests\n~/logging-contextual-webapp$ wc -l *.log\n153 app1.log\n147 app2.log\n300 app.log\n600 total\n~/logging-contextual-webapp$ head -3 app1.log\nThread-3 (process_request) app1 __main__ jim 192.168.3.21 POST Request processing started\nThread-3 (process_request) app1 webapplib jim 192.168.3.21 POST Hello from webapplib!\nThread-5 (process_request) app1 __main__ jim 192.168.3.21 POST Request processing started\n~/logging-contextual-webapp$ head -3 app2.log\nThread-1 (process_request) app2 __main__ sheila 192.168.2.21 GET Request processing started\nThread-1 (process_request) app2 webapplib sheila 192.168.2.21 GET Hello from webapplib!\nThread-2 (process_request) app2 __main__ jim 192.168.2.20 GET Request processing started\n~/logging-contextual-webapp$ head app.log\nThread-1 (process_request) app2 __main__ sheila 192.168.2.21 GET Request processing started\nThread-1 (process_request) app2 webapplib sheila 192.168.2.21 GET Hello from webapplib!\nThread-2 (process_request) app2 __main__ jim 192.168.2.20 GET Request processing started\nThread-3 (process_request) app1 __main__ jim 192.168.3.21 POST Request processing started\nThread-2 (process_request) app2 webapplib jim 192.168.2.20 GET Hello from webapplib!\nThread-3 (process_request) app1 webapplib jim 192.168.3.21 POST Hello from webapplib!\nThread-4 (process_request) app2 __main__ fred 192.168.2.22 GET Request processing started\nThread-5 (process_request) app1 __main__ jim 192.168.3.21 POST Request processing started\nThread-4 (process_request) app2 webapplib fred 192.168.2.22 GET Hello from webapplib!\nThread-6 (process_request) app1 __main__ jim 192.168.3.21 POST Request processing started\n~/logging-contextual-webapp$ grep app1 app1.log | wc -l\n153\n~/logging-contextual-webapp$ grep 
app2 app2.log | wc -l\n147\n~/logging-contextual-webapp$ grep app1 app.log | wc -l\n153\n~/logging-contextual-webapp$ grep app2 app.log | wc -l\n147\nImparting contextual information in handlers\u00b6\nEach Handler\nhas its own chain of filters.\nIf you want to add contextual information to a LogRecord\nwithout leaking\nit to other handlers, you can use a filter that returns\na new LogRecord\ninstead of modifying it in-place, as shown in the following script:\nimport copy\nimport logging\ndef filter(record: logging.LogRecord):\nrecord = copy.copy(record)\nrecord.user = 'jim'\nreturn record\nif __name__ == '__main__':\nlogger = logging.getLogger()\nlogger.setLevel(logging.INFO)\nhandler = logging.StreamHandler()\nformatter = logging.Formatter('%(message)s from %(user)-8s')\nhandler.setFormatter(formatter)\nhandler.addFilter(filter)\nlogger.addHandler(handler)\nlogger.info('A log message')\nLogging to a single file from multiple processes\u00b6\nAlthough logging is thread-safe, and logging to a single file from multiple\nthreads in a single process is supported, logging to a single file from\nmultiple processes is not supported, because there is no standard way to\nserialize access to a single file across multiple processes in Python. If you\nneed to log to a single file from multiple processes, one way of doing this is\nto have all the processes log to a SocketHandler\n, and have a\nseparate process which implements a socket server which reads from the socket\nand logs to file. (If you prefer, you can dedicate one thread in one of the\nexisting processes to perform this function.)\nThis section documents this approach in more detail and\nincludes a working socket receiver which can be used as a starting point for you\nto adapt in your own applications.\nYou could also write your own handler which uses the Lock\nclass from the multiprocessing\nmodule to serialize access to the\nfile from your processes. 
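A bare-bones version of such a handler might look like the following (a sketch for illustration only: the class name is made up, and a real implementation would need to share the one lock with every process, e.g. by passing it to each worker at startup):

```python
import logging
import multiprocessing

class LockingFileHandler(logging.Handler):
    """Illustrative handler that serializes writes to a single file
    across processes with a multiprocessing.Lock."""

    def __init__(self, filename, lock):
        super().__init__()
        self.filename = filename
        self.lock = lock  # a multiprocessing.Lock shared by all processes

    def emit(self, record):
        try:
            msg = self.format(record)
            # Open in append mode per event, so each write is a single
            # lock-protected operation.
            with self.lock:
                with open(self.filename, 'a') as f:
                    f.write(msg + '\n')
        except Exception:
            self.handleError(record)
```

Note that every process must use the same Lock object, created before the worker processes are started; creating a fresh lock in each process would not serialize anything.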
The stdlib FileHandler\nand subclasses do\nnot make use of multiprocessing\n.\nAlternatively, you can use a Queue\nand a QueueHandler\nto send\nall logging events to one of the processes in your multi-process application.\nThe following example script demonstrates how you can do this; in the example\na separate listener process listens for events sent by other processes and logs\nthem according to its own logging configuration. Although the example only\ndemonstrates one way of doing it (for example, you may want to use a listener\nthread rather than a separate listener process \u2013 the implementation would be\nanalogous) it does allow for completely different logging configurations for\nthe listener and the other processes in your application, and can be used as\nthe basis for code meeting your own specific requirements:\n# You'll need these imports in your own code\nimport logging\nimport logging.handlers\nimport multiprocessing\n# Next two import lines for this demo only\nfrom random import choice, random\nimport time\n#\n# Because you'll want to define the logging configurations for listener and workers, the\n# listener and worker process functions take a configurer parameter which is a callable\n# for configuring logging for that process. 
These functions are also passed the queue,\n# which they use for communication.\n#\n# In practice, you can configure the listener however you want, but note that in this\n# simple example, the listener does not apply level or filter logic to received records.\n# In practice, you would probably want to do this logic in the worker processes, to avoid\n# sending events which would be filtered out between processes.\n#\n# The size of the rotated files is made small so you can see the results easily.\ndef listener_configurer():\nroot = logging.getLogger()\nh = logging.handlers.RotatingFileHandler('mptest.log', 'a', 300, 10)\nf = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s')\nh.setFormatter(f)\nroot.addHandler(h)\n# This is the listener process top-level loop: wait for logging events\n# (LogRecords)on the queue and handle them, quit when you get a None for a\n# LogRecord.\ndef listener_process(queue, configurer):\nconfigurer()\nwhile True:\ntry:\nrecord = queue.get()\nif record is None: # We send this as a sentinel to tell the listener to quit.\nbreak\nlogger = logging.getLogger(record.name)\nlogger.handle(record) # No level or filter logic applied - just do it!\nexcept Exception:\nimport sys, traceback\nprint('Whoops! 
Problem:', file=sys.stderr)\ntraceback.print_exc(file=sys.stderr)\n# Arrays used for random selections in this demo\nLEVELS = [logging.DEBUG, logging.INFO, logging.WARNING,\nlogging.ERROR, logging.CRITICAL]\nLOGGERS = ['a.b.c', 'd.e.f']\nMESSAGES = [\n'Random message #1',\n'Random message #2',\n'Random message #3',\n]\n# The worker configuration is done at the start of the worker process run.\n# Note that on Windows you can't rely on fork semantics, so each process\n# will run the logging configuration code when it starts.\ndef worker_configurer(queue):\nh = logging.handlers.QueueHandler(queue) # Just the one handler needed\nroot = logging.getLogger()\nroot.addHandler(h)\n# send all messages, for demo; no other level or filter logic applied.\nroot.setLevel(logging.DEBUG)\n# This is the worker process top-level loop, which just logs ten events with\n# random intervening delays before terminating.\n# The print messages are just so you know it's doing something!\ndef worker_process(queue, configurer):\nconfigurer(queue)\nname = multiprocessing.current_process().name\nprint('Worker started: %s' % name)\nfor i in range(10):\ntime.sleep(random())\nlogger = logging.getLogger(choice(LOGGERS))\nlevel = choice(LEVELS)\nmessage = choice(MESSAGES)\nlogger.log(level, message)\nprint('Worker finished: %s' % name)\n# Here's where the demo gets orchestrated. 
Create the queue, create and start\n# the listener, create ten workers and start them, wait for them to finish,\n# then send a None to the queue to tell the listener to finish.\ndef main():\nqueue = multiprocessing.Queue(-1)\nlistener = multiprocessing.Process(target=listener_process,\nargs=(queue, listener_configurer))\nlistener.start()\nworkers = []\nfor i in range(10):\nworker = multiprocessing.Process(target=worker_process,\nargs=(queue, worker_configurer))\nworkers.append(worker)\nworker.start()\nfor w in workers:\nw.join()\nqueue.put_nowait(None)\nlistener.join()\nif __name__ == '__main__':\nmain()\nA variant of the above script keeps the logging in the main process, in a separate thread:\nimport logging\nimport logging.config\nimport logging.handlers\nfrom multiprocessing import Process, Queue\nimport random\nimport threading\nimport time\ndef logger_thread(q):\nwhile True:\nrecord = q.get()\nif record is None:\nbreak\nlogger = logging.getLogger(record.name)\nlogger.handle(record)\ndef worker_process(q):\nqh = logging.handlers.QueueHandler(q)\nroot = logging.getLogger()\nroot.setLevel(logging.DEBUG)\nroot.addHandler(qh)\nlevels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,\nlogging.CRITICAL]\nloggers = ['foo', 'foo.bar', 'foo.bar.baz',\n'spam', 'spam.ham', 'spam.ham.eggs']\nfor i in range(100):\nlvl = random.choice(levels)\nlogger = logging.getLogger(random.choice(loggers))\nlogger.log(lvl, 'Message no. 
%d', i)\nif __name__ == '__main__':\nq = Queue()\nd = {\n'version': 1,\n'formatters': {\n'detailed': {\n'class': 'logging.Formatter',\n'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'\n}\n},\n'handlers': {\n'console': {\n'class': 'logging.StreamHandler',\n'level': 'INFO',\n},\n'file': {\n'class': 'logging.FileHandler',\n'filename': 'mplog.log',\n'mode': 'w',\n'formatter': 'detailed',\n},\n'foofile': {\n'class': 'logging.FileHandler',\n'filename': 'mplog-foo.log',\n'mode': 'w',\n'formatter': 'detailed',\n},\n'errors': {\n'class': 'logging.FileHandler',\n'filename': 'mplog-errors.log',\n'mode': 'w',\n'level': 'ERROR',\n'formatter': 'detailed',\n},\n},\n'loggers': {\n'foo': {\n'handlers': ['foofile']\n}\n},\n'root': {\n'level': 'DEBUG',\n'handlers': ['console', 'file', 'errors']\n},\n}\nworkers = []\nfor i in range(5):\nwp = Process(target=worker_process, name='worker %d' % (i + 1), args=(q,))\nworkers.append(wp)\nwp.start()\nlogging.config.dictConfig(d)\nlp = threading.Thread(target=logger_thread, args=(q,))\nlp.start()\n# At this point, the main process could do some useful work of its own\n# Once it's done that, it can wait for the workers to terminate...\nfor wp in workers:\nwp.join()\n# And now tell the logging thread to finish up, too\nq.put(None)\nlp.join()\nThis variant shows how you can e.g. apply configuration for particular loggers\n- e.g. the foo\nlogger has a special handler which stores all events in the\nfoo\nsubsystem in a file mplog-foo.log\n. 
This will be used by the logging\nmachinery in the main process (even though the logging events are generated in\nthe worker processes) to direct the messages to the appropriate destinations.\nUsing concurrent.futures.ProcessPoolExecutor\u00b6\nIf you want to use concurrent.futures.ProcessPoolExecutor\nto start\nyour worker processes, you need to create the queue slightly differently.\nInstead of\nqueue = multiprocessing.Queue(-1)\nyou should use\nqueue = multiprocessing.Manager().Queue(-1) # also works with the examples above\nand you can then replace the worker creation from this:\nworkers = []\nfor i in range(10):\nworker = multiprocessing.Process(target=worker_process,\nargs=(queue, worker_configurer))\nworkers.append(worker)\nworker.start()\nfor w in workers:\nw.join()\nto this (remembering to first import concurrent.futures\n):\nwith concurrent.futures.ProcessPoolExecutor(max_workers=10) as executor:\nfor i in range(10):\nexecutor.submit(worker_process, queue, worker_configurer)\nDeploying Web applications using Gunicorn and uWSGI\u00b6\nWhen deploying Web applications using Gunicorn or uWSGI (or similar), multiple worker\nprocesses are created to handle client requests. In such environments, avoid creating\nfile-based handlers directly in your web application. Instead, use a\nSocketHandler\nto log from the web application to a listener in a separate\nprocess. This can be set up using a process management tool such as Supervisor - see\nRunning a logging socket listener in production for more details.\nUsing file rotation\u00b6\nSometimes you want to let a log file grow to a certain size, then open a new\nfile and log to that. You may want to keep a certain number of these files, and\nwhen that many files have been created, rotate the files so that the number of\nfiles and the size of the files both remain bounded. 
For this usage pattern, the logging package provides a RotatingFileHandler:

import glob
import logging
import logging.handlers

LOG_FILENAME = 'logging_rotatingfile_example.out'

# Set up a specific logger with our desired output level
my_logger = logging.getLogger('MyLogger')
my_logger.setLevel(logging.DEBUG)

# Add the log message handler to the logger
handler = logging.handlers.RotatingFileHandler(
    LOG_FILENAME, maxBytes=20, backupCount=5)
my_logger.addHandler(handler)

# Log some messages
for i in range(20):
    my_logger.debug('i = %d' % i)

# See what files are created
logfiles = glob.glob('%s*' % LOG_FILENAME)
for filename in logfiles:
    print(filename)

The result should be 6 separate files, each with part of the log history for the application:

logging_rotatingfile_example.out
logging_rotatingfile_example.out.1
logging_rotatingfile_example.out.2
logging_rotatingfile_example.out.3
logging_rotatingfile_example.out.4
logging_rotatingfile_example.out.5

The most current file is always logging_rotatingfile_example.out, and each time it reaches the size limit it is renamed with the suffix .1. Each of the existing backup files is renamed to increment the suffix (.1 becomes .2, etc.) and the .6 file is erased.

Obviously this example sets the log length much too small as an extreme example. You would want to set maxBytes to an appropriate value.

Use of alternative formatting styles¶

When logging was added to the Python standard library, the only way of formatting messages with variable content was to use the %-formatting method. Since then, Python has gained two new formatting approaches: string.Template (added in Python 2.4) and str.format() (added in Python 2.6). Logging (as of 3.2) provides improved support for these two additional formatting styles. The Formatter class has been enhanced to take an additional, optional keyword parameter named style.
This defaults to\n'%'\n, but other possible values are '{'\nand '$'\n, which correspond\nto the other two formatting styles. Backwards compatibility is maintained by\ndefault (as you would expect), but by explicitly specifying a style parameter,\nyou get the ability to specify format strings which work with\nstr.format()\nor string.Template\n. Here\u2019s an example console\nsession to show the possibilities:\n>>> import logging\n>>> root = logging.getLogger()\n>>> root.setLevel(logging.DEBUG)\n>>> handler = logging.StreamHandler()\n>>> bf = logging.Formatter('{asctime} {name} {levelname:8s} {message}',\n... style='{')\n>>> handler.setFormatter(bf)\n>>> root.addHandler(handler)\n>>> logger = logging.getLogger('foo.bar')\n>>> logger.debug('This is a DEBUG message')\n2010-10-28 15:11:55,341 foo.bar DEBUG This is a DEBUG message\n>>> logger.critical('This is a CRITICAL message')\n2010-10-28 15:12:11,526 foo.bar CRITICAL This is a CRITICAL message\n>>> df = logging.Formatter('$asctime $name ${levelname} $message',\n... style='$')\n>>> handler.setFormatter(df)\n>>> logger.debug('This is a DEBUG message')\n2010-10-28 15:13:06,924 foo.bar DEBUG This is a DEBUG message\n>>> logger.critical('This is a CRITICAL message')\n2010-10-28 15:13:11,494 foo.bar CRITICAL This is a CRITICAL message\n>>>\nNote that the formatting of logging messages for final output to logs is completely independent of how an individual logging message is constructed. That can still use %-formatting, as shown here:\n>>> logger.error('This is an%s %s %s', 'other,', 'ERROR,', 'message')\n2010-10-28 15:19:29,833 foo.bar ERROR This is another, ERROR, message\n>>>\nLogging calls (logger.debug()\n, logger.info()\netc.) only take\npositional parameters for the actual logging message itself, with keyword\nparameters used only for determining options for how to handle the actual\nlogging call (e.g. 
the exc_info\nkeyword parameter to indicate that\ntraceback information should be logged, or the extra\nkeyword parameter\nto indicate additional contextual information to be added to the log). So\nyou cannot directly make logging calls using str.format()\nor\nstring.Template\nsyntax, because internally the logging package\nuses %-formatting to merge the format string and the variable arguments.\nThere would be no changing this while preserving backward compatibility, since\nall logging calls which are out there in existing code will be using %-format\nstrings.\nThere is, however, a way that you can use {}- and $- formatting to construct\nyour individual log messages. Recall that for a message you can use an\narbitrary object as a message format string, and that the logging package will\ncall str()\non that object to get the actual format string. Consider the\nfollowing two classes:\nclass BraceMessage:\ndef __init__(self, fmt, /, *args, **kwargs):\nself.fmt = fmt\nself.args = args\nself.kwargs = kwargs\ndef __str__(self):\nreturn self.fmt.format(*self.args, **self.kwargs)\nclass DollarMessage:\ndef __init__(self, fmt, /, **kwargs):\nself.fmt = fmt\nself.kwargs = kwargs\ndef __str__(self):\nfrom string import Template\nreturn Template(self.fmt).substitute(**self.kwargs)\nEither of these can be used in place of a format string, to allow {}- or\n$-formatting to be used to build the actual \u201cmessage\u201d part which appears in the\nformatted log output in place of \u201c%(message)s\u201d or \u201c{message}\u201d or \u201c$message\u201d.\nIt\u2019s a little unwieldy to use the class names whenever you want to log\nsomething, but it\u2019s quite palatable if you use an alias such as __ (double\nunderscore \u2014 not to be confused with _, the single underscore used as a\nsynonym/alias for gettext.gettext()\nor its brethren).\nThe above classes are not included in Python, though they\u2019re easy enough to\ncopy and paste into your own code. 
They can be used as follows (assuming that\nthey\u2019re declared in a module called wherever\n):\n>>> from wherever import BraceMessage as __\n>>> print(__('Message with {0} {name}', 2, name='placeholders'))\nMessage with 2 placeholders\n>>> class Point: pass\n...\n>>> p = Point()\n>>> p.x = 0.5\n>>> p.y = 0.5\n>>> print(__('Message with coordinates: ({point.x:.2f}, {point.y:.2f})',\n... point=p))\nMessage with coordinates: (0.50, 0.50)\n>>> from wherever import DollarMessage as __\n>>> print(__('Message with $num $what', num=2, what='placeholders'))\nMessage with 2 placeholders\n>>>\nWhile the above examples use print()\nto show how the formatting works, you\nwould of course use logger.debug()\nor similar to actually log using this\napproach.\nOne thing to note is that you pay no significant performance penalty with this\napproach: the actual formatting happens not when you make the logging call, but\nwhen (and if) the logged message is actually about to be output to a log by a\nhandler. So the only slightly unusual thing which might trip you up is that the\nparentheses go around the format string and the arguments, not just the format\nstring. 
That\u2019s because the __ notation is just syntax sugar for a constructor\ncall to one of the XXXMessage\nclasses.\nIf you prefer, you can use a LoggerAdapter\nto achieve a similar effect\nto the above, as in the following example:\nimport logging\nclass Message:\ndef __init__(self, fmt, args):\nself.fmt = fmt\nself.args = args\ndef __str__(self):\nreturn self.fmt.format(*self.args)\nclass StyleAdapter(logging.LoggerAdapter):\ndef log(self, level, msg, /, *args, stacklevel=1, **kwargs):\nif self.isEnabledFor(level):\nmsg, kwargs = self.process(msg, kwargs)\nself.logger.log(level, Message(msg, args), **kwargs,\nstacklevel=stacklevel+1)\nlogger = StyleAdapter(logging.getLogger(__name__))\ndef main():\nlogger.debug('Hello, {}', 'world!')\nif __name__ == '__main__':\nlogging.basicConfig(level=logging.DEBUG)\nmain()\nThe above script should log the message Hello, world!\nwhen run with\nPython 3.8 or later.\nCustomizing LogRecord\n\u00b6\nEvery logging event is represented by a LogRecord\ninstance.\nWhen an event is logged and not filtered out by a logger\u2019s level, a\nLogRecord\nis created, populated with information about the event and\nthen passed to the handlers for that logger (and its ancestors, up to and\nincluding the logger where further propagation up the hierarchy is disabled).\nBefore Python 3.2, there were only two places where this creation was done:\nLogger.makeRecord()\n, which is called in the normal process of logging an event. This invokedLogRecord\ndirectly to create an instance.makeLogRecord()\n, which is called with a dictionary containing attributes to be added to the LogRecord. This is typically invoked when a suitable dictionary has been received over the network (e.g. 
in pickle form via aSocketHandler\n, or in JSON form via anHTTPHandler\n).\nThis has usually meant that if you need to do anything special with a\nLogRecord\n, you\u2019ve had to do one of the following.\nCreate your own\nLogger\nsubclass, which overridesLogger.makeRecord()\n, and set it usingsetLoggerClass()\nbefore any loggers that you care about are instantiated.Add a\nFilter\nto a logger or handler, which does the necessary special manipulation you need when itsfilter()\nmethod is called.\nThe first approach would be a little unwieldy in the scenario where (say)\nseveral different libraries wanted to do different things. Each would attempt\nto set its own Logger\nsubclass, and the one which did this last would\nwin.\nThe second approach works reasonably well for many cases, but does not allow\nyou to e.g. use a specialized subclass of LogRecord\n. Library\ndevelopers can set a suitable filter on their loggers, but they would have to\nremember to do this every time they introduced a new logger (which they would\ndo simply by adding new packages or modules and doing\nlogger = logging.getLogger(__name__)\nat module level). It\u2019s probably one too many things to think about. Developers\ncould also add the filter to a NullHandler\nattached to their\ntop-level logger, but this would not be invoked if an application developer\nattached a handler to a lower-level library logger \u2014 so output from that\nhandler would not reflect the intentions of the library developer.\nIn Python 3.2 and later, LogRecord\ncreation is done through a\nfactory, which you can specify. The factory is just a callable you can set with\nsetLogRecordFactory()\n, and interrogate with\ngetLogRecordFactory()\n. The factory is invoked with the same\nsignature as the LogRecord\nconstructor, as LogRecord\nis the default setting for the factory.\nThis approach allows a custom factory to control all aspects of LogRecord creation. 
For example, you could return a subclass, or just add some additional attributes to the record once created, using a pattern similar to this:

old_factory = logging.getLogRecordFactory()

def record_factory(*args, **kwargs):
    record = old_factory(*args, **kwargs)
    record.custom_attribute = 0xdecafbad
    return record

logging.setLogRecordFactory(record_factory)

This pattern allows different libraries to chain factories together, and as long as they don’t overwrite each other’s attributes or unintentionally overwrite the attributes provided as standard, there should be no surprises. However, it should be borne in mind that each link in the chain adds run-time overhead to all logging operations, and the technique should only be used when the use of a Filter does not provide the desired result.

Subclassing QueueHandler and QueueListener - a ZeroMQ example¶

Subclass QueueHandler¶

You can use a QueueHandler subclass to send messages to other kinds of queues, for example a ZeroMQ ‘publish’ socket. 
In the example below, the socket is created separately and passed to the handler (as its ‘queue’):

import zmq   # using pyzmq, the Python binding for ZeroMQ
import json  # for serializing records portably

ctx = zmq.Context()
sock = zmq.Socket(ctx, zmq.PUB)  # or zmq.PUSH, or other suitable value
sock.bind('tcp://*:5556')        # or wherever

class ZeroMQSocketHandler(QueueHandler):
    def enqueue(self, record):
        self.queue.send_json(record.__dict__)

handler = ZeroMQSocketHandler(sock)

Of course there are other ways of organizing this, for example passing in the data needed by the handler to create the socket:

class ZeroMQSocketHandler(QueueHandler):
    def __init__(self, uri, socktype=zmq.PUB, ctx=None):
        self.ctx = ctx or zmq.Context()
        socket = zmq.Socket(self.ctx, socktype)
        socket.bind(uri)
        super().__init__(socket)

    def enqueue(self, record):
        self.queue.send_json(record.__dict__)

    def close(self):
        self.queue.close()

Subclass QueueListener¶

You can also subclass QueueListener to get messages from other kinds of queues, for example a ZeroMQ ‘subscribe’ socket. Here’s an example:

class ZeroMQSocketListener(QueueListener):
    def __init__(self, uri, /, *handlers, **kwargs):
        self.ctx = kwargs.get('ctx') or zmq.Context()
        socket = zmq.Socket(self.ctx, zmq.SUB)
        socket.setsockopt_string(zmq.SUBSCRIBE, '')  # subscribe to everything
        socket.connect(uri)
        super().__init__(socket, *handlers, **kwargs)

    def dequeue(self):
        msg = self.queue.recv_json()
        return logging.makeLogRecord(msg)

(The snippets above assume that QueueHandler and QueueListener have been imported from logging.handlers, and that logging itself has been imported.)

Subclassing QueueHandler and QueueListener - a pynng example¶

In a similar way to the above section, we can implement a listener and handler using pynng, which is a Python binding to NNG, billed as a spiritual successor to ZeroMQ. The following snippets illustrate – you can test them in an environment which has pynng installed. 
Just for variety, we present the listener first.\nSubclass QueueListener\n\u00b6\n# listener.py\nimport json\nimport logging\nimport logging.handlers\nimport pynng\nDEFAULT_ADDR = \"tcp://localhost:13232\"\ninterrupted = False\nclass NNGSocketListener(logging.handlers.QueueListener):\ndef __init__(self, uri, /, *handlers, **kwargs):\n# Have a timeout for interruptability, and open a\n# subscriber socket\nsocket = pynng.Sub0(listen=uri, recv_timeout=500)\n# The b'' subscription matches all topics\ntopics = kwargs.pop('topics', None) or b''\nsocket.subscribe(topics)\n# We treat the socket as a queue\nsuper().__init__(socket, *handlers, **kwargs)\ndef dequeue(self, block):\ndata = None\n# Keep looping while not interrupted and no data received over the\n# socket\nwhile not interrupted:\ntry:\ndata = self.queue.recv(block=block)\nbreak\nexcept pynng.Timeout:\npass\nexcept pynng.Closed: # sometimes happens when you hit Ctrl-C\nbreak\nif data is None:\nreturn None\n# Get the logging event sent from a publisher\nevent = json.loads(data.decode('utf-8'))\nreturn logging.makeLogRecord(event)\ndef enqueue_sentinel(self):\n# Not used in this implementation, as the socket isn't really a\n# queue\npass\nlogging.getLogger('pynng').propagate = False\nlistener = NNGSocketListener(DEFAULT_ADDR, logging.StreamHandler(), topics=b'')\nlistener.start()\nprint('Press Ctrl-C to stop.')\ntry:\nwhile True:\npass\nexcept KeyboardInterrupt:\ninterrupted = True\nfinally:\nlistener.stop()\nSubclass QueueHandler\n\u00b6\n# sender.py\nimport json\nimport logging\nimport logging.handlers\nimport time\nimport random\nimport pynng\nDEFAULT_ADDR = \"tcp://localhost:13232\"\nclass NNGSocketHandler(logging.handlers.QueueHandler):\ndef __init__(self, uri):\nsocket = pynng.Pub0(dial=uri, send_timeout=500)\nsuper().__init__(socket)\ndef enqueue(self, record):\n# Send the record as UTF-8 encoded JSON\nd = dict(record.__dict__)\ndata = json.dumps(d)\nself.queue.send(data.encode('utf-8'))\ndef 
close(self):\nself.queue.close()\nlogging.getLogger('pynng').propagate = False\nhandler = NNGSocketHandler(DEFAULT_ADDR)\n# Make sure the process ID is in the output\nlogging.basicConfig(level=logging.DEBUG,\nhandlers=[logging.StreamHandler(), handler],\nformat='%(levelname)-8s %(name)10s %(process)6s %(message)s')\nlevels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,\nlogging.CRITICAL)\nlogger_names = ('myapp', 'myapp.lib1', 'myapp.lib2')\nmsgno = 1\nwhile True:\n# Just randomly select some loggers and levels and log away\nlevel = random.choice(levels)\nlogger = logging.getLogger(random.choice(logger_names))\nlogger.log(level, 'Message no. %5d' % msgno)\nmsgno += 1\ndelay = random.random() * 2 + 0.5\ntime.sleep(delay)\nYou can run the above two snippets in separate command shells. If we run the listener in one shell and run the sender in two separate shells, we should see something like the following. In the first sender shell:\n$ python sender.py\nDEBUG myapp 613 Message no. 1\nWARNING myapp.lib2 613 Message no. 2\nCRITICAL myapp.lib2 613 Message no. 3\nWARNING myapp.lib2 613 Message no. 4\nCRITICAL myapp.lib1 613 Message no. 5\nDEBUG myapp 613 Message no. 6\nCRITICAL myapp.lib1 613 Message no. 7\nINFO myapp.lib1 613 Message no. 8\n(and so on)\nIn the second sender shell:\n$ python sender.py\nINFO myapp.lib2 657 Message no. 1\nCRITICAL myapp.lib2 657 Message no. 2\nCRITICAL myapp 657 Message no. 3\nCRITICAL myapp.lib1 657 Message no. 4\nINFO myapp.lib1 657 Message no. 5\nWARNING myapp.lib2 657 Message no. 6\nCRITICAL myapp 657 Message no. 7\nDEBUG myapp.lib1 657 Message no. 8\n(and so on)\nIn the listener shell:\n$ python listener.py\nPress Ctrl-C to stop.\nDEBUG myapp 613 Message no. 1\nWARNING myapp.lib2 613 Message no. 2\nINFO myapp.lib2 657 Message no. 1\nCRITICAL myapp.lib2 613 Message no. 3\nCRITICAL myapp.lib2 657 Message no. 2\nCRITICAL myapp 657 Message no. 3\nWARNING myapp.lib2 613 Message no. 4\nCRITICAL myapp.lib1 613 Message no. 
5\nCRITICAL myapp.lib1 657 Message no. 4\nINFO myapp.lib1 657 Message no. 5\nDEBUG myapp 613 Message no. 6\nWARNING myapp.lib2 657 Message no. 6\nCRITICAL myapp 657 Message no. 7\nCRITICAL myapp.lib1 613 Message no. 7\nINFO myapp.lib1 613 Message no. 8\nDEBUG myapp.lib1 657 Message no. 8\n(and so on)\nAs you can see, the logging from the two sender processes is interleaved in the listener\u2019s output.\nAn example dictionary-based configuration\u00b6\nBelow is an example of a logging configuration dictionary - it\u2019s taken from\nthe documentation on the Django project.\nThis dictionary is passed to dictConfig()\nto put the configuration into effect:\nLOGGING = {\n'version': 1,\n'disable_existing_loggers': False,\n'formatters': {\n'verbose': {\n'format': '{levelname} {asctime} {module} {process:d} {thread:d} {message}',\n'style': '{',\n},\n'simple': {\n'format': '{levelname} {message}',\n'style': '{',\n},\n},\n'filters': {\n'special': {\n'()': 'project.logging.SpecialFilter',\n'foo': 'bar',\n},\n},\n'handlers': {\n'console': {\n'level': 'INFO',\n'class': 'logging.StreamHandler',\n'formatter': 'simple',\n},\n'mail_admins': {\n'level': 'ERROR',\n'class': 'django.utils.log.AdminEmailHandler',\n'filters': ['special']\n}\n},\n'loggers': {\n'django': {\n'handlers': ['console'],\n'propagate': True,\n},\n'django.request': {\n'handlers': ['mail_admins'],\n'level': 'ERROR',\n'propagate': False,\n},\n'myproject.custom': {\n'handlers': ['console', 'mail_admins'],\n'level': 'INFO',\n'filters': ['special']\n}\n}\n}\nFor more information about this configuration, you can see the relevant section of the Django documentation.\nUsing a rotator and namer to customize log rotation processing\u00b6\nAn example of how you can define a namer and rotator is given in the following runnable script, which shows gzip compression of the log file:\nimport gzip\nimport logging\nimport logging.handlers\nimport os\nimport shutil\ndef namer(name):\nreturn name + \".gz\"\ndef rotator(source, 
dest):\nwith open(source, 'rb') as f_in:\nwith gzip.open(dest, 'wb') as f_out:\nshutil.copyfileobj(f_in, f_out)\nos.remove(source)\nrh = logging.handlers.RotatingFileHandler('rotated.log', maxBytes=128, backupCount=5)\nrh.rotator = rotator\nrh.namer = namer\nroot = logging.getLogger()\nroot.setLevel(logging.INFO)\nroot.addHandler(rh)\nf = logging.Formatter('%(asctime)s %(message)s')\nrh.setFormatter(f)\nfor i in range(1000):\nroot.info(f'Message no. {i + 1}')\nAfter running this, you will see six new files, five of which are compressed:\n$ ls rotated.log*\nrotated.log rotated.log.2.gz rotated.log.4.gz\nrotated.log.1.gz rotated.log.3.gz rotated.log.5.gz\n$ zcat rotated.log.1.gz\n2023-01-20 02:28:17,767 Message no. 996\n2023-01-20 02:28:17,767 Message no. 997\n2023-01-20 02:28:17,767 Message no. 998\nA more elaborate multiprocessing example\u00b6\nThe following working example shows how logging can be used with multiprocessing using configuration files. The configurations are fairly simple, but serve to illustrate how more complex ones could be implemented in a real multiprocessing scenario.\nIn the example, the main process spawns a listener process and some worker processes. Each of the main process, the listener and the workers have three separate configurations (the workers all share the same configuration). We can see logging in the main process, how the workers log to a QueueHandler and how the listener implements a QueueListener and a more complex logging configuration, and arranges to dispatch events received via the queue to the handlers specified in the configuration. 
Note that these configurations are purely illustrative, but you should be able to adapt this example to your own scenario.\nHere\u2019s the script - the docstrings and the comments hopefully explain how it works:\nimport logging\nimport logging.config\nimport logging.handlers\nfrom multiprocessing import Process, Queue, Event, current_process\nimport os\nimport random\nimport time\nclass MyHandler:\n\"\"\"\nA simple handler for logging events. It runs in the listener process and\ndispatches events to loggers based on the name in the received record,\nwhich then get dispatched, by the logging system, to the handlers\nconfigured for those loggers.\n\"\"\"\ndef handle(self, record):\nif record.name == \"root\":\nlogger = logging.getLogger()\nelse:\nlogger = logging.getLogger(record.name)\nif logger.isEnabledFor(record.levelno):\n# The process name is transformed just to show that it's the listener\n# doing the logging to files and console\nrecord.processName = '%s (for %s)' % (current_process().name, record.processName)\nlogger.handle(record)\ndef listener_process(q, stop_event, config):\n\"\"\"\nThis could be done in the main process, but is just done in a separate\nprocess for illustrative purposes.\nThis initialises logging according to the specified configuration,\nstarts the listener and waits for the main process to signal completion\nvia the event. 
The listener is then stopped, and the process exits.\n\"\"\"\nlogging.config.dictConfig(config)\nlistener = logging.handlers.QueueListener(q, MyHandler())\nlistener.start()\nif os.name == 'posix':\n# On POSIX, the setup logger will have been configured in the\n# parent process, but should have been disabled following the\n# dictConfig call.\n# On Windows, since fork isn't used, the setup logger won't\n# exist in the child, so it would be created and the message\n# would appear - hence the \"if posix\" clause.\nlogger = logging.getLogger('setup')\nlogger.critical('Should not appear, because of disabled logger ...')\nstop_event.wait()\nlistener.stop()\ndef worker_process(config):\n\"\"\"\nA number of these are spawned for the purpose of illustration. In\npractice, they could be a heterogeneous bunch of processes rather than\nones which are identical to each other.\nThis initialises logging according to the specified configuration,\nand logs a hundred messages with random levels to randomly selected\nloggers.\nA small sleep is added to allow other processes a chance to run. 
This\nis not strictly needed, but it mixes the output from the different\nprocesses a bit more than if it's left out.\n\"\"\"\nlogging.config.dictConfig(config)\nlevels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,\nlogging.CRITICAL]\nloggers = ['foo', 'foo.bar', 'foo.bar.baz',\n'spam', 'spam.ham', 'spam.ham.eggs']\nif os.name == 'posix':\n# On POSIX, the setup logger will have been configured in the\n# parent process, but should have been disabled following the\n# dictConfig call.\n# On Windows, since fork isn't used, the setup logger won't\n# exist in the child, so it would be created and the message\n# would appear - hence the \"if posix\" clause.\nlogger = logging.getLogger('setup')\nlogger.critical('Should not appear, because of disabled logger ...')\nfor i in range(100):\nlvl = random.choice(levels)\nlogger = logging.getLogger(random.choice(loggers))\nlogger.log(lvl, 'Message no. %d', i)\ntime.sleep(0.01)\ndef main():\nq = Queue()\n# The main process gets a simple configuration which prints to the console.\nconfig_initial = {\n'version': 1,\n'handlers': {\n'console': {\n'class': 'logging.StreamHandler',\n'level': 'INFO'\n}\n},\n'root': {\n'handlers': ['console'],\n'level': 'DEBUG'\n}\n}\n# The worker process configuration is just a QueueHandler attached to the\n# root logger, which allows all messages to be sent to the queue.\n# We disable existing loggers to disable the \"setup\" logger used in the\n# parent process. 
This is needed on POSIX because the logger will\n# be there in the child following a fork().\nconfig_worker = {\n'version': 1,\n'disable_existing_loggers': True,\n'handlers': {\n'queue': {\n'class': 'logging.handlers.QueueHandler',\n'queue': q\n}\n},\n'root': {\n'handlers': ['queue'],\n'level': 'DEBUG'\n}\n}\n# The listener process configuration shows that the full flexibility of\n# logging configuration is available to dispatch events to handlers however\n# you want.\n# We disable existing loggers to disable the \"setup\" logger used in the\n# parent process. This is needed on POSIX because the logger will\n# be there in the child following a fork().\nconfig_listener = {\n'version': 1,\n'disable_existing_loggers': True,\n'formatters': {\n'detailed': {\n'class': 'logging.Formatter',\n'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'\n},\n'simple': {\n'class': 'logging.Formatter',\n'format': '%(name)-15s %(levelname)-8s %(processName)-10s %(message)s'\n}\n},\n'handlers': {\n'console': {\n'class': 'logging.StreamHandler',\n'formatter': 'simple',\n'level': 'INFO'\n},\n'file': {\n'class': 'logging.FileHandler',\n'filename': 'mplog.log',\n'mode': 'w',\n'formatter': 'detailed'\n},\n'foofile': {\n'class': 'logging.FileHandler',\n'filename': 'mplog-foo.log',\n'mode': 'w',\n'formatter': 'detailed'\n},\n'errors': {\n'class': 'logging.FileHandler',\n'filename': 'mplog-errors.log',\n'mode': 'w',\n'formatter': 'detailed',\n'level': 'ERROR'\n}\n},\n'loggers': {\n'foo': {\n'handlers': ['foofile']\n}\n},\n'root': {\n'handlers': ['console', 'file', 'errors'],\n'level': 'DEBUG'\n}\n}\n# Log some initial events, just to show that logging in the parent works\n# normally.\nlogging.config.dictConfig(config_initial)\nlogger = logging.getLogger('setup')\nlogger.info('About to create workers ...')\nworkers = []\nfor i in range(5):\nwp = Process(target=worker_process, name='worker %d' % (i + 
1),\nargs=(config_worker,))\nworkers.append(wp)\nwp.start()\nlogger.info('Started worker: %s', wp.name)\nlogger.info('About to create listener ...')\nstop_event = Event()\nlp = Process(target=listener_process, name='listener',\nargs=(q, stop_event, config_listener))\nlp.start()\nlogger.info('Started listener')\n# We now hang around for the workers to finish their work.\nfor wp in workers:\nwp.join()\n# Workers all done, listening can now stop.\n# Logging in the parent still works normally.\nlogger.info('Telling listener to stop ...')\nstop_event.set()\nlp.join()\nlogger.info('All done.')\nif __name__ == '__main__':\nmain()\nInserting a BOM into messages sent to a SysLogHandler\u00b6\nRFC 5424 requires that a Unicode message be sent to a syslog daemon as a set of bytes which have the following structure: an optional pure-ASCII component, followed by a UTF-8 Byte Order Mark (BOM), followed by Unicode encoded using UTF-8. (See the relevant section of the specification.)\nIn Python 3.1, code was added to\nSysLogHandler\nto insert a BOM into the message, but\nunfortunately, it was implemented incorrectly, with the BOM appearing at the\nbeginning of the message and hence not allowing any pure-ASCII component to\nappear before it.\nAs this behaviour is broken, the incorrect BOM insertion code is being removed from Python 3.2.4 and later. 
However, it is not being replaced, and if you want to produce RFC 5424-compliant messages which include a BOM, an optional pure-ASCII sequence before it and arbitrary Unicode after it, encoded using UTF-8, then you need to do the following:\nAttach a\nFormatter\ninstance to yourSysLogHandler\ninstance, with a format string such as:'ASCII section\\ufeffUnicode section'\nThe Unicode code point U+FEFF, when encoded using UTF-8, will be encoded as a UTF-8 BOM \u2013 the byte-string\nb'\\xef\\xbb\\xbf'\n.Replace the ASCII section with whatever placeholders you like, but make sure that the data that appears in there after substitution is always ASCII (that way, it will remain unchanged after UTF-8 encoding).\nReplace the Unicode section with whatever placeholders you like; if the data which appears there after substitution contains characters outside the ASCII range, that\u2019s fine \u2013 it will be encoded using UTF-8.\nThe formatted message will be encoded using UTF-8 encoding by\nSysLogHandler\n. If you follow the above rules, you should be able to produce\nRFC 5424-compliant messages. If you don\u2019t, logging may not complain, but your\nmessages will not be RFC 5424-compliant, and your syslog daemon may complain.\nImplementing structured logging\u00b6\nAlthough most logging messages are intended for reading by humans, and thus not readily machine-parseable, there might be circumstances where you want to output messages in a structured format which is capable of being parsed by a program (without needing complex regular expressions to parse the log message). This is straightforward to achieve using the logging package. 
There are a number of ways in which this could be achieved, but the following is a simple approach which uses JSON to serialise the event in a machine-parseable manner:\nimport json\nimport logging\nclass StructuredMessage:\ndef __init__(self, message, /, **kwargs):\nself.message = message\nself.kwargs = kwargs\ndef __str__(self):\nreturn '%s >>> %s' % (self.message, json.dumps(self.kwargs))\n_ = StructuredMessage # optional, to improve readability\nlogging.basicConfig(level=logging.INFO, format='%(message)s')\nlogging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456))\nIf the above script is run, it prints:\nmessage 1 >>> {\"fnum\": 123.456, \"num\": 123, \"bar\": \"baz\", \"foo\": \"bar\"}\nNote that the order of items might be different according to the version of Python used.\nIf you need more specialised processing, you can use a custom JSON encoder, as in the following complete example:\nimport json\nimport logging\nclass Encoder(json.JSONEncoder):\ndef default(self, o):\nif isinstance(o, set):\nreturn tuple(o)\nelif isinstance(o, str):\nreturn o.encode('unicode_escape').decode('ascii')\nreturn super().default(o)\nclass StructuredMessage:\ndef __init__(self, message, /, **kwargs):\nself.message = message\nself.kwargs = kwargs\ndef __str__(self):\ns = Encoder().encode(self.kwargs)\nreturn '%s >>> %s' % (self.message, s)\n_ = StructuredMessage # optional, to improve readability\ndef main():\nlogging.basicConfig(level=logging.INFO, format='%(message)s')\nlogging.info(_('message 1', set_value={1, 2, 3}, snowman='\\u2603'))\nif __name__ == '__main__':\nmain()\nWhen the above script is run, it prints:\nmessage 1 >>> {\"snowman\": \"\\u2603\", \"set_value\": [1, 2, 3]}\nNote that the order of items might be different according to the version of Python used.\nCustomizing handlers with dictConfig()\n\u00b6\nThere are times when you want to customize logging handlers in particular ways,\nand if you use dictConfig()\nyou may be able to do this 
without subclassing. As an example, consider that you may want to set the ownership of a log file. On POSIX, this is easily done using shutil.chown(), but the file handlers in the stdlib don't offer built-in support. You can customize handler creation using a plain function such as:

    def owned_file_handler(filename, mode='a', encoding=None, owner=None):
        if owner:
            if not os.path.exists(filename):
                open(filename, 'a').close()
            shutil.chown(filename, *owner)
        return logging.FileHandler(filename, mode, encoding)

You can then specify, in a logging configuration passed to dictConfig(), that a logging handler be created by calling this function:

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'default': {
                'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
            },
        },
        'handlers': {
            'file': {
                # The values below are popped from this dictionary and
                # used to create the handler, set the handler's level and
                # its formatter.
                '()': owned_file_handler,
                'level': 'DEBUG',
                'formatter': 'default',
                # The values below are passed to the handler creator callable
                # as keyword arguments.
                'owner': ['pulse', 'pulse'],
                'filename': 'chowntest.log',
                'mode': 'w',
                'encoding': 'utf-8',
            },
        },
        'root': {
            'handlers': ['file'],
            'level': 'DEBUG',
        },
    }

In this example I am setting the ownership using the pulse user and group, just for the purposes of illustration.
Putting it together into a working script, chowntest.py:

    import logging, logging.config, os, shutil

    def owned_file_handler(filename, mode='a', encoding=None, owner=None):
        if owner:
            if not os.path.exists(filename):
                open(filename, 'a').close()
            shutil.chown(filename, *owner)
        return logging.FileHandler(filename, mode, encoding)

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'default': {
                'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
            },
        },
        'handlers': {
            'file': {
                # The values below are popped from this dictionary and
                # used to create the handler, set the handler's level and
                # its formatter.
                '()': owned_file_handler,
                'level': 'DEBUG',
                'formatter': 'default',
                # The values below are passed to the handler creator callable
                # as keyword arguments.
                'owner': ['pulse', 'pulse'],
                'filename': 'chowntest.log',
                'mode': 'w',
                'encoding': 'utf-8',
            },
        },
        'root': {
            'handlers': ['file'],
            'level': 'DEBUG',
        },
    }

    logging.config.dictConfig(LOGGING)
    logger = logging.getLogger('mylogger')
    logger.debug('A debug message')

To run this, you will probably need to run as root:

    $ sudo python3.3 chowntest.py
    $ cat chowntest.log
    2013-11-05 09:34:51,128 DEBUG mylogger A debug message
    $ ls -l chowntest.log
    -rw-r--r-- 1 pulse pulse 55 2013-11-05 09:34 chowntest.log

Note that this example uses Python 3.3 because that's where shutil.chown() makes an appearance. This approach should work with any Python version that supports dictConfig() - namely, Python 2.7, 3.2 or later. With pre-3.3 versions, you would need to implement the actual ownership change using e.g. os.chown().

In practice, the handler-creating function may be in a utility module somewhere in your project. Instead of the line in the configuration:

    '()': owned_file_handler,

you could use e.g.:

    '()': 'ext://project.util.owned_file_handler',

where project.util can be replaced with the actual name of the package where the function resides.
In the above working script, using 'ext://__main__.owned_file_handler' should work. Here, the actual callable is resolved by dictConfig() from the ext:// specification.

This example hopefully also points the way to how you could implement other types of file change - e.g. setting specific POSIX permission bits - in the same way, using os.chmod().

Of course, the approach could also be extended to types of handler other than a FileHandler - for example, one of the rotating file handlers, or a different type of handler altogether.

Using particular formatting styles throughout your application¶

In Python 3.2, the Formatter gained a style keyword parameter which, while defaulting to % for backward compatibility, allowed the specification of { or $ to support the formatting approaches supported by str.format() and string.Template. Note that this governs the formatting of logging messages for final output to logs, and is completely orthogonal to how an individual logging message is constructed. Logging calls (debug(), info() etc.) only take positional parameters for the actual logging message itself, with keyword parameters used only for determining options for how to handle the logging call (e.g. the exc_info keyword parameter to indicate that traceback information should be logged, or the extra keyword parameter to indicate additional contextual information to be added to the log). So you cannot directly make logging calls using str.format() or string.Template syntax, because internally the logging package uses %-formatting to merge the format string and the variable arguments.
There would be no changing this while preserving backward compatibility, since all logging calls which are out there in existing code will be using %-format strings.

There have been suggestions to associate format styles with specific loggers, but that approach also runs into backward compatibility problems because any existing code could be using a given logger name and using %-formatting.

For logging to work interoperably between any third-party libraries and your code, decisions about formatting need to be made at the level of the individual logging call. This opens up a couple of ways in which alternative formatting styles can be accommodated.

Using LogRecord factories¶

In Python 3.2, along with the Formatter changes mentioned above, the logging package gained the ability to allow users to set their own LogRecord subclasses, using the setLogRecordFactory() function. You can use this to set your own subclass of LogRecord, which does the Right Thing by overriding the getMessage() method. The base class implementation of this method is where the msg % args formatting happens, and where you can substitute your alternate formatting; however, you should be careful to support all formatting styles and allow %-formatting as the default, to ensure interoperability with other code. Care should also be taken to call str(self.msg), just as the base implementation does.

Refer to the reference documentation on setLogRecordFactory() and LogRecord for more information.

Using custom message objects¶

There is another, perhaps simpler way that you can use {}- and $- formatting to construct your individual log messages. You may recall (from Using arbitrary objects as messages) that when logging you can use an arbitrary object as a message format string, and that the logging package will call str() on that object to get the actual format string.
Consider the following two classes:

    class BraceMessage:
        def __init__(self, fmt, /, *args, **kwargs):
            self.fmt = fmt
            self.args = args
            self.kwargs = kwargs

        def __str__(self):
            return self.fmt.format(*self.args, **self.kwargs)

    class DollarMessage:
        def __init__(self, fmt, /, **kwargs):
            self.fmt = fmt
            self.kwargs = kwargs

        def __str__(self):
            from string import Template
            return Template(self.fmt).substitute(**self.kwargs)

Either of these can be used in place of a format string, to allow {}- or $-formatting to be used to build the actual "message" part which appears in the formatted log output in place of "%(message)s" or "{message}" or "$message". If you find it a little unwieldy to use the class names whenever you want to log something, you can make it more palatable if you use an alias such as M or _ for the message (or perhaps __, if you are using _ for localization).

Examples of this approach are given below. Firstly, formatting with str.format():

    >>> __ = BraceMessage
    >>> print(__('Message with {0} {1}', 2, 'placeholders'))
    Message with 2 placeholders
    >>> class Point: pass
    ...
    >>> p = Point()
    >>> p.x = 0.5
    >>> p.y = 0.5
    >>> print(__('Message with coordinates: ({point.x:.2f}, {point.y:.2f})', point=p))
    Message with coordinates: (0.50, 0.50)

Secondly, formatting with string.Template:

    >>> __ = DollarMessage
    >>> print(__('Message with $num $what', num=2, what='placeholders'))
    Message with 2 placeholders
    >>>

One thing to note is that you pay no significant performance penalty with this approach: the actual formatting happens not when you make the logging call, but when (and if) the logged message is actually about to be output to a log by a handler. So the only slightly unusual thing which might trip you up is that the parentheses go around the format string and the arguments, not just the format string.
That\u2019s because the __ notation is just syntax sugar for a constructor\ncall to one of the XXXMessage\nclasses shown above.\nConfiguring filters with dictConfig()\n\u00b6\nYou can configure filters using dictConfig()\n, though it\nmight not be obvious at first glance how to do it (hence this recipe). Since\nFilter\nis the only filter class included in the standard\nlibrary, and it is unlikely to cater to many requirements (it\u2019s only there as a\nbase class), you will typically need to define your own Filter\nsubclass with an overridden filter()\nmethod. To do this,\nspecify the ()\nkey in the configuration dictionary for the filter,\nspecifying a callable which will be used to create the filter (a class is the\nmost obvious, but you can provide any callable which returns a\nFilter\ninstance). Here is a complete example:\nimport logging\nimport logging.config\nimport sys\nclass MyFilter(logging.Filter):\ndef __init__(self, param=None):\nself.param = param\ndef filter(self, record):\nif self.param is None:\nallow = True\nelse:\nallow = self.param not in record.msg\nif allow:\nrecord.msg = 'changed: ' + record.msg\nreturn allow\nLOGGING = {\n'version': 1,\n'filters': {\n'myfilter': {\n'()': MyFilter,\n'param': 'noshow',\n}\n},\n'handlers': {\n'console': {\n'class': 'logging.StreamHandler',\n'filters': ['myfilter']\n}\n},\n'root': {\n'level': 'DEBUG',\n'handlers': ['console']\n},\n}\nif __name__ == '__main__':\nlogging.config.dictConfig(LOGGING)\nlogging.debug('hello')\nlogging.debug('hello - noshow')\nThis example shows how you can pass configuration data to the callable which constructs the instance, in the form of keyword parameters. When run, the above script will print:\nchanged: hello\nwhich shows that the filter is working as configured.\nA couple of extra points to note:\nIf you can\u2019t refer to the callable directly in the configuration (e.g. 
if it lives in a different module, and you can't import it directly where the configuration dictionary is), you can use the form ext://... as described in Access to external objects. For example, you could have used the text 'ext://__main__.MyFilter' instead of MyFilter in the above example.

As well as for filters, this technique can also be used to configure custom handlers and formatters. See User-defined objects for more information on how logging supports using user-defined objects in its configuration, and see the other cookbook recipe Customizing handlers with dictConfig() above.

Customized exception formatting¶

There might be times when you want to do customized exception formatting - for argument's sake, let's say you want exactly one line per logged event, even when exception information is present. You can do this with a custom formatter class, as shown in the following example:

    import logging

    class OneLineExceptionFormatter(logging.Formatter):
        def formatException(self, exc_info):
            """
            Format an exception so that it prints on a single line.
            """
            result = super().formatException(exc_info)
            return repr(result)  # or format into one line however you want to

        def format(self, record):
            s = super().format(record)
            if record.exc_text:
                s = s.replace('\n', '') + '|'
            return s

    def configure_logging():
        fh = logging.FileHandler('output.txt', 'w')
        f = OneLineExceptionFormatter('%(asctime)s|%(levelname)s|%(message)s|',
                                      '%d/%m/%Y %H:%M:%S')
        fh.setFormatter(f)
        root = logging.getLogger()
        root.setLevel(logging.DEBUG)
        root.addHandler(fh)

    def main():
        configure_logging()
        logging.info('Sample message')
        try:
            x = 1 / 0
        except ZeroDivisionError as e:
            logging.exception('ZeroDivisionError: %s', e)

    if __name__ == '__main__':
        main()

When run, this produces a file with exactly two lines:

    28/01/2015 07:21:23|INFO|Sample message|
    28/01/2015 07:21:23|ERROR|ZeroDivisionError: division by zero|'Traceback (most recent call last):\n
    File "logtest7.py", line 30, in main\n x = 1 / 0\nZeroDivisionError: division by zero'|

While the above treatment is simplistic, it points the way to how exception information can be formatted to your liking. The traceback module may be helpful for more specialized needs.

Speaking logging messages¶

There might be situations when it is desirable to have logging messages rendered in an audible rather than a visible format. This is easy to do if you have text-to-speech (TTS) functionality available in your system, even if it doesn't have a Python binding. Most TTS systems have a command line program you can run, and this can be invoked from a handler using subprocess. It's assumed here that TTS command line programs won't expect to interact with users or take a long time to complete, that the frequency of logged messages will not be so high as to swamp the user with messages, and that it's acceptable to have the messages spoken one at a time rather than concurrently. The example implementation below waits for one message to be spoken before the next is processed, and this might cause other handlers to be kept waiting.
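If that blocking is a concern, one way to avoid it (a sketch, not part of the original recipe) is to put the slow handler behind the standard library's QueueHandler/QueueListener pair, so that logging calls return immediately and the slow handler runs on the listener's background thread. Here, SlowHandler is a stand-in for a handler that shells out to an external program:

```python
import logging
import queue
import time
from logging.handlers import QueueHandler, QueueListener

class SlowHandler(logging.Handler):
    """Stand-in for a slow handler, e.g. one invoking a TTS program."""
    def __init__(self):
        super().__init__()
        self.seen = []

    def emit(self, record):
        time.sleep(0.1)              # simulate a slow external program
        self.seen.append(record.getMessage())

log_queue = queue.Queue(-1)          # unbounded queue
slow = SlowHandler()
listener = QueueListener(log_queue, slow)
listener.start()

logger = logging.getLogger('tts_demo')
logger.setLevel(logging.DEBUG)
logger.propagate = False
logger.addHandler(QueueHandler(log_queue))

start = time.monotonic()
logger.info('Hello')
logger.info('Goodbye')
elapsed = time.monotonic() - start   # the logging calls return almost immediately

listener.stop()                      # drains queued records, then stops the thread
```

The two logging calls complete in well under the 0.2 seconds the handler spends "speaking", because the slow work happens on the listener thread.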
Here is a short example showing the approach, which assumes that the espeak TTS package is available:

    import logging
    import subprocess
    import sys

    class TTSHandler(logging.Handler):
        def emit(self, record):
            msg = self.format(record)
            # Speak slowly in a female English voice
            cmd = ['espeak', '-s150', '-ven+f3', msg]
            p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                 stderr=subprocess.STDOUT)
            # wait for the program to finish
            p.communicate()

    def configure_logging():
        h = TTSHandler()
        root = logging.getLogger()
        root.addHandler(h)
        # the default formatter just returns the message
        root.setLevel(logging.DEBUG)

    def main():
        logging.info('Hello')
        logging.debug('Goodbye')

    if __name__ == '__main__':
        configure_logging()
        sys.exit(main())

When run, this script should say "Hello" and then "Goodbye" in a female voice.

The above approach can, of course, be adapted to other TTS systems and even other systems altogether which can process messages via external programs run from a command line.

Buffering logging messages and outputting them conditionally¶

There might be situations where you want to log messages in a temporary area and only output them if a certain condition occurs. For example, you may want to start logging debug events in a function, and if the function completes without errors, you don't want to clutter the log with the collected debug information, but if there is an error, you want all the debug information to be output as well as the error.

Here is an example which shows how you could do this using a decorator for your functions where you want logging to behave this way. It makes use of the logging.handlers.MemoryHandler, which allows buffering of logged events until some condition occurs, at which point the buffered events are flushed - passed to another handler (the target handler) for processing.
By default, the MemoryHandler is flushed when its buffer gets filled up or an event whose level is greater than or equal to a specified threshold is seen. You can use this recipe with a more specialised subclass of MemoryHandler if you want custom flushing behavior.

The example script has a simple function, foo, which just cycles through all the logging levels, writing to sys.stderr to say what level it's about to log at, and then actually logging a message at that level. You can pass a parameter to foo which, if true, will log at ERROR and CRITICAL levels - otherwise, it only logs at DEBUG, INFO and WARNING levels.

The script just arranges to decorate foo with a decorator which will do the conditional logging that's required. The decorator takes a logger as a parameter and attaches a memory handler for the duration of the call to the decorated function. The decorator can be additionally parameterised using a target handler, a level at which flushing should occur, and a capacity for the buffer (number of records buffered).
These default to a StreamHandler which writes to sys.stderr, logging.ERROR and 100 respectively.

Here's the script:

    import logging
    from logging.handlers import MemoryHandler
    import sys

    logger = logging.getLogger(__name__)
    logger.addHandler(logging.NullHandler())

    def log_if_errors(logger, target_handler=None, flush_level=None, capacity=None):
        if target_handler is None:
            target_handler = logging.StreamHandler()
        if flush_level is None:
            flush_level = logging.ERROR
        if capacity is None:
            capacity = 100
        handler = MemoryHandler(capacity, flushLevel=flush_level, target=target_handler)

        def decorator(fn):
            def wrapper(*args, **kwargs):
                logger.addHandler(handler)
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    logger.exception('call failed')
                    raise
                finally:
                    super(MemoryHandler, handler).flush()
                    logger.removeHandler(handler)
            return wrapper

        return decorator

    def write_line(s):
        sys.stderr.write('%s\n' % s)

    def foo(fail=False):
        write_line('about to log at DEBUG ...')
        logger.debug('Actually logged at DEBUG')
        write_line('about to log at INFO ...')
        logger.info('Actually logged at INFO')
        write_line('about to log at WARNING ...')
        logger.warning('Actually logged at WARNING')
        if fail:
            write_line('about to log at ERROR ...')
            logger.error('Actually logged at ERROR')
            write_line('about to log at CRITICAL ...')
            logger.critical('Actually logged at CRITICAL')
        return fail

    decorated_foo = log_if_errors(logger)(foo)

    if __name__ == '__main__':
        logger.setLevel(logging.DEBUG)
        write_line('Calling undecorated foo with False')
        assert not foo(False)
        write_line('Calling undecorated foo with True')
        assert foo(True)
        write_line('Calling decorated foo with False')
        assert not decorated_foo(False)
        write_line('Calling decorated foo with True')
        assert decorated_foo(True)

When this script is run, the following output should be observed:

    Calling undecorated foo with False
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING
    ...
    Calling undecorated foo with True
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    about to log at ERROR ...
    about to log at CRITICAL ...
    Calling decorated foo with False
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    Calling decorated foo with True
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    about to log at ERROR ...
    Actually logged at DEBUG
    Actually logged at INFO
    Actually logged at WARNING
    Actually logged at ERROR
    about to log at CRITICAL ...
    Actually logged at CRITICAL

As you can see, actual logging output only occurs when an event is logged whose severity is ERROR or greater, but in that case, any previous events at lower severities are also logged.

You can of course use the conventional means of decoration:

    @log_if_errors(logger)
    def foo(fail=False):
        ...

Sending logging messages to email, with buffering¶

To illustrate how you can send log messages via email, so that a set number of messages are sent per email, you can subclass BufferingHandler. In the following example, which you can adapt to suit your specific needs, a simple test harness is provided which allows you to run the script with command line arguments specifying what you typically need to send things via SMTP.
(Run the downloaded script with the -h argument to see the required and optional arguments.)

    import logging
    import logging.handlers
    import smtplib

    class BufferingSMTPHandler(logging.handlers.BufferingHandler):
        def __init__(self, mailhost, port, username, password, fromaddr, toaddrs,
                     subject, capacity):
            logging.handlers.BufferingHandler.__init__(self, capacity)
            self.mailhost = mailhost
            self.mailport = port
            self.username = username
            self.password = password
            self.fromaddr = fromaddr
            if isinstance(toaddrs, str):
                toaddrs = [toaddrs]
            self.toaddrs = toaddrs
            self.subject = subject
            self.setFormatter(logging.Formatter("%(asctime)s %(levelname)-5s %(message)s"))

        def flush(self):
            if len(self.buffer) > 0:
                try:
                    smtp = smtplib.SMTP(self.mailhost, self.mailport)
                    smtp.starttls()
                    smtp.login(self.username, self.password)
                    msg = "From: %s\r\nTo: %s\r\nSubject: %s\r\n\r\n" % (self.fromaddr, ','.join(self.toaddrs), self.subject)
                    for record in self.buffer:
                        s = self.format(record)
                        msg = msg + s + "\r\n"
                    smtp.sendmail(self.fromaddr, self.toaddrs, msg)
                    smtp.quit()
                except Exception:
                    if logging.raiseExceptions:
                        raise
                self.buffer = []

    if __name__ == '__main__':
        import argparse
        ap = argparse.ArgumentParser()
        aa = ap.add_argument
        aa('host', metavar='HOST', help='SMTP server')
        aa('--port', '-p', type=int, default=587, help='SMTP port')
        aa('user', metavar='USER', help='SMTP username')
        aa('password', metavar='PASSWORD', help='SMTP password')
        aa('to', metavar='TO', help='Addressee for emails')
        aa('sender', metavar='SENDER', help='Sender email address')
        aa('--subject', '-s',
           default='Test Logging email from Python logging module (buffering)',
           help='Subject of email')
        options = ap.parse_args()
        logger = logging.getLogger()
        logger.setLevel(logging.DEBUG)
        h = BufferingSMTPHandler(options.host, options.port, options.user,
                                 options.password, options.sender,
                                 options.to, options.subject, 10)
        logger.addHandler(h)
        for i in
        range(102):
            logger.info("Info index = %d", i)
        h.flush()
        h.close()

If you run this script and your SMTP server is correctly set up, you should find that it sends eleven emails to the addressee you specify. The first ten emails will each have ten log messages, and the eleventh will have two messages. That makes up 102 messages as specified in the script.

Formatting times using UTC (GMT) via configuration¶

Sometimes you want to format times using UTC, which can be done using a class such as UTCFormatter, shown below:

    import logging
    import time

    class UTCFormatter(logging.Formatter):
        converter = time.gmtime

and you can then use the UTCFormatter in your code instead of Formatter. If you want to do that via configuration, you can use the dictConfig() API with an approach illustrated by the following complete example:

    import logging
    import logging.config
    import time

    class UTCFormatter(logging.Formatter):
        converter = time.gmtime

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'utc': {
                '()': UTCFormatter,
                'format': '%(asctime)s %(message)s',
            },
            'local': {
                'format': '%(asctime)s %(message)s',
            }
        },
        'handlers': {
            'console1': {
                'class': 'logging.StreamHandler',
                'formatter': 'utc',
            },
            'console2': {
                'class': 'logging.StreamHandler',
                'formatter': 'local',
            },
        },
        'root': {
            'handlers': ['console1', 'console2'],
        }
    }

    if __name__ == '__main__':
        logging.config.dictConfig(LOGGING)
        logging.warning('The local time is %s', time.asctime())

When this script is run, it should print something like:

    2015-10-17 12:53:29,501 The local time is Sat Oct 17 13:53:29 2015
    2015-10-17 13:53:29,501 The local time is Sat Oct 17 13:53:29 2015

showing how the time is formatted both as local time and UTC, one for each handler.

Using a context manager for selective logging¶

There are times when it would be useful to temporarily change the logging configuration and revert it back after doing something.
For this, a context manager is the most obvious way of saving and restoring the logging context. Here is a simple example of such a context manager, which allows you to optionally change the logging level and add a logging handler purely in the scope of the context manager:

    import logging
    import sys

    class LoggingContext:
        def __init__(self, logger, level=None, handler=None, close=True):
            self.logger = logger
            self.level = level
            self.handler = handler
            self.close = close

        def __enter__(self):
            if self.level is not None:
                self.old_level = self.logger.level
                self.logger.setLevel(self.level)
            if self.handler:
                self.logger.addHandler(self.handler)

        def __exit__(self, et, ev, tb):
            if self.level is not None:
                self.logger.setLevel(self.old_level)
            if self.handler:
                self.logger.removeHandler(self.handler)
            if self.handler and self.close:
                self.handler.close()
            # implicit return of None => don't swallow exceptions

If you specify a level value, the logger's level is set to that value in the scope of the with block covered by the context manager. If you specify a handler, it is added to the logger on entry to the block and removed on exit from the block. You can also ask the manager to close the handler for you on block exit - you could do this if you don't need the handler any more.

To illustrate how it works, we can add the following block of code to the above:

    if __name__ == '__main__':
        logger = logging.getLogger('foo')
        logger.addHandler(logging.StreamHandler())
        logger.setLevel(logging.INFO)
        logger.info('1. This should appear just once on stderr.')
        logger.debug('2. This should not appear.')
        with LoggingContext(logger, level=logging.DEBUG):
            logger.debug('3. This should appear once on stderr.')
        logger.debug('4. This should not appear.')
        h = logging.StreamHandler(sys.stdout)
        with LoggingContext(logger, level=logging.DEBUG, handler=h, close=True):
            logger.debug('5. This should appear twice - once on stderr and once on stdout.')
        logger.info('6.
This should appear just once on stderr.')
        logger.debug('7. This should not appear.')

We initially set the logger's level to INFO, so message #1 appears and message #2 doesn't. We then change the level to DEBUG temporarily in the following with block, and so message #3 appears. After the block exits, the logger's level is restored to INFO and so message #4 doesn't appear. In the next with block, we set the level to DEBUG again but also add a handler writing to sys.stdout. Thus, message #5 appears twice on the console (once via stderr and once via stdout). After the with statement's completion, the status is as it was before so message #6 appears (like message #1) whereas message #7 doesn't (just like message #2).

If we run the resulting script, the result is as follows:

    $ python logctx.py
    1. This should appear just once on stderr.
    3. This should appear once on stderr.
    5. This should appear twice - once on stderr and once on stdout.
    5. This should appear twice - once on stderr and once on stdout.
    6. This should appear just once on stderr.

If we run it again, but pipe stderr to /dev/null, we see the following, which is the only message written to stdout:

    $ python logctx.py 2>/dev/null
    5. This should appear twice - once on stderr and once on stdout.

Once again, but piping stdout to /dev/null, we get:

    $ python logctx.py >/dev/null
    1. This should appear just once on stderr.
    3. This should appear once on stderr.
    5. This should appear twice - once on stderr and once on stdout.
    6. This should appear just once on stderr.

In this case, the message #5 printed to stdout doesn't appear, as expected.

Of course, the approach described here can be generalised, for example to attach logging filters temporarily.
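As a small sketch of that generalisation (FilterContext is a hypothetical name, not a stdlib class), the same enter/exit pattern used by LoggingContext can attach a Filter only for the duration of a with block:

```python
import logging

class FilterContext:
    """Hypothetical helper: attach a filter to a logger within a with-block."""
    def __init__(self, logger, filter_):
        self.logger = logger
        self.filter = filter_

    def __enter__(self):
        self.logger.addFilter(self.filter)

    def __exit__(self, et, ev, tb):
        self.logger.removeFilter(self.filter)
        # implicit return of None => don't swallow exceptions

logger = logging.getLogger('demo')
logger.setLevel(logging.INFO)
logger.propagate = False
collected = []

class ListHandler(logging.Handler):
    def emit(self, record):
        collected.append(record.getMessage())

logger.addHandler(ListHandler())

only_sub = logging.Filter('demo.sub')   # passes only records from the 'demo.sub' subtree
with FilterContext(logger, only_sub):
    logger.info('suppressed')           # rejected: record name 'demo' is not under 'demo.sub'
logger.info('kept')                     # the filter has been removed again
```

Only the second message survives, showing that the filter was active solely inside the block.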
Note that the LoggingContext code above works in Python 2 as well as Python 3.

A CLI application starter template¶

Here's an example which shows how you can:

- Use a logging level based on command-line arguments
- Dispatch to multiple subcommands in separate files, all logging at the same level in a consistent way
- Make use of simple, minimal configuration

Suppose we have a command-line application whose job is to stop, start or restart some services. This could be organised for the purposes of illustration as a file app.py that is the main script for the application, with individual commands implemented in start.py, stop.py and restart.py. Suppose further that we want to control the verbosity of the application via a command-line argument, defaulting to logging.INFO. Here's one way that app.py could be written:

    import argparse
    import importlib
    import logging
    import os
    import sys

    def main(args=None):
        scriptname = os.path.basename(__file__)
        parser = argparse.ArgumentParser(scriptname)
        levels = ('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL')
        parser.add_argument('--log-level', default='INFO', choices=levels)
        subparsers = parser.add_subparsers(dest='command',
                                           help='Available commands:')
        start_cmd = subparsers.add_parser('start', help='Start a service')
        start_cmd.add_argument('name', metavar='NAME',
                               help='Name of service to start')
        stop_cmd = subparsers.add_parser('stop',
                                         help='Stop one or more services')
        stop_cmd.add_argument('names', metavar='NAME', nargs='+',
                              help='Name of service to stop')
        restart_cmd = subparsers.add_parser('restart',
                                            help='Restart one or more services')
        restart_cmd.add_argument('names', metavar='NAME', nargs='+',
                                 help='Name of service to restart')
        options = parser.parse_args()
        # the code to dispatch commands could all be in this file.
For the purposes
        # of illustration only, we implement each command in a separate module.
        try:
            mod = importlib.import_module(options.command)
            cmd = getattr(mod, 'command')
        except (ImportError, AttributeError):
            print('Unable to find the code for command \'%s\'' % options.command)
            return 1
        # Could get fancy here and load configuration from file or dictionary
        logging.basicConfig(level=options.log_level,
                            format='%(levelname)s %(name)s %(message)s')
        cmd(options)

    if __name__ == '__main__':
        sys.exit(main())

And the start, stop and restart commands can be implemented in separate modules, like so for starting:

    # start.py
    import logging

    logger = logging.getLogger(__name__)

    def command(options):
        logger.debug('About to start %s', options.name)
        # actually do the command processing here ...
        logger.info('Started the \'%s\' service.', options.name)

and thus for stopping:

    # stop.py
    import logging

    logger = logging.getLogger(__name__)

    def command(options):
        n = len(options.names)
        if n == 1:
            plural = ''
            services = '\'%s\'' % options.names[0]
        else:
            plural = 's'
            services = ', '.join('\'%s\'' % name for name in options.names)
            i = services.rfind(', ')
            services = services[:i] + ' and ' + services[i + 2:]
        logger.debug('About to stop %s', services)
        # actually do the command processing here ...
        logger.info('Stopped the %s service%s.', services, plural)

and similarly for restarting:

    # restart.py
    import logging

    logger = logging.getLogger(__name__)

    def command(options):
        n = len(options.names)
        if n == 1:
            plural = ''
            services = '\'%s\'' % options.names[0]
        else:
            plural = 's'
            services = ', '.join('\'%s\'' % name for name in options.names)
            i = services.rfind(', ')
            services = services[:i] + ' and ' + services[i + 2:]
        logger.debug('About to restart %s', services)
        # actually do the command processing here ...
        logger.info('Restarted the %s service%s.', services, plural)

If we run this application with the default log level, we get
output like this:

    $ python app.py start foo
    INFO start Started the 'foo' service.
    $ python app.py stop foo bar
    INFO stop Stopped the 'foo' and 'bar' services.
    $ python app.py restart foo bar baz
    INFO restart Restarted the 'foo', 'bar' and 'baz' services.

The first word is the logging level, and the second word is the module or package name of the place where the event was logged.

If we change the logging level, then we can change the information sent to the log. For example, if we want more information:

    $ python app.py --log-level DEBUG start foo
    DEBUG start About to start foo
    INFO start Started the 'foo' service.
    $ python app.py --log-level DEBUG stop foo bar
    DEBUG stop About to stop 'foo' and 'bar'
    INFO stop Stopped the 'foo' and 'bar' services.
    $ python app.py --log-level DEBUG restart foo bar baz
    DEBUG restart About to restart 'foo', 'bar' and 'baz'
    INFO restart Restarted the 'foo', 'bar' and 'baz' services.

And if we want less:

    $ python app.py --log-level WARNING start foo
    $ python app.py --log-level WARNING stop foo bar
    $ python app.py --log-level WARNING restart foo bar baz

In this case, the commands don't print anything to the console, since nothing at WARNING level or above is logged by them.

A Qt GUI for logging¶

A question that comes up from time to time is about how to log to a GUI application. The Qt framework is a popular cross-platform UI framework with Python bindings using PySide2 or PyQt5 libraries.

The following example shows how to log to a Qt GUI. This introduces a simple QtHandler class which takes a callable, which should be a slot in the main thread that does GUI updates.
A worker thread is also created to show how you\ncan log to the GUI from both the UI itself (via a button for manual logging)\nas well as a worker thread doing work in the background (here, just logging\nmessages at random levels with random short delays in between).\nThe worker thread is implemented using Qt\u2019s QThread\nclass rather than the\nthreading\nmodule, as there are circumstances where one has to use\nQThread\n, which offers better integration with other Qt\ncomponents.\nThe code should work with recent releases of any of PySide6\n, PyQt6\n,\nPySide2\nor PyQt5\n. You should be able to adapt the approach to earlier\nversions of Qt. Please refer to the comments in the code snippet for more\ndetailed information.\nimport datetime\nimport logging\nimport random\nimport sys\nimport time\n# Deal with minor differences between different Qt packages\ntry:\nfrom PySide6 import QtCore, QtGui, QtWidgets\nSignal = QtCore.Signal\nSlot = QtCore.Slot\nexcept ImportError:\ntry:\nfrom PyQt6 import QtCore, QtGui, QtWidgets\nSignal = QtCore.pyqtSignal\nSlot = QtCore.pyqtSlot\nexcept ImportError:\ntry:\nfrom PySide2 import QtCore, QtGui, QtWidgets\nSignal = QtCore.Signal\nSlot = QtCore.Slot\nexcept ImportError:\nfrom PyQt5 import QtCore, QtGui, QtWidgets\nSignal = QtCore.pyqtSignal\nSlot = QtCore.pyqtSlot\nlogger = logging.getLogger(__name__)\n#\n# Signals need to be contained in a QObject or subclass in order to be correctly\n# initialized.\n#\nclass Signaller(QtCore.QObject):\nsignal = Signal(str, logging.LogRecord)\n#\n# Output to a Qt GUI is only supposed to happen on the main thread. So, this\n# handler is designed to take a slot function which is set up to run in the main\n# thread. In this example, the function takes a string argument which is a\n# formatted log message, and the log record which generated it. 
The formatted\n# string is just a convenience - you could format a string for output any way\n# you like in the slot function itself.\n#\n# You specify the slot function to do whatever GUI updates you want. The handler\n# doesn't know or care about specific UI elements.\n#\nclass QtHandler(logging.Handler):\ndef __init__(self, slotfunc, *args, **kwargs):\nsuper().__init__(*args, **kwargs)\nself.signaller = Signaller()\nself.signaller.signal.connect(slotfunc)\ndef emit(self, record):\ns = self.format(record)\nself.signaller.signal.emit(s, record)\n#\n# This example uses QThreads, which means that the threads at the Python level\n# are named something like \"Dummy-1\". The function below gets the Qt name of the\n# current thread.\n#\ndef ctname():\nreturn QtCore.QThread.currentThread().objectName()\n#\n# Used to generate random levels for logging.\n#\nLEVELS = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,\nlogging.CRITICAL)\n#\n# This worker class represents work that is done in a thread separate to the\n# main thread. The way the thread is kicked off to do work is via a button press\n# that connects to a slot in the worker.\n#\n# Because the default threadName value in the LogRecord isn't much use, we add\n# a qThreadName which contains the QThread name as computed above, and pass that\n# value in an \"extra\" dictionary which is used to update the LogRecord with the\n# QThread name.\n#\n# This example worker just outputs messages sequentially, interspersed with\n# random delays of the order of a few seconds.\n#\nclass Worker(QtCore.QObject):\n@Slot()\ndef start(self):\nextra = {'qThreadName': ctname() }\nlogger.debug('Started work', extra=extra)\ni = 1\n# Let the thread run until interrupted. 
This allows reasonably clean\n# thread termination.\nwhile not QtCore.QThread.currentThread().isInterruptionRequested():\ndelay = 0.5 + random.random() * 2\ntime.sleep(delay)\ntry:\nif random.random() < 0.1:\nraise ValueError('Exception raised: %d' % i)\nelse:\nlevel = random.choice(LEVELS)\nlogger.log(level, 'Message after delay of %3.1f: %d', delay, i, extra=extra)\nexcept ValueError as e:\nlogger.exception('Failed: %s', e, extra=extra)\ni += 1\n#\n# Implement a simple UI for this cookbook example. This contains:\n#\n# * A read-only text edit window which holds formatted log messages\n# * A button to start work and log stuff in a separate thread\n# * A button to log something from the main thread\n# * A button to clear the log window\n#\nclass Window(QtWidgets.QWidget):\nCOLORS = {\nlogging.DEBUG: 'black',\nlogging.INFO: 'blue',\nlogging.WARNING: 'orange',\nlogging.ERROR: 'red',\nlogging.CRITICAL: 'purple',\n}\ndef __init__(self, app):\nsuper().__init__()\nself.app = app\nself.textedit = te = QtWidgets.QPlainTextEdit(self)\n# Set whatever the default monospace font is for the platform\nf = QtGui.QFont('nosuchfont')\nif hasattr(f, 'Monospace'):\nf.setStyleHint(f.Monospace)\nelse:\nf.setStyleHint(f.StyleHint.Monospace) # for Qt6\nte.setFont(f)\nte.setReadOnly(True)\nPB = QtWidgets.QPushButton\nself.work_button = PB('Start background work', self)\nself.log_button = PB('Log a message at a random level', self)\nself.clear_button = PB('Clear log window', self)\nself.handler = h = QtHandler(self.update_status)\n# Remember to use qThreadName rather than threadName in the format string.\nfs = '%(asctime)s %(qThreadName)-12s %(levelname)-8s %(message)s'\nformatter = logging.Formatter(fs)\nh.setFormatter(formatter)\nlogger.addHandler(h)\n# Set up to terminate the QThread when we exit\napp.aboutToQuit.connect(self.force_quit)\n# Lay out all the widgets\nlayout = 
QtWidgets.QVBoxLayout(self)\nlayout.addWidget(te)\nlayout.addWidget(self.work_button)\nlayout.addWidget(self.log_button)\nlayout.addWidget(self.clear_button)\nself.setFixedSize(900, 400)\n# Connect the non-worker slots and signals\nself.log_button.clicked.connect(self.manual_update)\nself.clear_button.clicked.connect(self.clear_display)\n# Start a new worker thread and connect the slots for the worker\nself.start_thread()\nself.work_button.clicked.connect(self.worker.start)\n# Once started, the button should be disabled\nself.work_button.clicked.connect(lambda : self.work_button.setEnabled(False))\ndef start_thread(self):\nself.worker = Worker()\nself.worker_thread = QtCore.QThread()\nself.worker.setObjectName('Worker')\nself.worker_thread.setObjectName('WorkerThread') # for qThreadName\nself.worker.moveToThread(self.worker_thread)\n# This will start an event loop in the worker thread\nself.worker_thread.start()\ndef kill_thread(self):\n# Just tell the worker to stop, then tell it to quit and wait for that\n# to happen\nself.worker_thread.requestInterruption()\nif self.worker_thread.isRunning():\nself.worker_thread.quit()\nself.worker_thread.wait()\nelse:\nprint('worker has already exited.')\ndef force_quit(self):\n# For use when the window is closed\nif self.worker_thread.isRunning():\nself.kill_thread()\n# The functions below update the UI and run in the main thread because\n# that's where the slots are set up\n@Slot(str, logging.LogRecord)\ndef update_status(self, status, record):\ncolor = self.COLORS.get(record.levelno, 'black')\ns = '
<pre><font color=\"%s\">%s</font></pre>
' % (color, status)\nself.textedit.appendHtml(s)\n@Slot()\ndef manual_update(self):\n# This function uses the formatted message passed in, but also uses\n# information from the record to format the message in an appropriate\n# color according to its severity (level).\nlevel = random.choice(LEVELS)\nextra = {'qThreadName': ctname() }\nlogger.log(level, 'Manually logged!', extra=extra)\n@Slot()\ndef clear_display(self):\nself.textedit.clear()\ndef main():\nQtCore.QThread.currentThread().setObjectName('MainThread')\nlogging.getLogger().setLevel(logging.DEBUG)\napp = QtWidgets.QApplication(sys.argv)\nexample = Window(app)\nexample.show()\nif hasattr(app, 'exec'):\nrc = app.exec()\nelse:\nrc = app.exec_()\nsys.exit(rc)\nif __name__=='__main__':\nmain()\nLogging to syslog with RFC5424 support\u00b6\nAlthough RFC 5424 dates from 2009, most syslog servers are configured by default to\nuse the older RFC 3164, which hails from 2001. When logging\nwas added to Python\nin 2003, it supported the earlier (and only existing) protocol at the time. 
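As context for what follows, attaching the stock handler unchanged gives you that older RFC 3164-era behaviour; a minimal sketch (the UDP address, facility, and logger name here are illustrative assumptions, not part of the original example):

```python
import logging
import logging.handlers

# The stock SysLogHandler emits RFC 3164-style messages by default.
# ('localhost', 514) is plain UDP syslog and LOG_USER is the default
# facility; both, like the logger name, are illustrative choices.
logger = logging.getLogger('rfc3164-demo')
logger.setLevel(logging.INFO)
handler = logging.handlers.SysLogHandler(
    address=('localhost', 514),
    facility=logging.handlers.SysLogHandler.LOG_USER)
logger.addHandler(handler)
logger.info('Started the %r service.', 'foo')
```

With no syslog daemon listening, the UDP datagrams are typically just dropped, so this is harmless to try locally.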
Since\nRFC5424 came out, as there has not been widespread deployment of it in syslog\nservers, the SysLogHandler\nfunctionality has not been\nupdated.\nRFC 5424 contains some useful features such as support for structured data, and if you need to be able to log to a syslog server with support for it, you can do so with a subclassed handler which looks something like this:\nimport datetime\nimport logging.handlers\nimport re\nimport socket\nimport time\nclass SysLogHandler5424(logging.handlers.SysLogHandler):\ntz_offset = re.compile(r'([+-]\\d{2})(\\d{2})$')\nescaped = re.compile(r'([\\]\"\\\\])')\ndef __init__(self, *args, **kwargs):\nself.msgid = kwargs.pop('msgid', None)\nself.appname = kwargs.pop('appname', None)\nsuper().__init__(*args, **kwargs)\ndef format(self, record):\nversion = 1\nasctime = datetime.datetime.fromtimestamp(record.created).isoformat()\nm = self.tz_offset.match(time.strftime('%z'))\nhas_offset = False\nif m and time.timezone:\nhrs, mins = m.groups()\nif int(hrs) or int(mins):\nhas_offset = True\nif not has_offset:\nasctime += 'Z'\nelse:\nasctime += f'{hrs}:{mins}'\ntry:\nhostname = socket.gethostname()\nexcept Exception:\nhostname = '-'\nappname = self.appname or '-'\nprocid = record.process\nmsgid = '-'\nmsg = super().format(record)\nsdata = '-'\nif hasattr(record, 'structured_data'):\nsd = record.structured_data\n# This should be a dict where the keys are SD-ID and the value is a\n# dict mapping PARAM-NAME to PARAM-VALUE (refer to the RFC for what these\n# mean)\n# There's no error checking here - it's purely for illustration, and you\n# can adapt this code for use in production environments\nparts = []\ndef replacer(m):\ng = m.groups()\nreturn '\\\\' + g[0]\nfor sdid, dv in sd.items():\npart = f'[{sdid}'\nfor k, v in dv.items():\ns = str(v)\ns = self.escaped.sub(replacer, s)\npart += f' {k}=\"{s}\"'\npart += ']'\nparts.append(part)\nsdata = ''.join(parts)\nreturn f'{version} {asctime} {hostname} {appname} {procid} {msgid} {sdata} 
{msg}'\nYou\u2019ll need to be familiar with RFC 5424 to fully understand the above code, and it may be that you have slightly different needs (e.g. for how you pass structured data to the log). Nevertheless, the above should be adaptable to your specific needs. With the above handler, you\u2019d pass structured data using something like this:\nsd = {\n'foo@12345': {'bar': 'baz', 'baz': 'bozz', 'fizz': r'buzz'},\n'foo@54321': {'rab': 'baz', 'zab': 'bozz', 'zzif': r'buzz'}\n}\nextra = {'structured_data': sd}\ni = 1\nlogger.debug('Message %d', i, extra=extra)\nHow to treat a logger like an output stream\u00b6\nSometimes, you need to interface to a third-party API which expects a file-like object to write to, but you want to direct the API\u2019s output to a logger. You can do this using a class which wraps a logger with a file-like API. Here\u2019s a short script illustrating such a class:\nimport logging\nclass LoggerWriter:\ndef __init__(self, logger, level):\nself.logger = logger\nself.level = level\ndef write(self, message):\nif message != '\\n': # avoid printing bare newlines, if you like\nself.logger.log(self.level, message)\ndef flush(self):\n# doesn't actually do anything, but might be expected of a file-like\n# object - so optional depending on your situation\npass\ndef close(self):\n# doesn't actually do anything, but might be expected of a file-like\n# object - so optional depending on your situation. 
You might want\n# to set a flag so that later calls to write raise an exception\npass\ndef main():\nlogging.basicConfig(level=logging.DEBUG)\nlogger = logging.getLogger('demo')\ninfo_fp = LoggerWriter(logger, logging.INFO)\ndebug_fp = LoggerWriter(logger, logging.DEBUG)\nprint('An INFO message', file=info_fp)\nprint('A DEBUG message', file=debug_fp)\nif __name__ == \"__main__\":\nmain()\nWhen this script is run, it prints\nINFO:demo:An INFO message\nDEBUG:demo:A DEBUG message\nYou could also use LoggerWriter\nto redirect sys.stdout\nand\nsys.stderr\nby doing something like this:\nimport sys\nsys.stdout = LoggerWriter(logger, logging.INFO)\nsys.stderr = LoggerWriter(logger, logging.WARNING)\nYou should do this after configuring logging for your needs. In the above\nexample, the basicConfig()\ncall does this (using the\nsys.stderr\nvalue before it is overwritten by a LoggerWriter\ninstance). Then, you\u2019d get this kind of result:\n>>> print('Foo')\nINFO:demo:Foo\n>>> print('Bar', file=sys.stderr)\nWARNING:demo:Bar\n>>>\nOf course, the examples above show output according to the format used by\nbasicConfig()\n, but you can use a different formatter when you\nconfigure logging.\nNote that with the above scheme, you are somewhat at the mercy of buffering and\nthe sequence of write calls which you are intercepting. For example, with the\ndefinition of LoggerWriter\nabove, if you have the snippet\nsys.stderr = LoggerWriter(logger, logging.WARNING)\n1 / 0\nthen running the script results in\nWARNING:demo:Traceback (most recent call last):\nWARNING:demo: File \"/home/runner/cookbook-loggerwriter/test.py\", line 53, in <module>\nWARNING:demo:\nWARNING:demo:main()\nWARNING:demo: File \"/home/runner/cookbook-loggerwriter/test.py\", line 49, in main\nWARNING:demo:\nWARNING:demo:1 / 0\nWARNING:demo:ZeroDivisionError\nWARNING:demo::\nWARNING:demo:division by zero\nAs you can see, this output isn\u2019t ideal. 
That\u2019s because the underlying code\nwhich writes to sys.stderr\nmakes multiple writes, each of which results in a\nseparate logged line (for example, the last three lines above). To get around\nthis problem, you need to buffer things and only output log lines when newlines\nare seen. Let\u2019s use a slightly better implementation of LoggerWriter\n:\nclass BufferingLoggerWriter(LoggerWriter):\ndef __init__(self, logger, level):\nsuper().__init__(logger, level)\nself.buffer = ''\ndef write(self, message):\nif '\\n' not in message:\nself.buffer += message\nelse:\nparts = message.split('\\n')\nif self.buffer:\ns = self.buffer + parts.pop(0)\nself.logger.log(self.level, s)\nself.buffer = parts.pop()\nfor part in parts:\nself.logger.log(self.level, part)\nThis just buffers up stuff until a newline is seen, and then logs complete lines. With this approach, you get better output:\nWARNING:demo:Traceback (most recent call last):\nWARNING:demo: File \"/home/runner/cookbook-loggerwriter/main.py\", line 55, in <module>\nWARNING:demo: main()\nWARNING:demo: File \"/home/runner/cookbook-loggerwriter/main.py\", line 52, in main\nWARNING:demo: 1/0\nWARNING:demo:ZeroDivisionError: division by zero\nHow to uniformly handle newlines in logging output\u00b6\nUsually, messages that are logged (say to console or file) consist of a single line of text. However, sometimes there is a need to handle messages with multiple lines - whether because a logging format string contains newlines, or logged data contains newlines. 
If you want to handle such messages uniformly, so that each line in the logged message appears uniformly formatted as if it was logged separately, you can do this using a handler mixin, as in the following snippet:\n# Assume this is in a module mymixins.py\nimport copy\nclass MultilineMixin:\ndef emit(self, record):\ns = record.getMessage()\nif '\\n' not in s:\nsuper().emit(record)\nelse:\nlines = s.splitlines()\nrec = copy.copy(record)\nrec.args = None\nfor line in lines:\nrec.msg = line\nsuper().emit(rec)\nYou can use the mixin as in the following script:\nimport logging\nfrom mymixins import MultilineMixin\nlogger = logging.getLogger(__name__)\nclass StreamHandler(MultilineMixin, logging.StreamHandler):\npass\nif __name__ == '__main__':\nlogging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(levelname)-9s %(message)s',\nhandlers = [StreamHandler()])\nlogger.debug('Single line')\nlogger.debug('Multiple lines:\\nfool me once ...')\nlogger.debug('Another single line')\nlogger.debug('Multiple lines:\\n%s', 'fool me ...\\ncan\\'t get fooled again')\nThe script, when run, prints something like:\n2025-07-02 13:54:47,234 DEBUG Single line\n2025-07-02 13:54:47,234 DEBUG Multiple lines:\n2025-07-02 13:54:47,234 DEBUG fool me once ...\n2025-07-02 13:54:47,234 DEBUG Another single line\n2025-07-02 13:54:47,234 DEBUG Multiple lines:\n2025-07-02 13:54:47,234 DEBUG fool me ...\n2025-07-02 13:54:47,234 DEBUG can't get fooled again\nIf, on the other hand, you are concerned about log injection, you can use a formatter which escapes newlines, as per the following example:\nimport logging\nlogger = logging.getLogger(__name__)\nclass EscapingFormatter(logging.Formatter):\ndef format(self, record):\ns = super().format(record)\nreturn s.replace('\\n', r'\\n')\nif __name__ == '__main__':\nh = logging.StreamHandler()\nh.setFormatter(EscapingFormatter('%(asctime)s %(levelname)-9s %(message)s'))\nlogging.basicConfig(level=logging.DEBUG, handlers = [h])\nlogger.debug('Single 
line')\nlogger.debug('Multiple lines:\\nfool me once ...')\nlogger.debug('Another single line')\nlogger.debug('Multiple lines:\\n%s', 'fool me ...\\ncan\\'t get fooled again')\nYou can, of course, use whatever escaping scheme makes the most sense for you. The script, when run, should produce output like this:\n2025-07-09 06:47:33,783 DEBUG Single line\n2025-07-09 06:47:33,783 DEBUG Multiple lines:\\nfool me once ...\n2025-07-09 06:47:33,783 DEBUG Another single line\n2025-07-09 06:47:33,783 DEBUG Multiple lines:\\nfool me ...\\ncan't get fooled again\nEscaping behaviour can\u2019t be the stdlib default, as it would break backwards compatibility.\nPatterns to avoid\u00b6\nAlthough the preceding sections have described ways of doing things you might need to do or deal with, it is worth mentioning some usage patterns which are unhelpful, and which should therefore be avoided in most cases. The following sections are in no particular order.\nOpening the same log file multiple times\u00b6\nOn Windows, you will generally not be able to open the same file multiple times as this will lead to a \u201cfile is in use by another process\u201d error. However, on POSIX platforms you\u2019ll not get any errors if you open the same file multiple times. This could be done accidentally, for example by:\nAdding a file handler more than once which references the same file (e.g. by a copy/paste/forget-to-change error).\nOpening two files that look different, as they have different names, but are the same because one is a symbolic link to the other.\nForking a process, following which both parent and child have a reference to the same file. This might be through use of the\nmultiprocessing\nmodule, for example.\nOpening a file multiple times might appear to work most of the time, but can lead to a number of problems in practice:\nLogging output can be garbled because multiple threads or processes try to write to the same file. 
Although logging guards against concurrent use of the same handler instance by multiple threads, there is no such protection if concurrent writes are attempted by two different threads using two different handler instances which happen to point to the same file.\nAn attempt to delete a file (e.g. during file rotation) silently fails, because there is another reference pointing to it. This can lead to confusion and wasted debugging time - log entries end up in unexpected places, or are lost altogether. Or a file that was supposed to be moved remains in place, and grows in size unexpectedly despite size-based rotation being supposedly in place.\nUse the techniques outlined in Logging to a single file from multiple processes to circumvent such issues.\nUsing loggers as attributes in a class or passing them as parameters\u00b6\nWhile there might be unusual cases where you\u2019ll need to do this, in general\nthere is no point because loggers are singletons. Code can always access a\ngiven logger instance by name using logging.getLogger(name)\n, so passing\ninstances around and holding them as instance attributes is pointless. Note\nthat in other languages such as Java and C#, loggers are often static class\nattributes. However, this pattern doesn\u2019t make sense in Python, where the\nmodule (and not the class) is the unit of software decomposition.\nAdding handlers other than NullHandler\nto a logger in a library\u00b6\nConfiguring logging by adding handlers, formatters and filters is the\nresponsibility of the application developer, not the library developer. If you\nare maintaining a library, ensure that you don\u2019t add handlers to any of your\nloggers other than a NullHandler\ninstance.\nCreating a lot of loggers\u00b6\nLoggers are singletons that are never freed during a script execution, and so creating lots of loggers will use up memory which can\u2019t then be freed. Rather than create a logger per e.g. 
file processed or network connection made, use the existing mechanisms for passing contextual information into your logs and restrict the loggers created to those describing areas within your application (generally modules, but occasionally slightly more fine-grained than that).\nOther resources\u00b6\nSee also\n- Module\nlogging\nAPI reference for the logging module.\n- Module\nlogging.config\nConfiguration API for the logging module.\n- Module\nlogging.handlers\nUseful handlers included with the logging module.", "code_snippets": ["\n", "\n\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n\n", "\n", " ", " ", "\n\n", "\n ", "\n ", " ", " ", "\n ", "\n\n ", "\n ", "\n ", " ", " ", " ", " ", "\n ", "\n\n", "\n ", "\n", "\n", "\n", "\n\n", "\n ", " ", " ", "\n ", "\n ", "\n\n", "\n ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", "\n ", " ", "\n ", "\n ", "\n ", "\n ", " ", "\n ", " ", " ", "\n ", "\n ", "\n\n", " ", " ", " ", "\n ", "\n", "\n\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n\n", "\n", "\n ", "\n ", "\n ", "\n ", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n\n", "\n", "\n\n", "\n", "\n\n", " ", " ", "\n", " ", " ", "\n\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n\n", "\n", "\n\n", "\n", " ", " ", "\n", "\n\n", " ", " ", "\n\n", "\n ", "\n ", "\n ", " ", "\n ", "\n ", "\n ", "\n ", "\n ", "\n ", "\n", " ", "\n ", "\n ", "\n ", "\n", "\n", "\n\n", " ", " ", " ", " ", "\n ", " ", " ", "\n\n", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", "\n", "\n", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", 
"\n", "\n ", " ", " ", " ", "\n\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n ", " ", " ", " ", "\n ", " ", "\n ", " ", " ", "\n ", "\n\n ", " ", "\n ", " ", " ", "\n ", " ", "\n\n", "\n", "\n", "\n", "\n", "\n", "\n ", " ", "\n\n\n", "\n", "\n", "\n", " ", " ", " ", " ", " ", "\n ", "\n\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n ", "\n ", "\n ", " ", " ", " ", " ", "\n ", " ", "\n ", " ", " ", "\n ", "\n ", "\n ", " ", " ", "\n ", " ", " ", " ", " ", " ", " ", "\n ", "\n ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", "\n ", " ", " ", "\n ", " ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n\n ", " ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", "\n\n ", " ", "\n ", "\n ", " ", " ", "\n ", " ", " ", " ", " ", "\n ", "\n ", " ", " ", "\n ", " ", " ", "\n ", "\n ", "\n ", " ", "\n ", "\n ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", " ", "\n ", "\n ", " ", " ", "\n ", " ", " ", "\n ", "\n ", "\n ", "\n ", "\n\n ", "\n ", " ", " ", "\n ", "\n ", "\n ", "\n ", "\n ", " ", "\n\n ", "\n ", "\n ", "\n\n ", "\n ", "\n ", "\n ", "\n ", " ", " ", "\n\n ", "\n ", " ", " ", "\n ", " ", " ", "\n ", "\n ", " ", "\n ", "\n ", "\n ", "\n\n ", "\n ", "\n ", "\n ", "\n ", " ", "\n ", "\n ", "\n ", "\n ", "\n\n ", "\n ", "\n ", " ", "\n ", "\n\n ", "\n ", "\n\n ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", "\n ", "\n\n ", "\n ", "\n ", "\n ", "\n ", "\n ", " ", " ", "\n ", " ", " ", " ", " ", "\n ", " ", " ", "\n\n ", "\n ", "\n ", "\n\n\n", "\n ", "\n ", "\n ", " ", " ", "\n ", " ", " ", "\n ", "\n ", " ", " ", "\n ", " ", " ", "\n ", "\n ", " ", " ", "\n ", "\n\n", " ", "\n ", "\n", "\n", "\n", "\n", "\n", "\n\n", "\n\n ", " ", " ", "\n ", " ", " ", "\n\n ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", 
" ", " ", "\n ", " ", "\n\n ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", "\n ", " ", " ", "\n ", "\n ", " ", " ", "\n ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", "\n ", "\n ", "\n ", "\n ", "\n ", " ", " ", "\n\n ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n\n ", " ", " ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", "\n ", " ", " ", "\n ", " ", "\n", " ", " ", "\n ", " ", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", " ", "\n", "\n", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 33788} +{"url": "https://docs.python.org/3/c-api/reflection.html", "title": "Reflection", "content": "Reflection\u00b6\n-\nPyObject *PyEval_GetBuiltins(void)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nDeprecated since version 3.13: Use\nPyEval_GetFrameBuiltins()\ninstead.Return a dictionary of the builtins in the current execution frame, or the interpreter of the thread state if no frame is currently executing.\n-\nPyObject *PyEval_GetLocals(void)\u00b6\n- Return value: Borrowed reference. 
Part of the Stable ABI.\nDeprecated since version 3.13: Use either\nPyEval_GetFrameLocals()\nto obtain the same behaviour as calling\nlocals()\nin Python code, or else call\nPyFrame_GetLocals()\non the result of\nPyEval_GetFrame()\nto access the\nf_locals\nattribute of the currently executing frame. Return a mapping providing access to the local variables in the current execution frame, or\nNULL\nif no frame is currently executing. Refer to\nlocals()\nfor details of the mapping returned at different scopes. As this function returns a borrowed reference, the dictionary returned for optimized scopes is cached on the frame object and will remain alive as long as the frame object does. Unlike\nPyEval_GetFrameLocals()\nand\nlocals()\n, subsequent calls to this function in the same frame will update the contents of the cached dictionary to reflect changes in the state of the local variables rather than returning a new snapshot. Changed in version 3.13: As part of PEP 667,\nPyFrame_GetLocals()\n,\nlocals()\n, and\nFrameType.f_locals\nno longer make use of the shared cache dictionary. Refer to the What\u2019s New entry for additional details.\n-\nPyObject *PyEval_GetGlobals(void)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nDeprecated since version 3.13: Use\nPyEval_GetFrameGlobals()\ninstead. Return a dictionary of the global variables in the current execution frame, or\nNULL\nif no frame is currently executing.\n-\nPyFrameObject *PyEval_GetFrame(void)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nReturn the attached thread state\u2019s frame, which is\nNULL\nif no frame is currently executing. See also\nPyThreadState_GetFrame()\n.\n-\nPyObject *PyEval_GetFrameBuiltins(void)\u00b6\n- Return value: New reference. 
Part of the Stable ABI since version 3.13.\nReturn a dictionary of the builtins in the current execution frame, or the interpreter of the thread state if no frame is currently executing.\nAdded in version 3.13.\n-\nPyObject *PyEval_GetFrameLocals(void)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.13.\nReturn a dictionary of the local variables in the current execution frame, or\nNULL\nif no frame is currently executing. Equivalent to calling\nlocals()\nin Python code. To access\nf_locals\non the current frame without making an independent snapshot in optimized scopes, call\nPyFrame_GetLocals()\non the result of\nPyEval_GetFrame()\n. Added in version 3.13.\n-\nPyObject *PyEval_GetFrameGlobals(void)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.13.\nReturn a dictionary of the global variables in the current execution frame, or\nNULL\nif no frame is currently executing. Equivalent to calling\nglobals()\nin Python code. Added in version 3.13.\n-\nconst char *PyEval_GetFuncName(PyObject *func)\u00b6\n- Part of the Stable ABI.\nReturn the name of func if it is a function, class or instance object, else the name of func\u2019s type.\n-\nconst char *PyEval_GetFuncDesc(PyObject *func)\u00b6\n- Part of the Stable ABI.\nReturn a description string, depending on the type of func. Return values include \u201c()\u201d for functions and methods, \u201c constructor\u201d, \u201c instance\u201d, and \u201c object\u201d. Concatenated with the result of\nPyEval_GetFuncName()\n, the result will be a description of func.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 893}
+{"url": "https://docs.python.org/3/c-api/set.html", "title": "Set Objects", "content": "Set Objects\u00b6\nThis section details the public API for set\nand frozenset\nobjects. 
Any functionality not listed below is best accessed using either\nthe abstract object protocol (including PyObject_CallMethod()\n,\nPyObject_RichCompareBool()\n, PyObject_Hash()\n,\nPyObject_Repr()\n, PyObject_IsTrue()\n, PyObject_Print()\n, and\nPyObject_GetIter()\n) or the abstract number protocol (including\nPyNumber_And()\n, PyNumber_Subtract()\n, PyNumber_Or()\n,\nPyNumber_Xor()\n, PyNumber_InPlaceAnd()\n,\nPyNumber_InPlaceSubtract()\n, PyNumber_InPlaceOr()\n, and\nPyNumber_InPlaceXor()\n).\n-\ntype PySetObject\u00b6\nThis subtype of\nPyObject\nis used to hold the internal data for both\nset\nand\nfrozenset\nobjects. It is like a\nPyDictObject\nin that it is a fixed size for small sets (much like tuple storage) and will point to a separate, variable sized block of memory for medium and large sized sets (much like list storage). None of the fields of this structure should be considered public and all are subject to change. All access should be done through the documented API rather than by manipulating the values in the structure.\n-\nPyTypeObject PySet_Type\u00b6\n- Part of the Stable ABI.\nThis is an instance of\nPyTypeObject\nrepresenting the Python\nset\ntype.\n-\nPyTypeObject PyFrozenSet_Type\u00b6\n- Part of the Stable ABI.\nThis is an instance of\nPyTypeObject\nrepresenting the Python\nfrozenset\ntype.\nThe following type check macros work on pointers to any Python object. Likewise, the constructor functions work with any iterable Python object.\n-\nint PySet_Check(PyObject *p)\u00b6\nReturn true if p is a\nset\nobject or an instance of a subtype. This function always succeeds.\n-\nint PyFrozenSet_Check(PyObject *p)\u00b6\nReturn true if p is a\nfrozenset\nobject or an instance of a subtype. This function always succeeds.\n-\nint PyAnySet_Check(PyObject *p)\u00b6\nReturn true if p is a\nset\nobject, a\nfrozenset\nobject, or an instance of a subtype. 
This function always succeeds.\n-\nint PySet_CheckExact(PyObject *p)\u00b6\nReturn true if p is a\nset\nobject but not an instance of a subtype. This function always succeeds. Added in version 3.10.\n-\nint PyAnySet_CheckExact(PyObject *p)\u00b6\nReturn true if p is a\nset\nobject or a\nfrozenset\nobject but not an instance of a subtype. This function always succeeds.\n-\nint PyFrozenSet_CheckExact(PyObject *p)\u00b6\nReturn true if p is a\nfrozenset\nobject but not an instance of a subtype. This function always succeeds.\n-\nPyObject *PySet_New(PyObject *iterable)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new\nset\ncontaining objects returned by the iterable. The iterable may be\nNULL\nto create a new empty set. Return the new set on success or\nNULL\non failure. Raise\nTypeError\nif iterable is not actually iterable. The constructor is also useful for copying a set (c=set(s)\n).\n-\nPyObject *PyFrozenSet_New(PyObject *iterable)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new\nfrozenset\ncontaining objects returned by the iterable. The iterable may be\nNULL\nto create a new empty frozenset. Return the new set on success or\nNULL\non failure. Raise\nTypeError\nif iterable is not actually iterable.\nThe following functions and macros are available for instances of set\nor frozenset\nor instances of their subtypes.\n-\nPy_ssize_t PySet_Size(PyObject *anyset)\u00b6\n- Part of the Stable ABI.\nReturn the length of a\nset\nor\nfrozenset\nobject. Equivalent to\nlen(anyset)\n. Raises a\nSystemError\nif anyset is not a\nset\n,\nfrozenset\n, or an instance of a subtype.\n-\nPy_ssize_t PySet_GET_SIZE(PyObject *anyset)\u00b6\nMacro form of\nPySet_Size()\nwithout error checking.\n-\nint PySet_Contains(PyObject *anyset, PyObject *key)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nif found,\n0\nif not found, and\n-1\nif an error is encountered. 
Unlike the Python\n__contains__()\nmethod, this function does not automatically convert unhashable sets into temporary frozensets. Raise a\nTypeError\nif the key is unhashable. Raise\nSystemError\nif anyset is not a\nset\n,\nfrozenset\n, or an instance of a subtype.\n-\nint PySet_Add(PyObject *set, PyObject *key)\u00b6\n- Part of the Stable ABI.\nAdd key to a\nset\ninstance. Also works with\nfrozenset\ninstances (like\nPyTuple_SetItem()\nit can be used to fill in the values of brand new frozensets before they are exposed to other code). Return\n0\non success or\n-1\non failure. Raise a\nTypeError\nif the key is unhashable. Raise a\nMemoryError\nif there is no room to grow. Raise a\nSystemError\nif set is not an instance of\nset\nor its subtype.\nThe following functions are available for instances of set\nor its\nsubtypes but not for instances of frozenset\nor its subtypes.\n-\nint PySet_Discard(PyObject *set, PyObject *key)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nif found and removed,\n0\nif not found (no action taken), and\n-1\nif an error is encountered. Does not raise\nKeyError\nfor missing keys. Raise a\nTypeError\nif the key is unhashable. Unlike the Python\ndiscard()\nmethod, this function does not automatically convert unhashable sets into temporary frozensets. Raise\nSystemError\nif set is not an instance of\nset\nor its subtype.\n-\nPyObject *PySet_Pop(PyObject *set)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new reference to an arbitrary object in the set, and remove the object from the set. Return\nNULL\non failure. Raise\nKeyError\nif the set is empty. Raise a\nSystemError\nif set is not an instance of\nset\nor its subtype.\n-\nint PySet_Clear(PyObject *set)\u00b6\n- Part of the Stable ABI.\nEmpty an existing set of all elements. Return\n0\non success. 
Return\n-1\nand raise\nSystemError\nif set is not an instance of\nset\nor its subtype.\nDeprecated API\u00b6\n-\nPySet_MINSIZE\u00b6\nA soft deprecated constant representing the size of an internal preallocated table inside\nPySetObject\ninstances. This is documented solely for completeness, as there are no guarantees that a given version of CPython uses preallocated tables with a fixed size. In code that does not deal with unstable set internals,\nPySet_MINSIZE\ncan be replaced with a small constant like\n8\n. If looking for the size of a set, use\nPySet_Size()\ninstead.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1479}
+{"url": "https://docs.python.org/3/library/token.html", "title": "token \u2014 Constants used with Python parse trees", "content": "token\n\u2014 Constants used with Python parse trees\u00b6\nSource code: Lib/token.py\nThis module provides constants which represent the numeric values of leaf nodes\nof the parse tree (terminal tokens). Refer to the file Grammar/Tokens\nin the Python distribution for the definitions of the names in the context of\nthe language grammar. The specific numeric values which the names map to may\nchange between Python versions.\nThe module also provides a mapping from numeric codes to names and some functions. The functions mirror definitions in the Python C header files.\nNote that a token\u2019s value may depend on tokenizer options. 
For example, a\n\"+\"\ntoken may be reported as either PLUS\nor OP\n, or\na \"match\"\ntoken may be either NAME\nor SOFT_KEYWORD\n.\n- token.tok_name\u00b6\nDictionary mapping the numeric values of the constants defined in this module back to name strings, allowing more human-readable representation of parse trees to be generated.\n- token.ISTERMINAL(x)\u00b6\nReturn\nTrue\nfor terminal token values.\n- token.ISNONTERMINAL(x)\u00b6\nReturn\nTrue\nfor non-terminal token values.\n- token.ISEOF(x)\u00b6\nReturn\nTrue\nif x is the marker indicating the end of input.\nThe token constants are:\n- token.NAME\u00b6\nToken value that indicates an identifier or keyword.\n- token.NUMBER\u00b6\nToken value that indicates a numeric literal\n- token.STRING\u00b6\nToken value that indicates a string or byte literal, excluding formatted string literals. The token string is not interpreted: it includes the surrounding quotation marks and the prefix (if given); backslashes are included literally, without processing escape sequences.\n- token.OP\u00b6\nA generic token value that indicates an operator or delimiter.\nCPython implementation detail: This value is only reported by the\ntokenize\nmodule. Internally, the tokenizer uses exact token types instead.\n- token.COMMENT\u00b6\nToken value used to indicate a comment. The parser ignores\nCOMMENT\ntokens.\n- token.NEWLINE\u00b6\nToken value that indicates the end of a logical line.\n- token.NL\u00b6\nToken value used to indicate a non-terminating newline.\nNL\ntokens are generated when a logical line of code is continued over multiple physical lines. 
The parser ignores\nNL\ntokens.\n- token.INDENT\u00b6\nToken value used at the beginning of a logical line to indicate the start of an indented block.\n- token.DEDENT\u00b6\nToken value used at the beginning of a logical line to indicate the end of an indented block.\n- token.FSTRING_START\u00b6\nToken value used to indicate the beginning of an f-string literal.\nCPython implementation detail: The token string includes the prefix and the opening quote(s), but none of the contents of the literal.\n- token.FSTRING_MIDDLE\u00b6\nToken value used for literal text inside an f-string literal, including format specifications.\nCPython implementation detail: Replacement fields (that is, the non-literal parts of f-strings) use the same tokens as other expressions, and are delimited by\nLBRACE\n,\nRBRACE\n,\nEXCLAMATION\nand\nCOLON\ntokens.\n- token.FSTRING_END\u00b6\nToken value used to indicate the end of an f-string.\nCPython implementation detail: The token string contains the closing quote(s).\n- token.TSTRING_START\u00b6\nToken value used to indicate the beginning of a template string literal.\nCPython implementation detail: The token string includes the prefix and the opening quote(s), but none of the contents of the literal.\nAdded in version 3.14.\n- token.TSTRING_MIDDLE\u00b6\nToken value used for literal text inside a template string literal, including format specifications.\nCPython implementation detail: Replacement fields (that is, the non-literal parts of t-strings) use the same tokens as other expressions, and are delimited by\nLBRACE\n,\nRBRACE\n,\nEXCLAMATION\nand\nCOLON\ntokens. Added in version 3.14.\n- token.TSTRING_END\u00b6\nToken value used to indicate the end of a template string literal.\nCPython implementation detail: The token string contains the closing quote(s).\nAdded in version 3.14.\n- token.ENDMARKER\u00b6\nToken value that indicates the end of input. 
Used in top-level grammar rules.\n- token.ENCODING\u00b6\nToken value that indicates the encoding used to decode the source bytes into text. The first token returned by\ntokenize.tokenize()\nwill always be an\nENCODING\ntoken. CPython implementation detail: This token type isn\u2019t used by the C tokenizer but is needed for the\ntokenize\nmodule.\nThe following token types are not produced by the tokenize\nmodule,\nand are defined for special uses in the tokenizer or parser:\n- token.TYPE_IGNORE\u00b6\nToken value indicating that a\ntype: ignore\ncomment was recognized. Such tokens are produced instead of regular\nCOMMENT\ntokens only with the\nPyCF_TYPE_COMMENTS\nflag.\n- token.TYPE_COMMENT\u00b6\nToken value indicating that a type comment was recognized. Such tokens are produced instead of regular\nCOMMENT\ntokens only with the\nPyCF_TYPE_COMMENTS\nflag.\n- token.SOFT_KEYWORD\u00b6\nToken value indicating a soft keyword.\nThe tokenizer never produces this value. To check for a soft keyword, pass a\nNAME\ntoken\u2019s string to\nkeyword.issoftkeyword()\n.\n- token.ERRORTOKEN\u00b6\nToken value used to indicate wrong input.\nThe\ntokenize\nmodule generally indicates errors by raising exceptions instead of emitting this token. 
It can also emit tokens such as\nOP\nor\nNAME\nwith strings that are later rejected by the parser.\nThe remaining tokens represent specific operators and\ndelimiters.\n(The tokenize\nmodule reports these as OP\n; see exact_type\nin the tokenize\ndocumentation for details.)\nThe following non-token constants are provided:\n- token.N_TOKENS\u00b6\nThe number of token types defined in this module.\n- token.EXACT_TOKEN_TYPES\u00b6\nA dictionary mapping the string representation of a token to its numeric code.\nAdded in version 3.8.\nChanged in version 3.5: Added AWAIT\nand ASYNC\ntokens.\nChanged in version 3.7: Removed AWAIT\nand ASYNC\ntokens. \u201casync\u201d and \u201cawait\u201d are\nnow tokenized as NAME\ntokens.\nChanged in version 3.8: Added TYPE_COMMENT\n, TYPE_IGNORE\n, COLONEQUAL\n.\nAdded AWAIT\nand ASYNC\ntokens back (they\u2019re needed\nto support parsing older Python versions for ast.parse()\nwith\nfeature_version\nset to 6 or lower).\nChanged in version 3.12: Added EXCLAMATION\n.\nChanged in version 3.13: Removed AWAIT\nand ASYNC\ntokens again.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1578}
+{"url": "https://docs.python.org/3/library/sunau.html", "title": "sunau \u2014 Read and write Sun AU files", "content": "sunau\n\u2014 Read and write Sun AU files\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. 
The removal was decided in PEP 594.\nThe last version of Python that provided the sunau\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 83} +{"url": "https://docs.python.org/3/faq/general.html", "title": null, "content": "General Python FAQ\u00b6\nGeneral Information\u00b6\nWhat is Python?\u00b6\nPython is an interpreted, interactive, object-oriented programming language. It incorporates modules, exceptions, dynamic typing, very high level dynamic data types, and classes. It supports multiple programming paradigms beyond object-oriented programming, such as procedural and functional programming. Python combines remarkable power with very clear syntax. It has interfaces to many system calls and libraries, as well as to various window systems, and is extensible in C or C++. It is also usable as an extension language for applications that need a programmable interface. Finally, Python is portable: it runs on many Unix variants including Linux and macOS, and on Windows.\nTo find out more, start with The Python Tutorial. The Beginner\u2019s Guide to Python links to other introductory tutorials and resources for learning Python.\nWhat is the Python Software Foundation?\u00b6\nThe Python Software Foundation is an independent non-profit organization that holds the copyright on Python versions 2.1 and newer. The PSF\u2019s mission is to advance open source technology related to the Python programming language and to publicize the use of Python. The PSF\u2019s home page is at https://www.python.org/psf/.\nDonations to the PSF are tax-exempt in the US. If you use Python and find it helpful, please contribute via the PSF donation page.\nAre there copyright restrictions on the use of Python?\u00b6\nYou can do anything you want with the source, as long as you leave the copyrights in and display those copyrights in any documentation about Python that you produce. 
If you honor the copyright rules, it\u2019s OK to use Python for commercial use, to sell copies of Python in source or binary form (modified or unmodified), or to sell products that incorporate Python in some form. We would still like to know about all commercial use of Python, of course.\nSee the license page to find further explanations and the full text of the PSF License.\nThe Python logo is trademarked, and in certain cases permission is required to use it. Consult the Trademark Usage Policy for more information.\nWhy was Python created in the first place?\u00b6\nHere\u2019s a very brief summary of what started it all, written by Guido van Rossum:\nI had extensive experience with implementing an interpreted language in the ABC group at CWI, and from working with this group I had learned a lot about language design. This is the origin of many Python features, including the use of indentation for statement grouping and the inclusion of very-high-level data types (although the details are all different in Python).\nI had a number of gripes about the ABC language, but also liked many of its features. It was impossible to extend the ABC language (or its implementation) to remedy my complaints \u2013 in fact its lack of extensibility was one of its biggest problems. I had some experience with using Modula-2+ and talked with the designers of Modula-3 and read the Modula-3 report. Modula-3 is the origin of the syntax and semantics used for exceptions, and some other Python features.\nI was working in the Amoeba distributed operating system group at CWI. We needed a better way to do system administration than by writing either C programs or Bourne shell scripts, since Amoeba had its own system call interface which wasn\u2019t easily accessible from the Bourne shell. 
My experience with error handling in Amoeba made me acutely aware of the importance of exceptions as a programming language feature.\nIt occurred to me that a scripting language with a syntax like ABC but with access to the Amoeba system calls would fill the need. I realized that it would be foolish to write an Amoeba-specific language, so I decided that I needed a language that was generally extensible.\nDuring the 1989 Christmas holidays, I had a lot of time on my hand, so I decided to give it a try. During the next year, while still mostly working on it in my own time, Python was used in the Amoeba project with increasing success, and the feedback from colleagues made me add many early improvements.\nIn February 1991, after just over a year of development, I decided to post to USENET. The rest is in the\nMisc/HISTORY\nfile.\nWhat is Python good for?\u00b6\nPython is a high-level general-purpose programming language that can be applied to many different classes of problems.\nThe language comes with a large standard library that covers areas such as string processing (regular expressions, Unicode, calculating differences between files), internet protocols (HTTP, FTP, SMTP, XML-RPC, POP, IMAP), software engineering (unit testing, logging, profiling, parsing Python code), and operating system interfaces (system calls, filesystems, TCP/IP sockets). Look at the table of contents for The Python Standard Library to get an idea of what\u2019s available. A wide variety of third-party extensions are also available. 
Consult the Python Package Index to find packages of interest to you.\nHow does the Python version numbering scheme work?\u00b6\nPython versions are numbered \u201cA.B.C\u201d or \u201cA.B\u201d:\nA is the major version number \u2013 it is only incremented for really major changes in the language.\nB is the minor version number \u2013 it is incremented for less earth-shattering changes.\nC is the micro version number \u2013 it is incremented for each bugfix release.\nNot all releases are bugfix releases. In the run-up to a new feature release, a series of development releases are made, denoted as alpha, beta, or release candidate. Alphas are early releases in which interfaces aren\u2019t yet finalized; it\u2019s not unexpected to see an interface change between two alpha releases. Betas are more stable, preserving existing interfaces but possibly adding new modules, and release candidates are frozen, making no changes except as needed to fix critical bugs.\nAlpha, beta and release candidate versions have an additional suffix:\nThe suffix for an alpha version is \u201caN\u201d for some small number N.\nThe suffix for a beta version is \u201cbN\u201d for some small number N.\nThe suffix for a release candidate version is \u201crcN\u201d for some small number N.\nIn other words, all versions labeled 2.0aN precede the versions labeled 2.0bN, which precede versions labeled 2.0rcN, and those precede 2.0.\nYou may also find version numbers with a \u201c+\u201d suffix, e.g. \u201c2.2+\u201d. These are unreleased versions, built directly from the CPython development repository. In practice, after a final minor release is made, the version is incremented to the next minor version, which becomes the \u201ca0\u201d version, e.g. \u201c2.4a0\u201d.\nSee the Developer\u2019s Guide\nfor more information about the development cycle, and\nPEP 387 to learn more about Python\u2019s backward compatibility policy. 
See also\nthe documentation for sys.version\n, sys.hexversion\n, and\nsys.version_info\n.\nHow do I obtain a copy of the Python source?\u00b6\nThe latest Python source distribution is always available from python.org, at https://www.python.org/downloads/. The latest development sources can be obtained at https://github.com/python/cpython/.\nThe source distribution is a gzipped tar file containing the complete C source, Sphinx-formatted documentation, Python library modules, example programs, and several useful pieces of freely distributable software. The source will compile and run out of the box on most UNIX platforms.\nConsult the Getting Started section of the Python Developer\u2019s Guide for more information on getting the source code and compiling it.\nHow do I get documentation on Python?\u00b6\nThe standard documentation for the current stable version of Python is available at https://docs.python.org/3/. EPUB, plain text, and downloadable HTML versions are also available at https://docs.python.org/3/download.html.\nThe documentation is written in reStructuredText and processed by the Sphinx documentation tool. The reStructuredText source for the documentation is part of the Python source distribution.\nI\u2019ve never programmed before. Is there a Python tutorial?\u00b6\nThere are numerous tutorials and books available. The standard documentation includes The Python Tutorial.\nConsult the Beginner\u2019s Guide to find information for beginning Python programmers, including lists of tutorials.\nIs there a newsgroup or mailing list devoted to Python?\u00b6\nThere is a newsgroup, comp.lang.python, and a mailing list, python-list. The newsgroup and mailing list are gatewayed into each other \u2013 if you can read news it\u2019s unnecessary to subscribe to the mailing list. 
comp.lang.python is high-traffic, receiving hundreds of postings every day, and Usenet readers are often more able to cope with this volume.\nAnnouncements of new software releases and events can be found in comp.lang.python.announce, a low-traffic moderated list that receives about five postings per day. It\u2019s available as the python-announce mailing list.\nMore info about other mailing lists and newsgroups can be found at https://www.python.org/community/lists/.\nHow do I get a beta test version of Python?\u00b6\nAlpha and beta releases are available from https://www.python.org/downloads/. All releases are announced on the comp.lang.python and comp.lang.python.announce newsgroups and on the Python home page at https://www.python.org/; an RSS feed of news is available.\nYou can also access the development version of Python through Git. See The Python Developer\u2019s Guide for details.\nHow do I submit bug reports and patches for Python?\u00b6\nTo report a bug or submit a patch, use the issue tracker at https://github.com/python/cpython/issues.\nFor more information on how Python is developed, consult the Python Developer\u2019s Guide.\nAre there any published articles about Python that I can reference?\u00b6\nIt\u2019s probably best to cite your favorite book about Python.\nThe very first article about Python was written in 1991 and is now quite outdated.\nGuido van Rossum and Jelke de Boer, \u201cInteractively Testing Remote Servers Using the Python Programming Language\u201d, CWI Quarterly, Volume 4, Issue 4 (December 1991), Amsterdam, pp 283\u2013303.\nAre there any books on Python?\u00b6\nYes, there are many, and more are being published. 
See the python.org wiki at https://wiki.python.org/moin/PythonBooks for a list.\nYou can also search online bookstores for \u201cPython\u201d and filter out the Monty Python references; or perhaps search for \u201cPython\u201d and \u201clanguage\u201d.\nWhere in the world is www.python.org located?\u00b6\nThe Python project\u2019s infrastructure is located all over the world and is managed by the Python Infrastructure Team. Details here.\nWhy is it called Python?\u00b6\nWhen he began implementing Python, Guido van Rossum was also reading the published scripts from \u201cMonty Python\u2019s Flying Circus\u201d, a BBC comedy series from the 1970s. Van Rossum thought he needed a name that was short, unique, and slightly mysterious, so he decided to call the language Python.\nDo I have to like \u201cMonty Python\u2019s Flying Circus\u201d?\u00b6\nNo, but it helps. :)\nPython in the real world\u00b6\nHow stable is Python?\u00b6\nVery stable. New, stable releases have been coming out roughly every 6 to 18 months since 1991, and this seems likely to continue. As of version 3.9, Python will have a new feature release every 12 months (PEP 602).\nThe developers issue bugfix releases of older versions, so the stability of existing releases gradually improves. Bugfix releases, indicated by a third component of the version number (e.g. 3.5.3, 3.6.2), are managed for stability; only fixes for known problems are included in a bugfix release, and it\u2019s guaranteed that interfaces will remain the same throughout a series of bugfix releases.\nThe latest stable releases can always be found on the Python download page. Python 3.x is the recommended version and supported by most widely used libraries. 
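One practical consequence of this numbering: because sys.version_info is an ordered tuple, code can gate on a minimum bugfix release with a plain tuple comparison. A minimal sketch, in which the meets_minimum helper and the 3.5.3 threshold are illustrative rather than any real requirement:

```python
import sys

# Minimum (major, minor, micro) release this code claims to need --
# an illustrative threshold, not a real requirement.
MINIMUM = (3, 5, 3)

def meets_minimum(info=None):
    """Return True if `info` (defaulting to sys.version_info)
    is at least the MINIMUM bugfix release."""
    if info is None:
        info = sys.version_info
    # Compare only (major, minor, micro); tuples compare element-wise.
    return tuple(info)[:3] >= MINIMUM

print(meets_minimum((3, 5, 2, "final", 0)))  # False: older bugfix release
print(meets_minimum((3, 6, 0, "final", 0)))  # True: newer minor version
```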
Python 2.x is not maintained anymore.\nHow many people are using Python?\u00b6\nThere are probably millions of users, though it\u2019s difficult to obtain an exact count.\nPython is available for free download, so there are no sales figures, and it\u2019s available from many different sites and packaged with many Linux distributions, so download statistics don\u2019t tell the whole story either.\nThe comp.lang.python newsgroup is very active, but not all Python users post to the group or even read it.\nHave any significant projects been done in Python?\u00b6\nSee https://www.python.org/about/success for a list of projects that use Python. Consulting the proceedings for past Python conferences will reveal contributions from many different companies and organizations.\nHigh-profile Python projects include the Mailman mailing list manager and the Zope application server. Several Linux distributions, most notably Red Hat, have written part or all of their installer and system administration software in Python. Companies that use Python internally include Google, Yahoo, and Lucasfilm Ltd.\nWhat new developments are expected for Python in the future?\u00b6\nSee https://peps.python.org/ for the Python Enhancement Proposals (PEPs). PEPs are design documents describing a suggested new feature for Python, providing a concise technical specification and a rationale. Look for a PEP titled \u201cPython X.Y Release Schedule\u201d, where X.Y is a version that hasn\u2019t been publicly released yet.\nNew development is discussed on the python-dev mailing list.\nIs it reasonable to propose incompatible changes to Python?\u00b6\nIn general, no. There are already millions of lines of Python code around the world, so any change in the language that invalidates more than a very small fraction of existing programs has to be frowned upon. 
Even if you can provide a conversion program, there\u2019s still the problem of updating all documentation; many books have been written about Python, and we don\u2019t want to invalidate them all at a single stroke.\nProviding a gradual upgrade path is necessary if a feature has to be changed. PEP 5 describes the procedure followed for introducing backward-incompatible changes while minimizing disruption for users.\nIs Python a good language for beginning programmers?\u00b6\nYes.\nIt is still common to start students with a procedural and statically typed language such as Pascal, C, or a subset of C++ or Java. Students may be better served by learning Python as their first language. Python has a very simple and consistent syntax and a large standard library and, most importantly, using Python in a beginning programming course lets students concentrate on important programming skills such as problem decomposition and data type design. With Python, students can be quickly introduced to basic concepts such as loops and procedures. They can probably even work with user-defined objects in their very first course.\nFor a student who has never programmed before, using a statically typed language seems unnatural. It presents additional complexity that the student must master and slows the pace of the course. The students are trying to learn to think like a computer, decompose problems, design consistent interfaces, and encapsulate data. While learning to use a statically typed language is important in the long term, it is not necessarily the best topic to address in the students\u2019 first programming course.\nMany other aspects of Python make it a good first language. Like Java, Python has a large standard library so that students can be assigned programming projects very early in the course that do something. Assignments aren\u2019t restricted to the standard four-function calculator and check balancing programs. 
By using the standard library, students can gain the satisfaction of working on realistic applications as they learn the fundamentals of programming. Using the standard library also teaches students about code reuse. Third-party modules such as PyGame are also helpful in extending the students\u2019 reach.\nPython\u2019s interactive interpreter enables students to test language features while they\u2019re programming. They can keep a window with the interpreter running while they enter their program\u2019s source in another window. If they can\u2019t remember the methods for a list, they can do something like this:\n>>> L = []\n>>> dir(L)\n['__add__', '__class__', '__contains__', '__delattr__', '__delitem__',\n'__dir__', '__doc__', '__eq__', '__format__', '__ge__',\n'__getattribute__', '__getitem__', '__gt__', '__hash__', '__iadd__',\n'__imul__', '__init__', '__iter__', '__le__', '__len__', '__lt__',\n'__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__',\n'__repr__', '__reversed__', '__rmul__', '__setattr__', '__setitem__',\n'__sizeof__', '__str__', '__subclasshook__', 'append', 'clear',\n'copy', 'count', 'extend', 'index', 'insert', 'pop', 'remove',\n'reverse', 'sort']\n>>> [d for d in dir(L) if '__' not in d]\n['append', 'clear', 'copy', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort']\n>>> help(L.append)\nHelp on built-in function append:\nappend(...)\nL.append(object) -> None -- append object to end\n>>> L.append(1)\n>>> L\n[1]\nWith the interpreter, documentation is never far from the student as they are programming.\nThere are also good IDEs for Python. IDLE is a cross-platform IDE for Python that is written in Python using Tkinter. Emacs users will be happy to know that there is a very good Python mode for Emacs. All of these programming environments provide syntax highlighting, auto-indenting, and access to the interactive interpreter while coding. 
Consult the Python wiki for a full list of Python editing environments.\nIf you want to discuss Python\u2019s use in education, you may be interested in joining the edu-sig mailing list.", "code_snippets": [" ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n", "\n\n", "\n", "\n\n", "\n", "\n\n", "\n", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 4345} +{"url": "https://docs.python.org/3/whatsnew/3.5.html", "title": "What\u2019s New In Python 3.5", "content": "What\u2019s New In Python 3.5\u00b6\n- Editors:\nElvis Pranskevichus , Yury Selivanov \nThis article explains the new features in Python 3.5, compared to 3.4. Python 3.5 was released on September 13, 2015. See the changelog for a full list of changes.\nSee also\nPEP 478 - Python 3.5 Release Schedule\nSummary \u2013 Release highlights\u00b6\nNew syntax features:\nPEP 492, coroutines with async and await syntax.\nPEP 465, a new matrix multiplication operator:\na @ b\n.PEP 448, additional unpacking generalizations.\nNew library modules:\nNew built-in features:\nbytes % args\n,bytearray % args\n: PEP 461 \u2013 Adding%\nformatting to bytes and bytearray.New\nbytes.hex()\n,bytearray.hex()\nandmemoryview.hex()\nmethods. (Contributed by Arnon Yaari in bpo-9951.)memoryview\nnow supports tuple indexing (including multi-dimensional). (Contributed by Antoine Pitrou in bpo-23632.)Generators have a new\ngi_yieldfrom\nattribute, which returns the object being iterated byyield from\nexpressions. (Contributed by Benno Leslie and Yury Selivanov in bpo-24450.)A new\nRecursionError\nexception is now raised when maximum recursion depth is reached. (Contributed by Georg Brandl in bpo-19235.)\nCPython implementation improvements:\nWhen the\nLC_TYPE\nlocale is the POSIX locale (C\nlocale),sys.stdin\nandsys.stdout\nnow use thesurrogateescape\nerror handler, instead of thestrict\nerror handler. 
(Contributed by Victor Stinner in bpo-19977.).pyo\nfiles are no longer used and have been replaced by a more flexible scheme that includes the optimization level explicitly in.pyc\nname. (See PEP 488 overview.)Builtin and extension modules are now initialized in a multi-phase process, which is similar to how Python modules are loaded. (See PEP 489 overview.)\nSignificant improvements in the standard library:\ncollections.OrderedDict\nis now implemented in C, which makes it 4 to 100 times faster.The\nssl\nmodule gained support for Memory BIO, which decouples SSL protocol handling from network IO.The new\nos.scandir()\nfunction provides a better and significantly faster way of directory traversal.functools.lru_cache()\nhas been mostly reimplemented in C, yielding much better performance.The new\nsubprocess.run()\nfunction provides a streamlined way to run subprocesses.The\ntraceback\nmodule has been significantly enhanced for improved performance and developer convenience.\nSecurity improvements:\nSSLv3 is now disabled throughout the standard library. It can still be enabled by instantiating a\nssl.SSLContext\nmanually. (See bpo-22638 for more details; this change was backported to CPython 3.4 and 2.7.)HTTP cookie parsing is now stricter, in order to protect against potential injection attacks. (Contributed by Antoine Pitrou in bpo-22796.)\nWindows improvements:\nA new installer for Windows has replaced the old MSI. 
See Using Python on Windows for more information.\nWindows builds now use Microsoft Visual C++ 14.0, and extension modules should use the same.\nPlease read on for a comprehensive list of user-facing changes, including many other smaller improvements, CPython optimizations, deprecations, and potential porting issues.\nNew Features\u00b6\nPEP 492 - Coroutines with async and await syntax\u00b6\nPEP 492 greatly improves support for asynchronous programming in Python by adding awaitable objects, coroutine functions, asynchronous iteration, and asynchronous context managers.\nCoroutine functions are declared using the new async def\nsyntax:\n>>> async def coro():\n... return 'spam'\nInside a coroutine function, the new await\nexpression can be used\nto suspend coroutine execution until the result is available. Any object\ncan be awaited, as long as it implements the awaitable protocol by\ndefining the __await__()\nmethod.\nPEP 492 also adds async for\nstatement for convenient iteration\nover asynchronous iterables.\nAn example of a rudimentary HTTP client written using the new syntax:\nimport asyncio\nasync def http_get(domain):\nreader, writer = await asyncio.open_connection(domain, 80)\nwriter.write(b'\\r\\n'.join([\nb'GET / HTTP/1.1',\nb'Host: %b' % domain.encode('latin-1'),\nb'Connection: close',\nb'', b''\n]))\nasync for line in reader:\nprint('>>>', line)\nwriter.close()\nloop = asyncio.get_event_loop()\ntry:\nloop.run_until_complete(http_get('example.com'))\nfinally:\nloop.close()\nSimilarly to asynchronous iteration, there is a new syntax for asynchronous context managers. 
The following script:\nimport asyncio\nasync def coro(name, lock):\nprint('coro {}: waiting for lock'.format(name))\nasync with lock:\nprint('coro {}: holding the lock'.format(name))\nawait asyncio.sleep(1)\nprint('coro {}: releasing the lock'.format(name))\nloop = asyncio.get_event_loop()\nlock = asyncio.Lock()\ncoros = asyncio.gather(coro(1, lock), coro(2, lock))\ntry:\nloop.run_until_complete(coros)\nfinally:\nloop.close()\nwill output:\ncoro 2: waiting for lock\ncoro 2: holding the lock\ncoro 1: waiting for lock\ncoro 2: releasing the lock\ncoro 1: holding the lock\ncoro 1: releasing the lock\nNote that both async for\nand async with\ncan only\nbe used inside a coroutine function declared with async def\n.\nCoroutine functions are intended to be run inside a compatible event loop, such as the asyncio loop.\nNote\nChanged in version 3.5.2: Starting with CPython 3.5.2, __aiter__\ncan directly return\nasynchronous iterators. Returning\nan awaitable object will result in a\nPendingDeprecationWarning\n.\nSee more details in the Asynchronous Iterators documentation section.\nSee also\n- PEP 492 \u2013 Coroutines with async and await syntax\nPEP written and implemented by Yury Selivanov.\nPEP 465 - A dedicated infix operator for matrix multiplication\u00b6\nPEP 465 adds the @\ninfix operator for matrix multiplication.\nCurrently, no builtin Python types implement the new operator, however, it\ncan be implemented by defining __matmul__()\n,\n__rmatmul__()\n, and __imatmul__()\nfor regular,\nreflected, and in-place matrix multiplication.\nThe semantics of these methods is similar to that of\nmethods defining other infix arithmetic operators.\nMatrix multiplication is a notably common operation in many fields of\nmathematics, science, engineering, and the addition of @\nallows writing\ncleaner code:\nS = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)\ninstead of:\nS = dot((dot(H, beta) - r).T,\ndot(inv(dot(dot(H, V), H.T)), dot(H, beta) - r))\nNumPy 1.10 has 
support for the new operator:\n>>> import numpy\n>>> x = numpy.ones(3)\n>>> x\narray([ 1., 1., 1.])\n>>> m = numpy.eye(3)\n>>> m\narray([[ 1., 0., 0.],\n[ 0., 1., 0.],\n[ 0., 0., 1.]])\n>>> x @ m\narray([ 1., 1., 1.])\nSee also\n- PEP 465 \u2013 A dedicated infix operator for matrix multiplication\nPEP written by Nathaniel J. Smith; implemented by Benjamin Peterson.\nPEP 448 - Additional Unpacking Generalizations\u00b6\nPEP 448 extends the allowed uses of the *\niterable unpacking\noperator and **\ndictionary unpacking operator. It is now possible\nto use an arbitrary number of unpackings in function calls:\n>>> print(*[1], *[2], 3, *[4, 5])\n1 2 3 4 5\n>>> def fn(a, b, c, d):\n... print(a, b, c, d)\n...\n>>> fn(**{'a': 1, 'c': 3}, **{'b': 2, 'd': 4})\n1 2 3 4\nSimilarly, tuple, list, set, and dictionary displays allow multiple unpackings (see Expression lists and Dictionary displays):\n>>> *range(4), 4\n(0, 1, 2, 3, 4)\n>>> [*range(4), 4]\n[0, 1, 2, 3, 4]\n>>> {*range(4), 4, *(5, 6, 7)}\n{0, 1, 2, 3, 4, 5, 6, 7}\n>>> {'x': 1, **{'y': 2}}\n{'x': 1, 'y': 2}\nSee also\n- PEP 448 \u2013 Additional Unpacking Generalizations\nPEP written by Joshua Landau; implemented by Neil Girdhar, Thomas Wouters, and Joshua Landau.\nPEP 461 - percent formatting support for bytes and bytearray\u00b6\nPEP 461 adds support for the %\ninterpolation operator to bytes\nand bytearray\n.\nWhile interpolation is usually thought of as a string operation, there are\ncases where interpolation on bytes\nor bytearrays\nmakes sense, and the\nwork needed to make up for this missing functionality detracts from the\noverall readability of the code. This issue is particularly important when\ndealing with wire format protocols, which are often a mixture of binary\nand ASCII compatible text.\nExamples:\n>>> b'Hello %b!' 
% b'World'\nb'Hello World!'\n>>> b'x=%i y=%f' % (1, 2.5)\nb'x=1 y=2.500000'\nUnicode is not allowed for %b\n, but it is accepted by %a\n(equivalent of\nrepr(obj).encode('ascii', 'backslashreplace')\n):\n>>> b'Hello %b!' % 'World'\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: %b requires bytes, or an object that implements __bytes__, not 'str'\n>>> b'price: %a' % '10\u20ac'\nb\"price: '10\\\\u20ac'\"\nNote that %s\nand %r\nconversion types, although supported, should\nonly be used in codebases that need compatibility with Python 2.\nSee also\n- PEP 461 \u2013 Adding % formatting to bytes and bytearray\nPEP written by Ethan Furman; implemented by Neil Schemenauer and Ethan Furman.\nPEP 484 - Type Hints\u00b6\nFunction annotation syntax has been a Python feature since version 3.0 (PEP 3107), however the semantics of annotations has been left undefined.\nExperience has shown that the majority of function annotation uses were to provide type hints to function parameters and return values. It became evident that it would be beneficial for Python users, if the standard library included the base definitions and tools for type annotations.\nPEP 484 introduces a provisional module to provide these standard definitions and tools, along with some conventions for situations where annotations are not available.\nFor example, here is a simple function whose argument and return type are declared in the annotations:\ndef greeting(name: str) -> str:\nreturn 'Hello ' + name\nWhile these annotations are available at runtime through the usual\n__annotations__\nattribute, no automatic type checking happens\nat runtime. Instead, it is assumed that a separate off-line type checker\n(e.g. mypy) will be used for on-demand\nsource code analysis.\nThe type system supports unions, generic types, and a special type\nnamed Any\nwhich is consistent with (i.e. 
assignable to\nand from) all types.\nPEP 471 - os.scandir() function \u2013 a better and faster directory iterator\u00b6\nPEP 471 adds a new directory iteration function, os.scandir()\n,\nto the standard library. Additionally, os.walk()\nis now\nimplemented using scandir\n, which makes it 3 to 5 times faster\non POSIX systems and 7 to 20 times faster on Windows systems. This is\nlargely achieved by greatly reducing the number of calls to os.stat()\nrequired to walk a directory tree.\nAdditionally, scandir\nreturns an iterator, as opposed to returning\na list of file names, which improves memory efficiency when iterating\nover very large directories.\nThe following example shows a simple use of os.scandir()\nto display all\nthe files (excluding directories) in the given path that don\u2019t start with\n'.'\n. The entry.is_file()\ncall will generally\nnot make an additional system call:\nfor entry in os.scandir(path):\nif not entry.name.startswith('.') and entry.is_file():\nprint(entry.name)\nSee also\n- PEP 471 \u2013 os.scandir() function \u2013 a better and faster directory iterator\nPEP written and implemented by Ben Hoyt with the help of Victor Stinner.\nPEP 475: Retry system calls failing with EINTR\u00b6\nAn errno.EINTR\nerror code is returned whenever a system call, that\nis waiting for I/O, is interrupted by a signal. Previously, Python would\nraise InterruptedError\nin such cases. This meant that, when writing a\nPython application, the developer had two choices:\nIgnore the\nInterruptedError\n.Handle the\nInterruptedError\nand attempt to restart the interrupted system call at every call site.\nThe first option makes an application fail intermittently. The second option adds a large amount of boilerplate that makes the code nearly unreadable. Compare:\nprint(\"Hello World\")\nand:\nwhile True:\ntry:\nprint(\"Hello World\")\nbreak\nexcept InterruptedError:\ncontinue\nPEP 475 implements automatic retry of system calls on\nEINTR\n. 
This removes the burden of dealing with EINTR\nor InterruptedError\nin user code in most situations and makes\nPython programs, including the standard library, more robust. Note that\nthe system call is only retried if the signal handler does not raise an\nexception.\nBelow is a list of functions which are now retried when interrupted by a signal:\nfunctions of the\nfaulthandler\nmodule;os\nfunctions:fchdir()\n,fchmod()\n,fchown()\n,fdatasync()\n,fstat()\n,fstatvfs()\n,fsync()\n,ftruncate()\n,mkfifo()\n,mknod()\n,open()\n,posix_fadvise()\n,posix_fallocate()\n,pread()\n,pwrite()\n,read()\n,readv()\n,sendfile()\n,wait3()\n,wait4()\n,wait()\n,waitid()\n,waitpid()\n,write()\n,writev()\n;special cases:\nos.close()\nandos.dup2()\nnow ignoreEINTR\nerrors; the syscall is not retried (see the PEP for the rationale);select\nfunctions:devpoll.poll()\n,epoll.poll()\n,kqueue.control()\n,poll.poll()\n,select()\n;methods of the\nsocket\nclass:accept()\n,connect()\n(except for non-blocking sockets),recv()\n,recvfrom()\n,recvmsg()\n,send()\n,sendall()\n,sendmsg()\n,sendto()\n;\nSee also\n- PEP 475 \u2013 Retry system calls failing with EINTR\nPEP and implementation written by Charles-Fran\u00e7ois Natali and Victor Stinner, with the help of Antoine Pitrou (the French connection).\nPEP 479: Change StopIteration handling inside generators\u00b6\nThe interaction of generators and StopIteration\nin Python 3.4 and\nearlier was sometimes surprising, and could conceal obscure bugs. Previously,\nStopIteration\nraised accidentally inside a generator function was\ninterpreted as the end of the iteration by the loop construct driving the\ngenerator.\nPEP 479 changes the behavior of generators: when a StopIteration\nexception is raised inside a generator, it is replaced with a\nRuntimeError\nbefore it exits the generator frame. 
The main goal of\nthis change is to ease debugging in the situation where an unguarded\nnext()\ncall raises StopIteration\nand causes the iteration controlled\nby the generator to terminate silently. This is particularly pernicious in\ncombination with the yield from\nconstruct.\nThis is a backwards incompatible change, so to enable the new behavior, a __future__ import is necessary:\n>>> from __future__ import generator_stop\n>>> def gen():\n... next(iter([]))\n... yield\n...\n>>> next(gen())\nTraceback (most recent call last):\nFile \"\", line 2, in gen\nStopIteration\nThe above exception was the direct cause of the following exception:\nTraceback (most recent call last):\nFile \"\", line 1, in \nRuntimeError: generator raised StopIteration\nWithout a __future__\nimport, a PendingDeprecationWarning\nwill be\nraised whenever a StopIteration\nexception is raised inside a generator.\nSee also\n- PEP 479 \u2013 Change StopIteration handling inside generators\nPEP written by Chris Angelico and Guido van Rossum. Implemented by Chris Angelico, Yury Selivanov and Nick Coghlan.\nPEP 485: A function for testing approximate equality\u00b6\nPEP 485 adds the math.isclose()\nand cmath.isclose()\nfunctions which tell whether two values are approximately equal or\n\u201cclose\u201d to each other. 
Whether or not two values are considered\nclose is determined according to given absolute and relative tolerances.\nRelative tolerance is the maximum allowed difference between isclose\narguments, relative to the larger absolute value:\n>>> import math\n>>> a = 5.0\n>>> b = 4.99998\n>>> math.isclose(a, b, rel_tol=1e-5)\nTrue\n>>> math.isclose(a, b, rel_tol=1e-6)\nFalse\nIt is also possible to compare two values using absolute tolerance, which must be a non-negative value:\n>>> import math\n>>> a = 5.0\n>>> b = 4.99998\n>>> math.isclose(a, b, abs_tol=0.00003)\nTrue\n>>> math.isclose(a, b, abs_tol=0.00001)\nFalse\nSee also\n- PEP 485 \u2013 A function for testing approximate equality\nPEP written by Christopher Barker; implemented by Chris Barker and Tal Einat.\nPEP 486: Make the Python Launcher aware of virtual environments\u00b6\nPEP 486 makes the Windows launcher (see PEP 397) aware of an active\nvirtual environment. When the default interpreter would be used and the\nVIRTUAL_ENV\nenvironment variable is set, the interpreter in the virtual\nenvironment will be used.\nSee also\n- PEP 486 \u2013 Make the Python Launcher aware of virtual environments\nPEP written and implemented by Paul Moore.\nPEP 488: Elimination of PYO files\u00b6\nPEP 488 does away with the concept of .pyo\nfiles. This means that\n.pyc\nfiles represent both unoptimized and optimized bytecode. To prevent the\nneed to constantly regenerate bytecode files, .pyc\nfiles now have an\noptional opt-\ntag in their name when the bytecode is optimized. This has the\nside-effect of no more bytecode file name clashes when running under either\n-O\nor -OO\n. 
Consequently, bytecode files generated from\n-O\n, and -OO\nmay now exist simultaneously.\nimportlib.util.cache_from_source()\nhas an updated API to help with\nthis change.\nSee also\n- PEP 488 \u2013 Elimination of PYO files\nPEP written and implemented by Brett Cannon.\nPEP 489: Multi-phase extension module initialization\u00b6\nPEP 489 updates extension module initialization to take advantage of the two step module loading mechanism introduced by PEP 451 in Python 3.4.\nThis change brings the import semantics of extension modules that opt-in to using the new mechanism much closer to those of Python source and bytecode modules, including the ability to use any valid identifier as a module name, rather than being restricted to ASCII.\nSee also\n- PEP 489 \u2013 Multi-phase extension module initialization\nPEP written by Petr Viktorin, Stefan Behnel, and Nick Coghlan; implemented by Petr Viktorin.\nOther Language Changes\u00b6\nSome smaller changes made to the core Python language are:\nAdded the\n\"namereplace\"\nerror handlers. The \"backslashreplace\"\nerror handlers now work with decoding and translating. (Contributed by Serhiy Storchaka in bpo-19676 and bpo-22286.)\nThe\n-b\noption now affects comparisons of\nbytes\nwith\nint\n. (Contributed by Serhiy Storchaka in bpo-23681.)\nNew Kazakh\nkz1048\nand Tajik\nkoi8_t\ncodecs. (Contributed by Serhiy Storchaka in bpo-22682 and bpo-22681.)\nProperty docstrings are now writable. This is especially useful for\ncollections.namedtuple()\ndocstrings. (Contributed by Berker Peksag in bpo-24064.)\nCircular imports involving relative imports are now supported. 
(Contributed by Brett Cannon and Antoine Pitrou in bpo-17636.)\nNew Modules\u00b6\ntyping\u00b6\nThe new typing\nprovisional module\nprovides standard definitions and tools for function type annotations.\nSee Type Hints for more information.\nzipapp\u00b6\nThe new zipapp\nmodule (specified in PEP 441) provides an API and\ncommand line tool for creating executable Python Zip Applications, which\nwere introduced in Python 2.6 in bpo-1739468, but which were not well\npublicized, either at the time or since.\nWith the new module, bundling your application is as simple as putting all\nthe files, including a __main__.py\nfile, into a directory myapp\nand running:\n$ python -m zipapp myapp\n$ python myapp.pyz\nThe module implementation has been contributed by Paul Moore in bpo-23491.\nSee also\nPEP 441 \u2013 Improving Python ZIP Application Support\nImproved Modules\u00b6\nargparse\u00b6\nThe ArgumentParser\nclass now allows disabling\nabbreviated usage of long options by setting\nallow_abbrev to False\n. (Contributed by Jonathan Paugh,\nSteven Bethard, paul j3 and Daniel Eriksson in bpo-14910.)\nasyncio\u00b6\nSince the asyncio\nmodule is provisional,\nall changes introduced in Python 3.5 have also been backported to Python 3.4.x.\nNotable changes in the asyncio\nmodule since Python 3.4.0:\nNew debugging APIs:\nloop.set_debug()\nandloop.get_debug()\nmethods. (Contributed by Victor Stinner.)The proactor event loop now supports SSL. (Contributed by Antoine Pitrou and Victor Stinner in bpo-22560.)\nA new\nloop.is_closed()\nmethod to check if the event loop is closed. (Contributed by Victor Stinner in bpo-21326.)A new\nloop.create_task()\nto conveniently create and schedule a newTask\nfor a coroutine. Thecreate_task\nmethod is also used by all asyncio functions that wrap coroutines into tasks, such asasyncio.wait()\n,asyncio.gather()\n, etc. 
(Contributed by Victor Stinner.)A new\ntransport.get_write_buffer_limits()\nmethod to inquire for high- and low- water limits of the flow control. (Contributed by Victor Stinner.)The\nasync()\nfunction is deprecated in favor ofensure_future()\n. (Contributed by Yury Selivanov.)New\nloop.set_task_factory()\nandloop.get_task_factory()\nmethods to customize the task factory thatloop.create_task()\nmethod uses. (Contributed by Yury Selivanov.)New\nQueue.join()\nandQueue.task_done()\nqueue methods. (Contributed by Victor Stinner.)The\nJoinableQueue\nclass was removed, in favor of theasyncio.Queue\nclass. (Contributed by Victor Stinner.)\nUpdates in 3.5.1:\nThe\nensure_future()\nfunction and all functions that use it, such asloop.run_until_complete()\n, now accept all kinds of awaitable objects. (Contributed by Yury Selivanov.)New\nrun_coroutine_threadsafe()\nfunction to submit coroutines to event loops from other threads. (Contributed by Vincent Michel.)New\nTransport.is_closing()\nmethod to check if the transport is closing or closed. (Contributed by Yury Selivanov.)The\nloop.create_server()\nmethod can now accept a list of hosts. (Contributed by Yann Sionneau.)\nUpdates in 3.5.2:\nNew\nloop.create_future()\nmethod to create Future objects. This allows alternative event loop implementations, such as uvloop, to provide a fasterasyncio.Future\nimplementation. (Contributed by Yury Selivanov.)New\nloop.get_exception_handler()\nmethod to get the current exception handler. (Contributed by Yury Selivanov.)New\nStreamReader.readuntil()\nmethod to read data from the stream until a separator bytes sequence appears. (Contributed by Mark Korenberg.)The\nloop.create_connection()\nandloop.create_server()\nmethods are optimized to avoid calling the systemgetaddrinfo\nfunction if the address is already resolved. (Contributed by A. Jesse Jiryu Davis.)The\nloop.sock_connect(sock, address)\nno longer requires the address to be resolved prior to the call. (Contributed by A. 
Jesse Jiryu Davis.)\nbz2\u00b6\nThe BZ2Decompressor.decompress\nmethod now accepts an optional max_length argument to limit the maximum\nsize of decompressed data. (Contributed by Nikolaus Rath in bpo-15955.)\ncgi\u00b6\nThe FieldStorage\nclass now supports the context manager\nprotocol. (Contributed by Berker Peksag in bpo-20289.)\ncmath\u00b6\nA new function isclose()\nprovides a way to test for approximate\nequality. (Contributed by Chris Barker and Tal Einat in bpo-24270.)\ncode\u00b6\nThe InteractiveInterpreter.showtraceback()\nmethod now prints the full chained traceback, just like the interactive\ninterpreter. (Contributed by Claudiu Popa in bpo-17442.)\ncollections\u00b6\nThe OrderedDict\nclass is now implemented in C, which\nmakes it 4 to 100 times faster. (Contributed by Eric Snow in bpo-16991.)\nOrderedDict.items()\n, OrderedDict.keys()\n,\nand OrderedDict.values()\nviews now support reversed()\niteration.\n(Contributed by Serhiy Storchaka in bpo-19505.)\nThe deque\nclass now defines\nindex()\n, insert()\n, and\ncopy()\n, and supports the +\nand *\noperators.\nThis allows deques to be recognized as a MutableSequence\nand improves their substitutability for lists.\n(Contributed by Raymond Hettinger in bpo-23704.)\nDocstrings produced by namedtuple()\ncan now be updated:\nPoint = namedtuple('Point', ['x', 'y'])\nPoint.__doc__ += ': Cartesian coordinate'\nPoint.x.__doc__ = 'abscissa'\nPoint.y.__doc__ = 'ordinate'\n(Contributed by Berker Peksag in bpo-24064.)\nThe UserString\nclass now implements the\n__getnewargs__()\n, __rmod__()\n, casefold()\n,\nformat_map()\n, isprintable()\n, and maketrans()\nmethods to match the corresponding methods of str\n.\n(Contributed by Joe Jevnik in bpo-22189.)\ncollections.abc\u00b6\nThe Sequence.index()\nmethod now\naccepts start and stop arguments to match the corresponding methods\nof tuple\n, list\n, etc.\n(Contributed by Devin Jeanpierre in bpo-23086.)\nA new Generator\nabstract base class. 
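The new Generator ABC makes "generator-likeness" checkable with isinstance(); a quick sketch:

```python
from collections.abc import Generator

def countdown(n):
    # A plain generator function; calling it returns a generator object,
    # which is recognized as a virtual subclass of the Generator ABC.
    while n > 0:
        yield n
        n -= 1

gen = countdown(3)
print(isinstance(gen, Generator))  # True
print(list(gen))                   # [3, 2, 1]
```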
(Contributed\nby Stefan Behnel in bpo-24018.)\nNew Awaitable\n, Coroutine\n,\nAsyncIterator\n, and\nAsyncIterable\nabstract base classes.\n(Contributed by Yury Selivanov in bpo-24184.)\nFor earlier Python versions, a backport of the new ABCs is available in an external PyPI package.\ncompileall\u00b6\nA new compileall\noption, -j N\n, allows running N workers\nsimultaneously to perform parallel bytecode compilation.\nThe compile_dir()\nfunction has a corresponding workers\nparameter. (Contributed by Claudiu Popa in bpo-16104.)\nAnother new option, -r\n, allows controlling the maximum recursion\nlevel for subdirectories. (Contributed by Claudiu Popa in bpo-19628.)\nThe -q\ncommand line option can now be specified more than once, in\nwhich case all output, including errors, will be suppressed. The corresponding\nquiet\nparameter in compile_dir()\n,\ncompile_file()\n, and compile_path()\ncan now\naccept an integer value indicating the level of output suppression.\n(Contributed by Thomas Kluyver in bpo-21338.)\nconcurrent.futures\u00b6\nThe Executor.map()\nmethod now accepts a\nchunksize argument to allow batching of tasks to improve performance when\nProcessPoolExecutor()\nis used.\n(Contributed by Dan O\u2019Reilly in bpo-11271.)\nThe number of workers in the ThreadPoolExecutor\nconstructor is optional now. The default value is 5 times the number of CPUs.\n(Contributed by Claudiu Popa in bpo-21527.)\nconfigparser\u00b6\nconfigparser\nnow provides a way to customize the conversion\nof values by specifying a dictionary of converters in the\nConfigParser\nconstructor, or by defining them\nas methods in ConfigParser\nsubclasses. Converters defined in\na parser instance are inherited by its section proxies.\nExample:\n>>> import configparser\n>>> conv = {}\n>>> conv['list'] = lambda v: [e.strip() for e in v.split() if e.strip()]\n>>> cfg = configparser.ConfigParser(converters=conv)\n>>> cfg.read_string(\"\"\"\n... [s]\n... list = a b c d e f g\n... 
\"\"\")\n>>> cfg.get('s', 'list')\n'a b c d e f g'\n>>> cfg.getlist('s', 'list')\n['a', 'b', 'c', 'd', 'e', 'f', 'g']\n>>> section = cfg['s']\n>>> section.getlist('list')\n['a', 'b', 'c', 'd', 'e', 'f', 'g']\n(Contributed by \u0141ukasz Langa in bpo-18159.)\ncontextlib\u00b6\nThe new redirect_stderr()\ncontext manager (similar to\nredirect_stdout()\n) makes it easier for utility scripts to\nhandle inflexible APIs that write their output to sys.stderr\nand\ndon\u2019t provide any options to redirect it:\n>>> import contextlib, io, logging\n>>> f = io.StringIO()\n>>> with contextlib.redirect_stderr(f):\n... logging.warning('warning')\n...\n>>> f.getvalue()\n'WARNING:root:warning\\n'\n(Contributed by Berker Peksag in bpo-22389.)\ncsv\u00b6\nThe writerow()\nmethod now supports arbitrary iterables,\nnot just sequences. (Contributed by Serhiy Storchaka in bpo-23171.)\ncurses\u00b6\nThe new update_lines_cols()\nfunction updates the LINES\nand COLS\nmodule variables. This is useful for detecting\nmanual screen resizing. (Contributed by Arnon Yaari in bpo-4254.)\ndbm\u00b6\ndumb.open\nalways creates a new database when the flag\nhas the value \"n\"\n. (Contributed by Claudiu Popa in bpo-18039.)\ndifflib\u00b6\nThe charset of HTML documents generated by\nHtmlDiff.make_file()\ncan now be customized by using a new charset keyword-only argument.\nThe default charset of HTML document changed from \"ISO-8859-1\"\nto \"utf-8\"\n.\n(Contributed by Berker Peksag in bpo-2052.)\nThe diff_bytes()\nfunction can now compare lists of byte\nstrings. This fixes a regression from Python 2.\n(Contributed by Terry J. 
Reedy and Greg Ward in bpo-17445.)\ndistutils\u00b6\nBoth the build\nand build_ext\ncommands now accept a -j\noption to\nenable parallel building of extension modules.\n(Contributed by Antoine Pitrou in bpo-5309.)\nThe distutils\nmodule now supports xz\ncompression, and can be\nenabled by passing xztar\nas an argument to bdist --format\n.\n(Contributed by Serhiy Storchaka in bpo-16314.)\ndoctest\u00b6\nThe DocTestSuite()\nfunction returns an empty\nunittest.TestSuite\nif module contains no docstrings, instead of\nraising ValueError\n. (Contributed by Glenn Jones in bpo-15916.)\nemail\u00b6\nA new policy option Policy.mangle_from_\ncontrols whether or not lines that start with \"From \"\nin email bodies are\nprefixed with a \">\"\ncharacter by generators. The default is True\nfor\ncompat32\nand False\nfor all other policies.\n(Contributed by Milan Oberkirch in bpo-20098.)\nA new\nMessage.get_content_disposition()\nmethod provides easy access to a canonical value for the\nContent-Disposition header.\n(Contributed by Abhilash Raj in bpo-21083.)\nA new policy option EmailPolicy.utf8\ncan be set to True\nto encode email headers using the UTF-8 charset instead\nof using encoded words. This allows Messages\nto be formatted according to\nRFC 6532 and used with an SMTP server that supports the RFC 6531\nSMTPUTF8\nextension. (Contributed by R. 
David Murray in\nbpo-24211.)\nThe mime.text.MIMEText\nconstructor now\naccepts a charset.Charset\ninstance.\n(Contributed by Claude Paroz and Berker Peksag in bpo-16324.)\nenum\u00b6\nThe Enum\ncallable has a new parameter start to\nspecify the initial number of enum values if only names are provided:\n>>> Animal = enum.Enum('Animal', 'cat dog', start=10)\n>>> Animal.cat\n<Animal.cat: 10>\n>>> Animal.dog\n<Animal.dog: 11>\n(Contributed by Ethan Furman in bpo-21706.)\nfaulthandler\u00b6\nThe enable()\n, register()\n,\ndump_traceback()\nand\ndump_traceback_later()\nfunctions now accept file\ndescriptors in addition to file-like objects.\n(Contributed by Wei Wu in bpo-23566.)\nfunctools\u00b6\nMost of the lru_cache()\nmachinery is now implemented in C, making\nit significantly faster. (Contributed by Matt Joiner, Alexey Kachayev, and\nSerhiy Storchaka in bpo-14373.)\nglob\u00b6\nThe iglob()\nand glob()\nfunctions now support recursive\nsearch in subdirectories, using the \"**\"\npattern.\n(Contributed by Serhiy Storchaka in bpo-13968.)\ngzip\u00b6\nThe mode argument of the GzipFile\nconstructor now\naccepts \"x\"\nto request exclusive creation.\n(Contributed by Tim Heaney in bpo-19222.)\nheapq\u00b6\nElement comparison in merge()\ncan now be customized by\npassing a key function in a new optional key keyword argument,\nand a new optional reverse keyword argument can be used to reverse element\ncomparison:\n>>> import heapq\n>>> a = ['9', '777', '55555']\n>>> b = ['88', '6666']\n>>> list(heapq.merge(a, b, key=len))\n['9', '88', '777', '6666', '55555']\n>>> list(heapq.merge(reversed(a), reversed(b), key=len, reverse=True))\n['55555', '6666', '777', '88', '9']\n(Contributed by Raymond Hettinger in bpo-13742.)\nhttp\u00b6\nA new HTTPStatus\nenum that defines a set of\nHTTP status codes, reason phrases and long descriptions written in English.\n(Contributed by Demian Brecht in bpo-21793.)\nhttp.client\u00b6\nHTTPConnection.getresponse()\nnow raises a RemoteDisconnected\nexception when a\nremote server 
connection is closed unexpectedly. Additionally, if a\nConnectionError\n(of which RemoteDisconnected\nis a subclass) is raised, the client socket is now closed automatically,\nand will reconnect on the next request:\nimport http.client\nconn = http.client.HTTPConnection('www.python.org')\nfor retries in range(3):\n    try:\n        conn.request('GET', '/')\n        resp = conn.getresponse()\n    except http.client.RemoteDisconnected:\n        pass\n(Contributed by Martin Panter in bpo-3566.)\nidlelib and IDLE\u00b6\nSince idlelib implements the IDLE shell and editor and is not intended for\nimport by other programs, it gets improvements with every release. See\nLib/idlelib/NEWS.txt\nfor a cumulative list of changes since 3.4.0,\nas well as changes made in future 3.5.x releases. This file is also available\nfrom the IDLE dialog.\nimaplib\u00b6\nThe IMAP4\nclass now supports the context manager protocol.\nWhen used in a with\nstatement, the IMAP4 LOGOUT\ncommand will be called automatically at the end of the block.\n(Contributed by Tarek Ziad\u00e9 and Serhiy Storchaka in bpo-4972.)\nThe imaplib\nmodule now supports RFC 5161 (ENABLE Extension)\nand RFC 6855 (UTF-8 Support) via the IMAP4.enable()\nmethod. A new IMAP4.utf8_enabled\nattribute tracks whether or not RFC 6855 support is enabled.\n(Contributed by Milan Oberkirch, R. David Murray, and Maciej Szulik in\nbpo-21800.)\nThe imaplib\nmodule now automatically encodes non-ASCII string usernames\nand passwords using UTF-8, as recommended by the RFCs. 
(Contributed by Milan\nOberkirch in bpo-21800.)\nimghdr\u00b6\nThe what()\nfunction now recognizes the\nOpenEXR format\n(contributed by Martin Vignali and Claudiu Popa in bpo-20295),\nand the WebP format\n(contributed by Fabrice Aneche and Claudiu Popa in bpo-20197.)\nimportlib\u00b6\nThe util.LazyLoader\nclass allows for\nlazy loading of modules in applications where startup time is important.\n(Contributed by Brett Cannon in bpo-17621.)\nThe abc.InspectLoader.source_to_code()\nmethod is now a static method. This makes it easier to initialize a module\nobject with code compiled from a string by running\nexec(code, module.__dict__)\n.\n(Contributed by Brett Cannon in bpo-21156.)\nThe new util.module_from_spec()\nfunction is now the preferred way to create a new module. As opposed to\ncreating a types.ModuleType\ninstance directly, this new function\nwill set the various import-controlled attributes based on the passed-in\nspec object. (Contributed by Brett Cannon in bpo-20383.)\ninspect\u00b6\nBoth the Signature\nand Parameter\nclasses are\nnow picklable and hashable. (Contributed by Yury Selivanov in bpo-20726\nand bpo-20334.)\nA new\nBoundArguments.apply_defaults()\nmethod provides a way to set default values for missing arguments:\n>>> def foo(a, b='ham', *args): pass\n>>> ba = inspect.signature(foo).bind('spam')\n>>> ba.apply_defaults()\n>>> ba.arguments\nOrderedDict([('a', 'spam'), ('b', 'ham'), ('args', ())])\n(Contributed by Yury Selivanov in bpo-24190.)\nA new class method\nSignature.from_callable()\nmakes\nsubclassing of Signature\neasier. 
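Signature.from_callable() makes building (and subclassing) signatures straightforward; a minimal sketch with a hypothetical subclass name:

```python
import inspect

class MySignature(inspect.Signature):
    """A trivial Signature subclass (hypothetical name, for illustration)."""

def greet(name, punctuation='!'):
    return name + punctuation

# from_callable() is a classmethod, so it returns an instance of the subclass.
sig = MySignature.from_callable(greet)
print(type(sig) is MySignature)  # True
print(str(sig))                  # (name, punctuation='!')
```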
(Contributed\nby Yury Selivanov and Eric Snow in bpo-17373.)\nThe signature()\nfunction now accepts a follow_wrapped\noptional keyword argument, which, when set to False\n, disables automatic\nfollowing of __wrapped__\nlinks.\n(Contributed by Yury Selivanov in bpo-20691.)\nA set of new functions to inspect\ncoroutine functions and\ncoroutine objects has been added:\niscoroutine()\n, iscoroutinefunction()\n,\nisawaitable()\n, getcoroutinelocals()\n,\nand getcoroutinestate()\n.\n(Contributed by Yury Selivanov in bpo-24017 and bpo-24400.)\nThe stack()\n, trace()\n,\ngetouterframes()\n, and getinnerframes()\nfunctions now return a list of named tuples.\n(Contributed by Daniel Shahaf in bpo-16808.)\nio\u00b6\nA new BufferedIOBase.readinto1()\nmethod, that uses at most one call to the underlying raw stream\u2019s\nRawIOBase.read()\nor\nRawIOBase.readinto()\nmethods.\n(Contributed by Nikolaus Rath in bpo-20578.)\nipaddress\u00b6\nBoth the IPv4Network\nand IPv6Network\nclasses\nnow accept an (address, netmask)\ntuple argument, so as to easily construct\nnetwork objects from existing addresses:\n>>> import ipaddress\n>>> ipaddress.IPv4Network(('127.0.0.0', 8))\nIPv4Network('127.0.0.0/8')\n>>> ipaddress.IPv4Network(('127.0.0.0', '255.0.0.0'))\nIPv4Network('127.0.0.0/8')\n(Contributed by Peter Moody and Antoine Pitrou in bpo-16531.)\nA new reverse_pointer\nattribute for the\nIPv4Address\nand IPv6Address\nclasses\nreturns the name of the reverse DNS PTR record:\n>>> import ipaddress\n>>> addr = ipaddress.IPv4Address('127.0.0.1')\n>>> addr.reverse_pointer\n'1.0.0.127.in-addr.arpa'\n>>> addr6 = ipaddress.IPv6Address('::1')\n>>> addr6.reverse_pointer\n'1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa'\n(Contributed by Leon Weber in bpo-20480.)\njson\u00b6\nThe json.tool\ncommand line interface now preserves the order of keys in\nJSON objects passed in input. The new --sort-keys\noption can be used\nto sort the keys alphabetically. 
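The json.tool behaviour can be checked from a script by invoking the module as a subprocess; a sketch:

```python
import json
import subprocess
import sys

raw = '{"b": 1, "a": 2}'
# --sort-keys (new in 3.5) orders object keys alphabetically in the output;
# without it, the input order is preserved.
out = subprocess.run(
    [sys.executable, '-m', 'json.tool', '--sort-keys'],
    input=raw.encode(), stdout=subprocess.PIPE).stdout.decode()
print(out)
```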
(Contributed by Berker Peksag\nin bpo-21650.)\nJSON decoder now raises JSONDecodeError\ninstead of\nValueError\nto provide better context information about the error.\n(Contributed by Serhiy Storchaka in bpo-19361.)\nlinecache\u00b6\nA new lazycache()\nfunction can be used to capture information\nabout a non-file-based module to permit getting its lines later via\ngetline()\n. This avoids doing I/O until a line is actually\nneeded, without having to carry the module globals around indefinitely.\n(Contributed by Robert Collins in bpo-17911.)\nlocale\u00b6\nA new delocalize()\nfunction can be used to convert a string into\na normalized number string, taking the LC_NUMERIC\nsettings into account:\n>>> import locale\n>>> locale.setlocale(locale.LC_NUMERIC, 'de_DE.UTF-8')\n'de_DE.UTF-8'\n>>> locale.delocalize('1.234,56')\n'1234.56'\n>>> locale.setlocale(locale.LC_NUMERIC, 'en_US.UTF-8')\n'en_US.UTF-8'\n>>> locale.delocalize('1,234.56')\n'1234.56'\n(Contributed by C\u00e9dric Krier in bpo-13918.)\nlogging\u00b6\nAll logging methods (Logger\nlog()\n,\nexception()\n, critical()\n,\ndebug()\n, etc.), now accept exception instances\nas an exc_info argument, in addition to boolean values and exception\ntuples:\n>>> import logging\n>>> try:\n... 1/0\n... except ZeroDivisionError as ex:\n... 
logging.error('exception', exc_info=ex)\nERROR:root:exception\n(Contributed by Yury Selivanov in bpo-20537.)\nThe handlers.HTTPHandler\nclass now\naccepts an optional ssl.SSLContext\ninstance to configure SSL\nsettings used in an HTTP connection.\n(Contributed by Alex Gaynor in bpo-22788.)\nThe handlers.QueueListener\nclass now\ntakes a respect_handler_level keyword argument which, if set to True\n,\nwill pass messages to handlers taking handler levels into account.\n(Contributed by Vinay Sajip.)\nlzma\u00b6\nThe LZMADecompressor.decompress()\nmethod now accepts an optional max_length argument to limit the maximum\nsize of decompressed data.\n(Contributed by Martin Panter in bpo-15955.)\nmath\u00b6\nTwo new constants have been added to the math\nmodule: inf\nand nan\n. (Contributed by Mark Dickinson in bpo-23185.)\nA new function isclose()\nprovides a way to test for approximate\nequality. (Contributed by Chris Barker and Tal Einat in bpo-24270.)\nA new gcd()\nfunction has been added. The fractions.gcd()\nfunction is now deprecated. (Contributed by Mark Dickinson and Serhiy\nStorchaka in bpo-22486.)\nmultiprocessing\u00b6\nsharedctypes.synchronized()\nobjects now support the context manager protocol.\n(Contributed by Charles-Fran\u00e7ois Natali in bpo-21565.)\noperator\u00b6\nattrgetter()\n, itemgetter()\n,\nand methodcaller()\nobjects now support pickling.\n(Contributed by Josh Rosenberg and Serhiy Storchaka in bpo-22955.)\nNew matmul()\nand imatmul()\nfunctions\nto perform matrix multiplication.\n(Contributed by Benjamin Peterson in bpo-21176.)\nos\u00b6\nThe new scandir()\nfunction returning an iterator of\nDirEntry\nobjects has been added. If possible, scandir()\nextracts file attributes while scanning a directory, removing the need to\nperform subsequent system calls to determine file type or attributes, which may\nsignificantly improve performance. 
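os.scandir() can be sketched against a throwaway directory:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, 'data.txt'), 'w').close()
    os.mkdir(os.path.join(d, 'sub'))
    # Each DirEntry carries cached type information, so is_file()/is_dir()
    # usually avoid an extra stat() call per entry.
    names = {entry.name: entry.is_file() for entry in os.scandir(d)}
print(sorted(names.items()))  # [('data.txt', True), ('sub', False)]
```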
(Contributed by Ben Hoyt with the help\nof Victor Stinner in bpo-22524.)\nOn Windows, a new\nstat_result.st_file_attributes\nattribute is now available. It corresponds to the dwFileAttributes\nmember\nof the BY_HANDLE_FILE_INFORMATION\nstructure returned by\nGetFileInformationByHandle()\n. (Contributed by Ben Hoyt in bpo-21719.)\nThe urandom()\nfunction now uses the getrandom()\nsyscall on Linux 3.17\nor newer, and getentropy()\non OpenBSD 5.6 and newer, removing the need to\nuse /dev/urandom\nand avoiding failures due to potential file descriptor\nexhaustion. (Contributed by Victor Stinner in bpo-22181.)\nNew get_blocking()\nand set_blocking()\nfunctions allow\ngetting and setting a file descriptor\u2019s blocking mode (O_NONBLOCK\n.)\n(Contributed by Victor Stinner in bpo-22054.)\nThe truncate()\nand ftruncate()\nfunctions are now supported\non Windows. (Contributed by Steve Dower in bpo-23668.)\nThere is a new os.path.commonpath()\nfunction returning the longest\ncommon sub-path of each passed pathname. Unlike the\nos.path.commonprefix()\nfunction, it always returns a valid\npath:\n>>> os.path.commonprefix(['/usr/lib', '/usr/local/lib'])\n'/usr/l'\n>>> os.path.commonpath(['/usr/lib', '/usr/local/lib'])\n'/usr'\n(Contributed by Rafik Draoui and Serhiy Storchaka in bpo-10395.)\npathlib\u00b6\nThe new Path.samefile()\nmethod can be used\nto check whether the path points to the same file as another path, which can\nbe either another Path\nobject, or a string:\n>>> import pathlib\n>>> p1 = pathlib.Path('/etc/hosts')\n>>> p2 = pathlib.Path('/etc/../etc/hosts')\n>>> p1.samefile(p2)\nTrue\n(Contributed by Vajrasky Kok and Antoine Pitrou in bpo-19775.)\nThe Path.mkdir()\nmethod now accepts a new optional\nexist_ok argument to match mkdir -p\nand os.makedirs()\nfunctionality. (Contributed by Berker Peksag in bpo-21539.)\nThere is a new Path.expanduser()\nmethod to\nexpand ~\nand ~user\nprefixes. 
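Path.expanduser() resolves against the user's home directory; a deterministic sketch that pins HOME to a hypothetical path (POSIX-style; Windows uses different environment variables):

```python
import os
import pathlib

# Pin HOME so the demo is deterministic; '/home/alice' is a hypothetical path.
os.environ['HOME'] = '/home/alice'
p = pathlib.Path('~/spam/eggs').expanduser()
print(p)  # /home/alice/spam/eggs on POSIX
```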
(Contributed by Serhiy Storchaka and\nClaudiu Popa in bpo-19776.)\nA new Path.home()\nclass method can be used to get\na Path\ninstance representing the user\u2019s home\ndirectory.\n(Contributed by Victor Salgado and Mayank Tripathi in bpo-19777.)\nNew Path.write_text()\n,\nPath.read_text()\n,\nPath.write_bytes()\n,\nPath.read_bytes()\nmethods to simplify\nread/write operations on files.\nThe following code snippet will create or rewrite existing file\n~/spam42\n:\n>>> import pathlib\n>>> p = pathlib.Path('~/spam42')\n>>> p.expanduser().write_text('ham')\n3\n(Contributed by Christopher Welborn in bpo-20218.)\npickle\u00b6\nNested objects, such as unbound methods or nested classes, can now be pickled using pickle protocols older than protocol version 4. Protocol version 4 already supports these cases. (Contributed by Serhiy Storchaka in bpo-23611.)\npoplib\u00b6\nA new POP3.utf8()\ncommand enables RFC 6856\n(Internationalized Email) support, if a POP server supports it.\n(Contributed by Milan OberKirch in bpo-21804.)\nre\u00b6\nReferences and conditional references to groups with fixed length are now allowed in lookbehind assertions:\n>>> import re\n>>> pat = re.compile(r'(a|b).(?<=\\1)c')\n>>> pat.match('aac')\n<_sre.SRE_Match object; span=(0, 3), match='aac'>\n>>> pat.match('bbc')\n<_sre.SRE_Match object; span=(0, 3), match='bbc'>\n(Contributed by Serhiy Storchaka in bpo-9179.)\nThe number of capturing groups in regular expressions is no longer limited to 100. (Contributed by Serhiy Storchaka in bpo-22437.)\nThe sub()\nand subn()\nfunctions now replace unmatched\ngroups with empty strings instead of raising an exception.\n(Contributed by Serhiy Storchaka in bpo-1519638.)\nThe re.error\nexceptions have new attributes,\nmsg\n, pattern\n,\npos\n, lineno\n,\nand colno\n, that provide better context\ninformation about the error:\n>>> re.compile(\"\"\"\n... (?x)\n... .++\n... 
\"\"\")\nTraceback (most recent call last):\n...\nsre_constants.error: multiple repeat at position 16 (line 3, column 7)\n(Contributed by Serhiy Storchaka in bpo-22578.)\nreadline\u00b6\nA new append_history_file()\nfunction can be used to append\nthe specified number of trailing elements in history to the given file.\n(Contributed by Bruno Cauet in bpo-22940.)\nselectors\u00b6\nThe new DevpollSelector\nsupports efficient\n/dev/poll\npolling on Solaris.\n(Contributed by Giampaolo Rodola\u2019 in bpo-18931.)\nshutil\u00b6\nThe move()\nfunction now accepts a copy_function argument,\nallowing, for example, the copy()\nfunction to be used instead of\nthe default copy2()\nif there is a need to ignore file metadata\nwhen moving.\n(Contributed by Claudiu Popa in bpo-19840.)\nThe make_archive()\nfunction now supports the xztar format.\n(Contributed by Serhiy Storchaka in bpo-5411.)\nsignal\u00b6\nOn Windows, the set_wakeup_fd()\nfunction now also supports\nsocket handles. (Contributed by Victor Stinner in bpo-22018.)\nVarious SIG*\nconstants in the signal\nmodule have been converted into\nEnums\n. This allows meaningful names to be printed\nduring debugging, instead of integer \u201cmagic numbers\u201d.\n(Contributed by Giampaolo Rodola\u2019 in bpo-21076.)\nsmtpd\u00b6\nBoth the SMTPServer\nand SMTPChannel\nclasses now\naccept a decode_data keyword argument to determine if the DATA\nportion of\nthe SMTP transaction is decoded using the \"utf-8\"\ncodec or is instead\nprovided to the\nSMTPServer.process_message()\nmethod as a byte string. The default is True\nfor backward compatibility\nreasons, but will change to False\nin Python 3.6. If decode_data is set\nto False\n, the process_message\nmethod must be prepared to accept keyword\narguments.\n(Contributed by Maciej Szulik in bpo-19662.)\nThe SMTPServer\nclass now advertises the 8BITMIME\nextension\n(RFC 6152) if decode_data has been set True\n. 
If the client\nspecifies BODY=8BITMIME\non the MAIL\ncommand, it is passed to\nSMTPServer.process_message()\nvia the mail_options keyword.\n(Contributed by Milan Oberkirch and R. David Murray in bpo-21795.)\nThe SMTPServer\nclass now also supports the SMTPUTF8\nextension (RFC 6531: Internationalized Email). If the client specified\nSMTPUTF8 BODY=8BITMIME\non the MAIL\ncommand, they are passed to\nSMTPServer.process_message()\nvia the mail_options keyword. It is the responsibility of the\nprocess_message\nmethod to correctly handle the SMTPUTF8\ndata.\n(Contributed by Milan Oberkirch in bpo-21725.)\nIt is now possible to provide, directly or via name resolution, IPv6\naddresses in the SMTPServer\nconstructor, and have it\nsuccessfully connect. (Contributed by Milan Oberkirch in bpo-14758.)\nsmtplib\u00b6\nA new SMTP.auth()\nmethod provides a convenient way to\nimplement custom authentication mechanisms. (Contributed by Milan\nOberkirch in bpo-15014.)\nThe SMTP.set_debuglevel()\nmethod now\naccepts an additional debuglevel (2), which enables timestamps in debug\nmessages. (Contributed by Gavin Chappell and Maciej Szulik in bpo-16914.)\nBoth the SMTP.sendmail()\nand\nSMTP.send_message()\nmethods now\nsupport RFC 6531 (SMTPUTF8).\n(Contributed by Milan Oberkirch and R. David Murray in bpo-22027.)\nsndhdr\u00b6\nThe what()\nand whathdr()\nfunctions now return\na namedtuple()\n. (Contributed by Claudiu Popa in\nbpo-18615.)\nsocket\u00b6\nFunctions with timeouts now use a monotonic clock, instead of a system clock. (Contributed by Victor Stinner in bpo-22043.)\nA new socket.sendfile()\nmethod allows\nsending a file over a socket by using the high-performance os.sendfile()\nfunction on UNIX, resulting in uploads being from 2 to 3 times faster than when\nusing plain socket.send()\n.\n(Contributed by Giampaolo Rodola\u2019 in bpo-17552.)\nThe socket.sendall()\nmethod no longer resets the\nsocket timeout every time bytes are received or sent. 
The socket timeout is\nnow the maximum total duration to send all data.\n(Contributed by Victor Stinner in bpo-23853.)\nThe backlog argument of the socket.listen()\nmethod is now optional. By default it is set to\nSOMAXCONN\nor to 128\n, whichever is less.\n(Contributed by Charles-Fran\u00e7ois Natali in bpo-21455.)\nssl\u00b6\nMemory BIO Support\u00b6\n(Contributed by Geert Jansen in bpo-21965.)\nThe new SSLObject\nclass has been added to provide SSL protocol\nsupport for cases when the network I/O capabilities of SSLSocket\nare not necessary or are suboptimal. SSLObject\nrepresents\nan SSL protocol instance, but does not implement any network I/O methods, and\ninstead provides a memory buffer interface. The new MemoryBIO\nclass can be used to pass data between Python and an SSL protocol instance.\nThe memory BIO SSL support is primarily intended to be used in frameworks\nimplementing asynchronous I/O for which SSLSocket\n\u2019s readiness\nmodel (\u201cselect/poll\u201d) is inefficient.\nA new SSLContext.wrap_bio()\nmethod can be used\nto create a new SSLObject\ninstance.\nApplication-Layer Protocol Negotiation Support\u00b6\n(Contributed by Benjamin Peterson in bpo-20188.)\nWhere OpenSSL support is present, the ssl\nmodule now implements\nthe Application-Layer Protocol Negotiation TLS extension as described\nin RFC 7301.\nThe new SSLContext.set_alpn_protocols()\ncan be used to specify which protocols a socket should advertise during\nthe TLS handshake.\nThe new\nSSLSocket.selected_alpn_protocol()\nreturns the protocol that was selected during the TLS handshake.\nThe HAS_ALPN\nflag indicates whether ALPN support is present.\nOther Changes\u00b6\nThere is a new SSLSocket.version()\nmethod to\nquery the actual protocol version in use.\n(Contributed by Antoine Pitrou in bpo-20421.)\nThe SSLSocket\nclass now implements\na SSLSocket.sendfile()\nmethod.\n(Contributed by Giampaolo Rodola\u2019 in bpo-17552.)\nThe SSLSocket.send()\nmethod now raises either\nthe 
ssl.SSLWantReadError\nor ssl.SSLWantWriteError\nexception on a\nnon-blocking socket if the operation would block. Previously, it would return\n0\n. (Contributed by Nikolaus Rath in bpo-20951.)\nThe cert_time_to_seconds()\nfunction now interprets the input time\nas UTC and not as local time, per RFC 5280. Additionally, the return\nvalue is always an int\n. (Contributed by Akira Li in bpo-19940.)\nNew SSLObject.shared_ciphers()\nand\nSSLSocket.shared_ciphers()\nmethods return\nthe list of ciphers sent by the client during the handshake.\n(Contributed by Benjamin Peterson in bpo-23186.)\nThe SSLSocket.do_handshake()\n,\nSSLSocket.read()\n,\nSSLSocket.shutdown()\n, and\nSSLSocket.write()\nmethods of the SSLSocket\nclass no longer reset the socket timeout every time bytes are received or sent.\nThe socket timeout is now the maximum total duration of the method.\n(Contributed by Victor Stinner in bpo-23853.)\nThe match_hostname()\nfunction now supports matching of IP addresses.\n(Contributed by Antoine Pitrou in bpo-23239.)\nsqlite3\u00b6\nThe Row\nclass now fully supports the sequence protocol,\nin particular reversed()\niteration and slice indexing.\n(Contributed by Claudiu Popa in bpo-10203; by Lucas Sinclair,\nJessica McKellar, and Serhiy Storchaka in bpo-13583.)\nsubprocess\u00b6\nThe new run()\nfunction has been added.\nIt runs the specified command and returns a\nCompletedProcess\nobject, which describes a finished\nprocess. 
The new API is more consistent and is the recommended approach\nto invoking subprocesses in Python code that does not need to maintain\ncompatibility with earlier Python versions.\n(Contributed by Thomas Kluyver in bpo-23342.)\nExamples:\n>>> subprocess.run([\"ls\", \"-l\"]) # doesn't capture output\nCompletedProcess(args=['ls', '-l'], returncode=0)\n>>> subprocess.run(\"exit 1\", shell=True, check=True)\nTraceback (most recent call last):\n...\nsubprocess.CalledProcessError: Command 'exit 1' returned non-zero exit status 1\n>>> subprocess.run([\"ls\", \"-l\", \"/dev/null\"], stdout=subprocess.PIPE)\nCompletedProcess(args=['ls', '-l', '/dev/null'], returncode=0,\nstdout=b'crw-rw-rw- 1 root root 1, 3 Jan 23 16:23 /dev/null\\n')\nsys\u00b6\nA new set_coroutine_wrapper()\nfunction allows setting a global\nhook that will be called whenever a coroutine object\nis created by an async def\nfunction. A corresponding\nget_coroutine_wrapper()\ncan be used to obtain a currently set\nwrapper. Both functions are provisional,\nand are intended for debugging purposes only. (Contributed by Yury Selivanov\nin bpo-24017.)\nA new is_finalizing()\nfunction can be used to check if the Python\ninterpreter is shutting down.\n(Contributed by Antoine Pitrou in bpo-22696.)\nsysconfig\u00b6\nThe name of the user scripts directory on Windows now includes the first two components of the Python version. (Contributed by Paul Moore in bpo-23437.)\ntarfile\u00b6\nThe mode argument of the open()\nfunction now accepts \"x\"\nto request exclusive creation. (Contributed by Berker Peksag in bpo-21717.)\nThe TarFile.extractall()\nand\nTarFile.extract()\nmethods now take a keyword\nargument numeric_owner. 
If set to True\n, the extracted files and\ndirectories will be owned by the numeric uid\nand gid\nfrom the tarfile.\nIf set to False\n(the default, and the behavior in versions prior to 3.5),\nthey will be owned by the named user and group in the tarfile.\n(Contributed by Michael Vogt and Eric Smith in bpo-23193.)\nThe TarFile.list()\nnow accepts an optional\nmembers keyword argument that can be set to a subset of the list returned\nby TarFile.getmembers()\n.\n(Contributed by Serhiy Storchaka in bpo-21549.)\nthreading\u00b6\nBoth the Lock.acquire()\nand\nRLock.acquire()\nmethods\nnow use a monotonic clock for timeout management.\n(Contributed by Victor Stinner in bpo-22043.)\ntime\u00b6\nThe monotonic()\nfunction is now always available.\n(Contributed by Victor Stinner in bpo-22043.)\ntimeit\u00b6\nA new command line option -u\nor --unit=U\ncan be used to specify the time\nunit for the timer output. Supported options are usec\n, msec\n,\nor sec\n. (Contributed by Julian Gindi in bpo-18983.)\nThe timeit()\nfunction has a new globals parameter for\nspecifying the namespace in which the code will be running.\n(Contributed by Ben Roberts in bpo-2527.)\ntkinter\u00b6\nThe tkinter._fix\nmodule used for setting up the Tcl/Tk environment\non Windows has been replaced by a private function in the _tkinter\nmodule which makes no permanent changes to environment variables.\n(Contributed by Zachary Ware in bpo-20035.)\ntraceback\u00b6\nNew walk_stack()\nand walk_tb()\nfunctions to conveniently traverse frame and\ntraceback objects.\n(Contributed by Robert Collins in bpo-17911.)\nNew lightweight classes: TracebackException\n,\nStackSummary\n, and FrameSummary\n.\n(Contributed by Robert Collins in bpo-17911.)\nBoth the print_tb()\nand print_stack()\nfunctions\nnow support negative values for the limit argument.\n(Contributed by Dmitry Kazakov in bpo-22619.)\ntypes\u00b6\nA new coroutine()\nfunction to transform\ngenerator and\ngenerator-like\nobjects 
into\nawaitables.\n(Contributed by Yury Selivanov in bpo-24017.)\nA new type called CoroutineType\n, which is used for\ncoroutine objects created by async def\nfunctions.\n(Contributed by Yury Selivanov in bpo-24400.)\nunicodedata\u00b6\nThe unicodedata\nmodule now uses data from Unicode 8.0.0.\nunittest\u00b6\nThe TestLoader.loadTestsFromModule()\nmethod now accepts a keyword-only argument pattern which is passed to\nload_tests\nas the third argument. Found packages are now checked for\nload_tests\nregardless of whether their path matches pattern, because it\nis impossible for a package name to match the default pattern.\n(Contributed by Robert Collins and Barry A. Warsaw in bpo-16662.)\nUnittest discovery errors are now exposed in the\nTestLoader.errors\nattribute of the\nTestLoader\ninstance.\n(Contributed by Robert Collins in bpo-19746.)\nA new command line option --locals\nto show local variables in\ntracebacks. (Contributed by Robert Collins in bpo-22936.)\nunittest.mock\u00b6\nThe Mock\nclass has the following improvements:\nThe class constructor has a new unsafe parameter, which causes mock objects to raise\nAttributeError\non attribute names starting with \"assert\". (Contributed by Kushal Das in bpo-21238.)\nA new\nMock.assert_not_called()\nmethod to check that the mock object was not called. 
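assert_not_called() fails only once the mock has been invoked; a sketch:

```python
from unittest import mock

m = mock.Mock()
m.assert_not_called()        # passes: no calls have been made yet

m('payload')
try:
    m.assert_not_called()    # now fails, because the mock was called
except AssertionError as exc:
    print('raised:', type(exc).__name__)
```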
(Contributed by Kushal Das in bpo-21262.)\nThe MagicMock\nclass now supports\n__truediv__()\n, __divmod__()\nand __matmul__()\noperators.\n(Contributed by Johannes Baiter in bpo-20968, and H\u00e5kan L\u00f6vdahl\nin bpo-23581 and bpo-23568.)\nIt is no longer necessary to explicitly pass create=True\nto the\npatch()\nfunction when patching builtin names.\n(Contributed by Kushal Das in bpo-17660.)\nurllib\u00b6\nA new\nrequest.HTTPPasswordMgrWithPriorAuth\nclass allows HTTP Basic Authentication credentials to be managed so as to\neliminate unnecessary 401\nresponse handling, or to unconditionally send\ncredentials on the first request in order to communicate with servers that\nreturn a 404\nresponse instead of a 401\nif the Authorization\nheader\nis not sent. (Contributed by Matej Cepl in bpo-19494 and Akshit Khurana in\nbpo-7159.)\nA new quote_via argument for the\nparse.urlencode()\nfunction provides a way to control the encoding of query parts if needed.\n(Contributed by Samwyse and Arnon Yaari in bpo-13866.)\nThe request.urlopen()\nfunction accepts an\nssl.SSLContext\nobject as a context argument, which will be used for\nthe HTTPS connection. 
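The prior-auth password manager described above might be wired up as follows; the host, realm, and credentials are placeholders, and no request is actually sent:

```python
import urllib.request

mgr = urllib.request.HTTPPasswordMgrWithPriorAuth()
# is_authenticated=True marks the URI so credentials are sent preemptively,
# instead of waiting for a 401 challenge from the server.
mgr.add_password(None, "https://example.com/api/", "user", "secret",
                 is_authenticated=True)
opener = urllib.request.build_opener(
    urllib.request.HTTPBasicAuthHandler(mgr))
# opener.open("https://example.com/api/") would now include the
# Authorization header on the very first request.
print(mgr.is_authenticated("https://example.com/api/"))
```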
(Contributed by Alex Gaynor in bpo-22366.)\nThe parse.urljoin()\nwas updated to use the\nRFC 3986 semantics for the resolution of relative URLs, rather than\nRFC 1808 and RFC 2396.\n(Contributed by Demian Brecht and Senthil Kumaran in bpo-22118.)\nwsgiref\u00b6\nThe headers argument of the headers.Headers\nclass constructor is now optional.\n(Contributed by Pablo Torres Navarrete and SilentGhost in bpo-5800.)\nxmlrpc\u00b6\nThe client.ServerProxy\nclass now supports\nthe context manager protocol.\n(Contributed by Claudiu Popa in bpo-20627.)\nThe client.ServerProxy\nconstructor now accepts\nan optional ssl.SSLContext\ninstance.\n(Contributed by Alex Gaynor in bpo-22960.)\nxml.sax\u00b6\nSAX parsers now support a character stream of the\nxmlreader.InputSource\nobject.\n(Contributed by Serhiy Storchaka in bpo-2175.)\nparseString()\nnow accepts a str\ninstance.\n(Contributed by Serhiy Storchaka in bpo-10590.)\nzipfile\u00b6\nZIP output can now be written to unseekable streams. (Contributed by Serhiy Storchaka in bpo-23252.)\nThe mode argument of ZipFile.open()\nmethod now\naccepts \"x\"\nto request exclusive creation.\n(Contributed by Serhiy Storchaka in bpo-21717.)\nOther module-level changes\u00b6\nMany functions in the mmap\n, ossaudiodev\n, socket\n,\nssl\n, and codecs\nmodules now accept writable\nbytes-like objects.\n(Contributed by Serhiy Storchaka in bpo-23001.)\nOptimizations\u00b6\nThe os.walk()\nfunction has been sped up by 3 to 5 times on POSIX systems,\nand by 7 to 20 times on Windows. This was done using the new os.scandir()\nfunction, which exposes file information from the underlying readdir\nor\nFindFirstFile\n/FindNextFile\nsystem calls. (Contributed by\nBen Hoyt with help from Victor Stinner in bpo-23605.)\nConstruction of bytes(int)\n(filled by zero bytes) is faster and uses less\nmemory for large objects. 
calloc()\nis used instead of malloc()\nto\nallocate memory for these objects.\n(Contributed by Victor Stinner in bpo-21233.)\nSome operations on ipaddress\nIPv4Network\nand\nIPv6Network\nhave been massively sped up, such as\nsubnets()\n, supernet()\n,\nsummarize_address_range()\n, collapse_addresses()\n.\nThe speed up can range from 3 to 15 times.\n(Contributed by Antoine Pitrou, Michel Albert, and Markus in\nbpo-21486, bpo-21487, bpo-20826, bpo-23266.)\nPickling of ipaddress\nobjects was optimized to produce significantly\nsmaller output. (Contributed by Serhiy Storchaka in bpo-23133.)\nMany operations on io.BytesIO\nare now 50% to 100% faster.\n(Contributed by Serhiy Storchaka in bpo-15381 and David Wilson in\nbpo-22003.)\nThe marshal.dumps()\nfunction is now faster: 65\u201385% with versions 3\nand 4, 20\u201325% with versions 0 to 2 on typical data, and up to 5 times in\nbest cases.\n(Contributed by Serhiy Storchaka in bpo-20416 and bpo-23344.)\nThe UTF-32 encoder is now 3 to 7 times faster. (Contributed by Serhiy Storchaka in bpo-15027.)\nRegular expressions are now parsed up to 10% faster. (Contributed by Serhiy Storchaka in bpo-19380.)\nThe json.dumps()\nfunction was optimized to run with\nensure_ascii=False\nas fast as with ensure_ascii=True\n.\n(Contributed by Naoki Inada in bpo-23206.)\nThe PyObject_IsInstance()\nand PyObject_IsSubclass()\nfunctions have been sped up in the common case that the second argument\nhas type\nas its metaclass.\n(Contributed by Georg Brandl in bpo-22540.)\nMethod caching was slightly improved, yielding up to 5% performance improvement in some benchmarks. (Contributed by Antoine Pitrou in bpo-22847.)\nObjects from the random\nmodule now use 50% less memory on 64-bit\nbuilds. 
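The ipaddress operations mentioned above include subnets() and supernet(); a minimal illustration:

```python
import ipaddress

# Splitting a /24 into four /26 blocks -- one of the operations
# that was sped up in Python 3.5.
net = ipaddress.IPv4Network("192.0.2.0/24")
subnets = list(net.subnets(prefixlen_diff=2))
print(len(subnets))    # 4
print(subnets[0])      # 192.0.2.0/26
print(net.supernet())  # 192.0.2.0/23
```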
(Contributed by Serhiy Storchaka in bpo-23488.)\nThe property()\ngetter calls are up to 25% faster.\n(Contributed by Joe Jevnik in bpo-23910.)\nInstantiation of fractions.Fraction\nis now up to 30% faster.\n(Contributed by Stefan Behnel in bpo-22464.)\nString methods find()\n, rfind()\n, split()\n,\npartition()\nand the in\nstring operator are now significantly\nfaster for searching 1-character substrings.\n(Contributed by Serhiy Storchaka in bpo-23573.)\nBuild and C API Changes\u00b6\nNew calloc\nfunctions were added:\n(Contributed by Victor Stinner in bpo-21233.)\nNew encoding/decoding helper functions:\nPy_DecodeLocale()\n(replaced_Py_char2wchar()\n),Py_EncodeLocale()\n(replaced_Py_wchar2char()\n).\n(Contributed by Victor Stinner in bpo-18395.)\nA new PyCodec_NameReplaceErrors()\nfunction to replace the unicode\nencode error with \\N{...}\nescapes.\n(Contributed by Serhiy Storchaka in bpo-19676.)\nA new PyErr_FormatV()\nfunction similar to PyErr_Format()\n,\nbut accepts a va_list\nargument.\n(Contributed by Antoine Pitrou in bpo-18711.)\nA new PyExc_RecursionError\nexception.\n(Contributed by Georg Brandl in bpo-19235.)\nNew PyModule_FromDefAndSpec()\n, PyModule_FromDefAndSpec2()\n,\nand PyModule_ExecDef()\nfunctions introduced by PEP 489 \u2013\nmulti-phase extension module initialization.\n(Contributed by Petr Viktorin in bpo-24268.)\nNew PyNumber_MatrixMultiply()\nand\nPyNumber_InPlaceMatrixMultiply()\nfunctions to perform matrix\nmultiplication.\n(Contributed by Benjamin Peterson in bpo-21176. 
See also PEP 465\nfor details.)\nThe PyTypeObject.tp_finalize\nslot is now part of the stable ABI.\nWindows builds now require Microsoft Visual C++ 14.0, which is available as part of Visual Studio 2015.\nExtension modules now include a platform information tag in their filename on some platforms (the tag is optional, and CPython will import extensions without it, although if the tag is present and mismatched, the extension won\u2019t be loaded):\nOn Linux, extension module filenames end with\n.cpython-<major><minor>m-<architecture>-<os>.so\n:\n<major> is the major number of the Python version; for Python 3.5 this is 3.\n<minor> is the minor number of the Python version; for Python 3.5 this is 5.\n<architecture> is the hardware architecture the extension module was built to run on. It\u2019s most commonly either i386\nfor 32-bit Intel platforms or x86_64\nfor 64-bit Intel (and AMD) platforms.\n<os> is always linux-gnu\n, except for extensions built to talk to the 32-bit ABI on 64-bit platforms, in which case it is linux-gnu32\n(and <architecture> will be x86_64\n).\nOn Windows, extension module filenames end with\n.cp<major><minor>-<platform>.pyd\n:\n<major> is the major number of the Python version; for Python 3.5 this is 3.\n<minor> is the minor number of the Python version; for Python 3.5 this is 5.\n<platform> is the platform the extension module was built for, either win32\nfor Win32, win_amd64\nfor Win64, win_ia64\nfor Windows Itanium 64, and win_arm\nfor Windows on ARM.\nIf built in debug mode, <debug> will be _d\n, otherwise it will be blank.\nOn OS X platforms, extension module filenames now end with\n-darwin.so\n.\nOn all other platforms, extension module filenames are the same as they were with Python 3.4.\nDeprecated\u00b6\nNew Keywords\u00b6\nasync\nand await\nare not recommended to be used as variable, class,\nfunction or module names. 
Introduced by PEP 492 in Python 3.5, they will\nbecome proper keywords in Python 3.7.\nDeprecated Python Behavior\u00b6\nRaising the StopIteration\nexception inside a generator will now generate a silent\nPendingDeprecationWarning\n, which will become a non-silent deprecation\nwarning in Python 3.6 and will trigger a RuntimeError\nin Python 3.7.\nSee PEP 479: Change StopIteration handling inside generators\nfor details.\nUnsupported Operating Systems\u00b6\nWindows XP is no longer supported by Microsoft, thus, per PEP 11, CPython 3.5 is no longer officially supported on this OS.\nDeprecated Python modules, functions and methods\u00b6\nThe formatter\nmodule has now graduated to full deprecation and is still\nslated for removal in Python 3.6.\nThe asyncio.async()\nfunction is deprecated in favor of\nensure_future()\n.\nThe smtpd\nmodule has in the past always decoded the DATA portion of\nemail messages using the utf-8\ncodec. This can now be controlled by the\nnew decode_data keyword to SMTPServer\n. The default value is\nTrue\n, but this default is deprecated. Specify the decode_data keyword\nwith an appropriate value to avoid the deprecation warning.\nDirectly assigning values to the key\n,\nvalue\nand\ncoded_value\nof http.cookies.Morsel\nobjects is deprecated. Use the set()\nmethod\ninstead. In addition, the undocumented LegalChars parameter of\nset()\nis deprecated, and is now ignored.\nPassing a format string as keyword argument format_string to the\nformat()\nmethod of the string.Formatter\nclass has been deprecated.\n(Contributed by Serhiy Storchaka in bpo-23671.)\nThe platform.dist()\nand platform.linux_distribution()\nfunctions\nare now deprecated. Linux distributions use too many different ways of\ndescribing themselves, so the functionality is left to a package.\n(Contributed by Vajrasky Kok and Berker Peksag in bpo-1322.)\nThe previously undocumented from_function\nand from_builtin\nmethods of\ninspect.Signature\nare deprecated. 
Use the new\nSignature.from_callable()\nmethod instead. (Contributed by Yury Selivanov in bpo-24248.)\nThe inspect.getargspec()\nfunction is deprecated and scheduled to be\nremoved in Python 3.6. (See bpo-20438 for details.)\nThe inspect\ngetfullargspec()\n,\ngetcallargs()\n, and formatargspec()\nfunctions are\ndeprecated in favor of the inspect.signature()\nAPI. (Contributed by Yury\nSelivanov in bpo-20438.)\ngetargvalues()\nand formatargvalues()\nfunctions\nwere inadvertently marked as deprecated with the release of Python 3.5.0.\nUse of re.LOCALE\nflag with str patterns or re.ASCII\nis now\ndeprecated. (Contributed by Serhiy Storchaka in bpo-22407.)\nUse of unrecognized special sequences consisting of '\\'\nand an ASCII letter\nin regular expression patterns and replacement patterns now raises a\ndeprecation warning and will be forbidden in Python 3.6.\n(Contributed by Serhiy Storchaka in bpo-23622.)\nThe undocumented and unofficial use_load_tests default argument of the\nunittest.TestLoader.loadTestsFromModule()\nmethod now is\ndeprecated and ignored.\n(Contributed by Robert Collins and Barry A. Warsaw in bpo-16662.)\nRemoved\u00b6\nAPI and Feature Removals\u00b6\nThe following obsolete and previously deprecated APIs and features have been removed:\nThe\n__version__\nattribute has been dropped from the email package. The email code hasn\u2019t been shipped separately from the stdlib for a long time, and the__version__\nstring was not updated in the last few releases.The internal\nNetrc\nclass in theftplib\nmodule was deprecated in 3.4, and has now been removed. (Contributed by Matt Chaput in bpo-6623.)The concept of\n.pyo\nfiles has been removed.The JoinableQueue class in the provisional\nasyncio\nmodule was deprecated in 3.4.4 and is now removed. (Contributed by A. 
Jesse Jiryu Davis in bpo-23464.)\nPorting to Python 3.5\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code.\nChanges in Python behavior\u00b6\nDue to an oversight, earlier Python versions erroneously accepted the following syntax:\nf(1 for x in [1], *args) f(1 for x in [1], **kwargs)\nPython 3.5 now correctly raises a\nSyntaxError\n, as generator expressions must be put in parentheses if not a sole argument to a function.\nChanges in the Python API\u00b6\nPEP 475: System calls are now retried when interrupted by a signal instead of raising\nInterruptedError\nif the Python signal handler does not raise an exception.Before Python 3.5, a\ndatetime.time\nobject was considered to be false if it represented midnight in UTC. This behavior was considered obscure and error-prone and has been removed in Python 3.5. See bpo-13936 for full details.The\nssl.SSLSocket.send()\nmethod now raises eitherssl.SSLWantReadError\norssl.SSLWantWriteError\non a non-blocking socket if the operation would block. Previously, it would return0\n. (Contributed by Nikolaus Rath in bpo-20951.)The\n__name__\nattribute of generators is now set from the function name, instead of being set from the code name. Usegen.gi_code.co_name\nto retrieve the code name. Generators also have a new__qualname__\nattribute, the qualified name, which is now used for the representation of a generator (repr(gen)\n). (Contributed by Victor Stinner in bpo-21205.)The deprecated \u201cstrict\u201d mode and argument of\nHTMLParser\n,HTMLParser.error()\n, and theHTMLParserError\nexception have been removed. (Contributed by Ezio Melotti in bpo-15114.) The convert_charrefs argument ofHTMLParser\nis nowTrue\nby default. 
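The generator naming change described above is easy to observe directly (a minimal illustration):

```python
def gen():
    yield 1

g = gen()
print(g.__name__)       # 'gen', taken from the function name since 3.5
print(g.__qualname__)   # 'gen' at module scope
print(repr(g))          # e.g. <generator object gen at 0x...>
```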
(Contributed by Berker Peksag in bpo-21047.)Although it is not formally part of the API, it is worth noting for porting purposes (ie: fixing tests) that error messages that were previously of the form \u201c\u2018sometype\u2019 does not support the buffer protocol\u201d are now of the form \u201ca bytes-like object is required, not \u2018sometype\u2019\u201d. (Contributed by Ezio Melotti in bpo-16518.)\nIf the current directory is set to a directory that no longer exists then\nFileNotFoundError\nwill no longer be raised and insteadfind_spec()\nwill returnNone\nwithout cachingNone\ninsys.path_importer_cache\n, which is different than the typical case (bpo-22834).HTTP status code and messages from\nhttp.client\nandhttp.server\nwere refactored into a commonHTTPStatus\nenum. The values inhttp.client\nandhttp.server\nremain available for backwards compatibility. (Contributed by Demian Brecht in bpo-21793.)When an import loader defines\nexec_module()\nit is now expected to also definecreate_module()\n(raises aDeprecationWarning\nnow, will be an error in Python 3.6). If the loader inherits fromimportlib.abc.Loader\nthen there is nothing to do, else simply definecreate_module()\nto returnNone\n. (Contributed by Brett Cannon in bpo-23014.)The\nre.split()\nfunction always ignored empty pattern matches, so the\"x*\"\npattern worked the same as\"x+\"\n, and the\"\\b\"\npattern never worked. Nowre.split()\nraises a warning if the pattern could match an empty string. For compatibility, use patterns that never match an empty string (e.g.\"x+\"\ninstead of\"x*\"\n). Patterns that could only match an empty string (such as\"\\b\"\n) now raise an error. 
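Following the compatibility advice above, a pattern that can never match an empty string behaves identically across versions:

```python
import re

# "[,;]+" (like "x+") never matches an empty string, so re.split()
# accepts it without warnings on any Python version:
print(re.split("[,;]+", "a,b;;c"))  # ['a', 'b', 'c']
```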
(Contributed by Serhiy Storchaka in bpo-22818.)The\nhttp.cookies.Morsel\ndict-like interface has been made self consistent: morsel comparison now takes thekey\nandvalue\ninto account,copy()\nnow results in aMorsel\ninstance rather than adict\n, andupdate()\nwill now raise an exception if any of the keys in the update dictionary are invalid. In addition, the undocumented LegalChars parameter ofset()\nis deprecated and is now ignored. (Contributed by Demian Brecht in bpo-2211.)PEP 488 has removed\n.pyo\nfiles from Python and introduced the optionalopt-\ntag in.pyc\nfile names. Theimportlib.util.cache_from_source()\nhas gained an optimization parameter to help control theopt-\ntag. Because of this, the debug_override parameter of the function is now deprecated..pyo\nfiles are also no longer supported as a file argument to the Python interpreter and thus serve no purpose when distributed on their own (i.e. sourceless code distribution). Due to the fact that the magic number for bytecode has changed in Python 3.5, all old.pyo\nfiles from previous versions of Python are invalid regardless of this PEP.The\nsocket\nmodule now exports theCAN_RAW_FD_FRAMES\nconstant on linux 3.6 and greater.The\nssl.cert_time_to_seconds()\nfunction now interprets the input time as UTC and not as local time, per RFC 5280. Additionally, the return value is always anint\n. (Contributed by Akira Li in bpo-19940.)The\npygettext.py\nTool now uses the standard +NNNN format for timezones in the POT-Creation-Date header.The\nsmtplib\nmodule now usessys.stderr\ninstead of the previous module-levelstderr\nvariable for debug output. If your (test) program depends on patching the module-level variable to capture the debug output, you will need to update it to capture sys.stderr instead.The\nstr.startswith()\nandstr.endswith()\nmethods no longer returnTrue\nwhen finding the empty string and the indexes are completely out of range. 
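The str.startswith() change above can be checked directly:

```python
s = "hello"
print(s.startswith("", 3))   # True: empty prefix, start index within range
print(s.startswith("", 10))  # False since 3.5: start index is out of range
```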
(Contributed by Serhiy Storchaka in bpo-24284.)The\ninspect.getdoc()\nfunction now returns documentation strings inherited from base classes. Documentation strings no longer need to be duplicated if the inherited documentation is appropriate. To suppress an inherited string, an empty string must be specified (or the documentation may be filled in). This change affects the output of thepydoc\nmodule and thehelp()\nfunction. (Contributed by Serhiy Storchaka in bpo-15582.)Nested\nfunctools.partial()\ncalls are now flattened. If you were relying on the previous behavior, you can now either add an attribute to afunctools.partial()\nobject or you can create a subclass offunctools.partial()\n. (Contributed by Alexander Belopolsky in bpo-7830.)\nChanges in the C API\u00b6\nThe undocumented\nformat\nmember of the (non-public)PyMemoryViewObject\nstructure has been removed. All extensions relying on the relevant parts inmemoryobject.h\nmust be rebuilt.The\nPyMemAllocator\nstructure was renamed toPyMemAllocatorEx\nand a newcalloc\nfield was added.Removed non-documented macro\nPyObject_REPR()\nwhich leaked references. Use format character%R\ninPyUnicode_FromFormat()\n-like functions to format therepr()\nof the object. (Contributed by Serhiy Storchaka in bpo-22453.)Because the lack of the\n__module__\nattribute breaks pickling and introspection, a deprecation warning is now raised for builtin types without the__module__\nattribute. This will be anAttributeError\nin the future. (Contributed by Serhiy Storchaka in bpo-20204.)As part of the PEP 492 implementation, the\ntp_reserved\nslot ofPyTypeObject\nwas replaced with atp_as_async\nslot. 
Refer to Coroutine Objects for new types, structures and functions.\nNotable changes in Python 3.5.4\u00b6\nNew make regen-all\nbuild target\u00b6\nTo simplify cross-compilation, and to ensure that CPython can reliably be compiled without requiring an existing version of Python to already be available, the autotools-based build system no longer attempts to implicitly recompile generated files based on file modification times.\nInstead, a new make regen-all\ncommand has been added to force regeneration\nof these files when desired (e.g. after an initial version of Python has\nalready been built based on the pregenerated versions).\nMore selective regeneration targets are also defined - see Makefile.pre.in for details.\n(Contributed by Victor Stinner in bpo-23404.)\nAdded in version 3.5.4.\nRemoval of make touch\nbuild target\u00b6\nThe make touch\nbuild target previously used to request implicit regeneration\nof generated files by updating their modification times has been removed.\nIt has been replaced by the new make regen-all\ntarget.\n(Contributed by Victor Stinner in bpo-23404.)\nChanged in version 3.5.4.", "code_snippets": [" ", "\n", " ", " ", "\n", "\n\n", " ", "\n ", " ", " ", " ", " ", " ", "\n\n ", "\n ", "\n ", " ", " ", "\n ", "\n ", " ", "\n ", "\n\n ", " ", " ", " ", " ", "\n ", " ", "\n\n ", "\n\n", " ", " ", "\n", "\n ", "\n", "\n ", "\n", "\n\n", " ", " ", "\n ", "\n ", " ", " ", "\n ", "\n ", " ", "\n ", "\n\n", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", "\n ", "\n", "\n ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", " ", "\n", "\n\n", " ", " ", "\n", "\n", "\n\n", " ", " ", "\n", "\n", "\n", "\n", "\n\n", " ", " ", "\n", "\n", " ", " ", " ", " ", "\n", "\n\n", 
" ", " ", " ", "\n", " ", " ", " ", " ", "\n", "\n\n", " ", " ", " ", " ", " ", " ", " ", "\n", "\n", " ", "\n", "\n\n", " ", "\n", "\n\n", " ", " ", " ", " ", "\n", "\n\n", " ", " ", " ", "\n", "\n", " ", " ", "\n", "\n\n", " ", " ", " ", "\n", "\n", " ", " ", "\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n\n", " ", " ", "\n", "\n", " ", " ", " ", "\n ", " ", " ", " ", "\n", " ", " ", " ", "\n ", " ", " ", " ", " ", "\n ", "\n", "\n", " ", "\n ", "\n ", "\n ", "\n ", " ", "\n ", "\n", " ", "\n\n", "\n", " ", "\n", " ", "\n", "\n", "\n", "\n File ", ", line ", ", in ", "\n", "\n\n", "\n\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", " ", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", " ", " ", "\n", " ", "\n", " ", "\n", "\n", "\n", "\n", " ", " ", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", "\n", "\n", " ", " ", " ", "\n", "\n", "\n", " ", " ", "\n", " ", " ", " ", "\n ", "\n ", " ", "\n ", " ", " ", "\n ", " ", "\n ", "\n", " ", " ", " ", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", " ", "\n", "\n", "\n", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", " ", "\n", " ", " ", " ", "\n", " ", " ", "\n", "\n", " ", "\n", "\n\n", " ", "\n", "\n", "\n", " ", " ", "\n", " ", " ", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", ": ", "\n", " ", " ", "\n", "\n\n", " ", " ", "\n", "\n", "\n", ": ", "\n\n", " ", " ", " ", "\n", "\n", "\n", " ", " ", " ", 
" ", " ", "\n", " ", " ", " ", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 18296} +{"url": "https://docs.python.org/3/faq/design.html", "title": null, "content": "Design and History FAQ\u00b6\nWhy does Python use indentation for grouping of statements?\u00b6\nGuido van Rossum believes that using indentation for grouping is extremely elegant and contributes a lot to the clarity of the average Python program. Most people learn to love this feature after a while.\nSince there are no begin/end brackets there cannot be a disagreement between grouping perceived by the parser and the human reader. Occasionally C programmers will encounter a fragment of code like this:\nif (x <= y)\nx++;\ny--;\nz++;\nOnly the x++\nstatement is executed if the condition is true, but the\nindentation leads many to believe otherwise. Even experienced C programmers will\nsometimes stare at it a long time wondering as to why y\nis being decremented even\nfor x > y\n.\nBecause there are no begin/end brackets, Python is much less prone to coding-style conflicts. In C there are many different ways to place the braces. After becoming used to reading and writing code using a particular style, it is normal to feel somewhat uneasy when reading (or being required to write) in a different one.\nMany coding styles place begin/end brackets on a line by themselves. This makes programs considerably longer and wastes valuable screen space, making it harder to get a good overview of a program. Ideally, a function should fit on one screen (say, 20\u201330 lines). 20 lines of Python can do a lot more work than 20 lines of C. 
This is not solely due to the lack of begin/end brackets \u2013 the lack of declarations and the high-level data types are also responsible \u2013 but the indentation-based syntax certainly helps.\nWhy am I getting strange results with simple arithmetic operations?\u00b6\nSee the next question.\nWhy are floating-point calculations so inaccurate?\u00b6\nUsers are often surprised by results like this:\n>>> 1.2 - 1.0\n0.19999999999999996\nand think it is a bug in Python. It\u2019s not. This has little to do with Python, and much more to do with how the underlying platform handles floating-point numbers.\nThe float\ntype in CPython uses a C double\nfor storage. A\nfloat\nobject\u2019s value is stored in binary floating-point with a fixed\nprecision (typically 53 bits) and Python uses C operations, which in turn rely\non the hardware implementation in the processor, to perform floating-point\noperations. This means that as far as floating-point operations are concerned,\nPython behaves like many popular languages including C and Java.\nMany numbers that can be written easily in decimal notation cannot be expressed exactly in binary floating point. For example, after:\n>>> x = 1.2\nthe value stored for x\nis a (very good) approximation to the decimal value\n1.2\n, but is not exactly equal to it. On a typical machine, the actual\nstored value is:\n1.0011001100110011001100110011001100110011001100110011 (binary)\nwhich is exactly:\n1.1999999999999999555910790149937383830547332763671875 (decimal)\nThe typical precision of 53 bits provides Python floats with 15\u201316 decimal digits of accuracy.\nFor a fuller explanation, please see the floating-point arithmetic chapter in the Python tutorial.\nWhy are Python strings immutable?\u00b6\nThere are several advantages.\nOne is performance: knowing that a string is immutable means we can allocate space for it at creation time, and the storage requirements are fixed and unchanging. 
This is also one of the reasons for the distinction between tuples and lists.\nAnother advantage is that strings in Python are considered as \u201celemental\u201d as numbers. No amount of activity will change the value 8 to anything else, and in Python, no amount of activity will change the string \u201ceight\u201d to anything else.\nWhy must \u2018self\u2019 be used explicitly in method definitions and calls?\u00b6\nThe idea was borrowed from Modula-3. It turns out to be very useful, for a variety of reasons.\nFirst, it\u2019s more obvious that you are using a method or instance attribute\ninstead of a local variable. Reading self.x\nor self.meth()\nmakes it\nabsolutely clear that an instance variable or method is used even if you don\u2019t\nknow the class definition by heart. In C++, you can sort of tell by the lack of\na local variable declaration (assuming globals are rare or easily recognizable)\n\u2013 but in Python, there are no local variable declarations, so you\u2019d have to\nlook up the class definition to be sure. Some C++ and Java coding standards\ncall for instance attributes to have an m_\nprefix, so this explicitness is\nstill useful in those languages, too.\nSecond, it means that no special syntax is necessary if you want to explicitly\nreference or call the method from a particular class. In C++, if you want to\nuse a method from a base class which is overridden in a derived class, you have\nto use the ::\noperator \u2013 in Python you can write\nbaseclass.methodname(self, <argument list>)\n. This is particularly useful\nfor __init__()\nmethods, and in general in cases where a derived class\nmethod wants to extend the base class method of the same name and thus has to\ncall the base class method somehow.\nFinally, for instance variables it solves a syntactic problem with assignment:\nsince local variables in Python are (by definition!) 
those variables to which a\nvalue is assigned in a function body (and that aren\u2019t explicitly declared\nglobal), there has to be some way to tell the interpreter that an assignment was\nmeant to assign to an instance variable instead of to a local variable, and it\nshould preferably be syntactic (for efficiency reasons). C++ does this through\ndeclarations, but Python doesn\u2019t have declarations and it would be a pity having\nto introduce them just for this purpose. Using the explicit self.var\nsolves\nthis nicely. Similarly, for using instance variables, having to write\nself.var\nmeans that references to unqualified names inside a method don\u2019t\nhave to search the instance\u2019s directories. To put it another way, local\nvariables and instance variables live in two different namespaces, and you need\nto tell Python which namespace to use.\nWhy can\u2019t I use an assignment in an expression?\u00b6\nStarting in Python 3.8, you can!\nAssignment expressions using the walrus operator :=\nassign a variable in an\nexpression:\nwhile chunk := fp.read(200):\nprint(chunk)\nSee PEP 572 for more information.\nWhy does Python use methods for some functionality (e.g. list.index()) but functions for other (e.g. len(list))?\u00b6\nAs Guido said:\n(a) For some operations, prefix notation just reads better than postfix \u2013 prefix (and infix!) operations have a long tradition in mathematics which likes notations where the visuals help the mathematician thinking about a problem. Compare the easy with which we rewrite a formula like x*(a+b) into x*a + x*b to the clumsiness of doing the same thing using a raw OO notation.\n(b) When I read code that says len(x) I know that it is asking for the length of something. This tells me two things: the result is an integer, and the argument is some kind of container. 
To the contrary, when I read x.len(), I have to already know that x is some kind of container implementing an interface or inheriting from a class that has a standard len(). Witness the confusion we occasionally have when a class that is not implementing a mapping has a get() or keys() method, or something that isn\u2019t a file has a write() method.\n\u2014https://mail.python.org/pipermail/python-3000/2006-November/004643.html\nWhy is join() a string method instead of a list or tuple method?\u00b6\nStrings became much more like other standard types starting in Python 1.6, when methods were added which give the same functionality that has always been available using the functions of the string module. Most of these new methods have been widely accepted, but the one which appears to make some programmers feel uncomfortable is:\n\", \".join(['1', '2', '4', '8', '16'])\nwhich gives the result:\n\"1, 2, 4, 8, 16\"\nThere are two common arguments against this usage.\nThe first runs along the lines of: \u201cIt looks really ugly using a method of a string literal (string constant)\u201d, to which the answer is that it might, but a string literal is just a fixed value. If the methods are to be allowed on names bound to strings there is no logical reason to make them unavailable on literals.\nThe second objection is typically cast as: \u201cI am really telling a sequence to\njoin its members together with a string constant\u201d. Sadly, you aren\u2019t. For some\nreason there seems to be much less difficulty with having split()\nas\na string method, since in that case it is easy to see that\n\"1, 2, 4, 8, 16\".split(\", \")\nis an instruction to a string literal to return the substrings delimited by the given separator (or, by default, arbitrary runs of white space).\njoin()\nis a string method because in using it you are telling the\nseparator string to iterate over a sequence of strings and insert itself between\nadjacent elements. 
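Viewed this way, the separator drives the iteration; for example:

```python
# The separator string joins any iterable of strings:
print(", ".join(['1', '2', '4', '8', '16']))   # 1, 2, 4, 8, 16
print(" -> ".join(str(n) for n in range(3)))   # 0 -> 1 -> 2
```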
This method can be used with any argument which obeys the\nrules for sequence objects, including any new classes you might define yourself.\nSimilar methods exist for bytes and bytearray objects.\nHow fast are exceptions?\u00b6\nA try\n/except\nblock is extremely efficient if no exceptions\nare raised. Actually\ncatching an exception is expensive. In versions of Python prior to 2.0 it was\ncommon to use this idiom:\ntry:\nvalue = mydict[key]\nexcept KeyError:\nmydict[key] = getvalue(key)\nvalue = mydict[key]\nThis only made sense when you expected the dict to have the key almost all the time. If that wasn\u2019t the case, you coded it like this:\nif key in mydict:\nvalue = mydict[key]\nelse:\nvalue = mydict[key] = getvalue(key)\nFor this specific case, you could also use value = dict.setdefault(key,\ngetvalue(key))\n, but only if the getvalue()\ncall is cheap enough because it\nis evaluated in all cases.\nWhy isn\u2019t there a switch or case statement in Python?\u00b6\nIn general, structured switch statements execute one block of code\nwhen an expression has a particular value or set of values.\nSince Python 3.10 one can easily match literal values, or constants\nwithin a namespace, with a match ... case\nstatement.\nAn older alternative is a sequence of if... elif... elif... else\n.\nFor cases where you need to choose from a very large number of possibilities, you can create a dictionary mapping case values to functions to call. For example:\nfunctions = {'a': function_1,\n'b': function_2,\n'c': self.method_1}\nfunc = functions[value]\nfunc()\nFor calling methods on objects, you can simplify yet further by using the\ngetattr()\nbuilt-in to retrieve methods with a particular name:\nclass MyVisitor:\ndef visit_a(self):\n...\ndef dispatch(self, value):\nmethod_name = 'visit_' + str(value)\nmethod = getattr(self, method_name)\nmethod()\nIt\u2019s suggested that you use a prefix for the method names, such as visit_\nin\nthis example. 
Without such a prefix, if values are coming from an untrusted\nsource, an attacker would be able to call any method on your object.\nImitating switch with fallthrough, as with C\u2019s switch-case-default, is possible, much harder, and less needed.\nCan\u2019t you emulate threads in the interpreter instead of relying on an OS-specific thread implementation?\u00b6\nAnswer 1: Unfortunately, the interpreter pushes at least one C stack frame for each Python stack frame. Also, extensions can call back into Python at almost random moments. Therefore, a complete threads implementation requires thread support for C.\nAnswer 2: Fortunately, there is Stackless Python, which has a completely redesigned interpreter loop that avoids the C stack.\nWhy can\u2019t lambda expressions contain statements?\u00b6\nPython lambda expressions cannot contain statements because Python\u2019s syntactic framework can\u2019t handle statements nested inside expressions. However, in Python, this is not a serious problem. Unlike lambda forms in other languages, where they add functionality, Python lambdas are only a shorthand notation if you\u2019re too lazy to define a function.\nFunctions are already first class objects in Python, and can be declared in a local scope. Therefore the only advantage of using a lambda instead of a locally defined function is that you don\u2019t need to invent a name for the function \u2013 but that\u2019s just a local variable to which the function object (which is exactly the same type of object that a lambda expression yields) is assigned!\nCan Python be compiled to machine code, C or some other language?\u00b6\nCython compiles a modified version of Python with optional annotations into C extensions. Nuitka is an up-and-coming compiler of Python into C++ code, aiming to support the full Python language.\nHow does Python manage memory?\u00b6\nThe details of Python memory management depend on the implementation. 
The\nstandard implementation of Python, CPython, uses reference counting to\ndetect inaccessible objects, and another mechanism to collect reference cycles,\nperiodically executing a cycle detection algorithm which looks for inaccessible\ncycles and deletes the objects involved. The gc\nmodule provides functions\nto perform a garbage collection, obtain debugging statistics, and tune the\ncollector\u2019s parameters.\nOther implementations (such as Jython or PyPy), however, can rely on a different mechanism such as a full-blown garbage collector. This difference can cause some subtle porting problems if your Python code depends on the behavior of the reference counting implementation.\nIn some Python implementations, the following code (which is fine in CPython) will probably run out of file descriptors:\nfor file in very_long_list_of_files:\nf = open(file)\nc = f.read(1)\nIndeed, using CPython\u2019s reference counting and destructor scheme, each new\nassignment to f\ncloses the previous file. With a traditional GC, however,\nthose file objects will only get collected (and closed) at varying and possibly\nlong intervals.\nIf you want to write code that will work with any Python implementation,\nyou should explicitly close the file or use the with\nstatement;\nthis will work regardless of memory management scheme:\nfor file in very_long_list_of_files:\nwith open(file) as f:\nc = f.read(1)\nWhy doesn\u2019t CPython use a more traditional garbage collection scheme?\u00b6\nFor one thing, this is not a C standard feature and hence it\u2019s not portable. (Yes, we know about the Boehm GC library. It has bits of assembler code for most common platforms, not for all of them, and although it is mostly transparent, it isn\u2019t completely transparent; patches are required to get Python to work with it.)\nTraditional GC also becomes a problem when Python is embedded into other\napplications. 
While in a standalone Python it\u2019s fine to replace the standard\nmalloc()\nand free()\nwith versions provided by the GC library, an application\nembedding Python may want to have its own substitute for malloc()\nand free()\n,\nand may not want Python\u2019s. Right now, CPython works with anything that\nimplements malloc()\nand free()\nproperly.\nWhy isn\u2019t all memory freed when CPython exits?\u00b6\nObjects referenced from the global namespaces of Python modules are not always deallocated when Python exits. This may happen if there are circular references. There are also certain bits of memory that are allocated by the C library that are impossible to free (e.g. a tool like Purify will complain about these). Python is, however, aggressive about cleaning up memory on exit and does try to destroy every single object.\nIf you want to force Python to delete certain things on deallocation use the\natexit\nmodule to run a function that will force those deletions.\nWhy are there separate tuple and list data types?\u00b6\nLists and tuples, while similar in many respects, are generally used in\nfundamentally different ways. Tuples can be thought of as being similar to\nPascal records\nor C structs\n; they\u2019re small collections of related data which may\nbe of different types which are operated on as a group. For example, a\nCartesian coordinate is appropriately represented as a tuple of two or three\nnumbers.\nLists, on the other hand, are more like arrays in other languages. They tend to\nhold a varying number of objects all of which have the same type and which are\noperated on one-by-one. For example, os.listdir('.')\nreturns a list of\nstrings representing the files in the current directory. Functions which\noperate on this output would generally not break if you added another file or\ntwo to the directory.\nTuples are immutable, meaning that once a tuple has been created, you can\u2019t replace any of its elements with a new value. 
Lists are mutable, meaning that you can always change a list\u2019s elements. Only immutable elements can be used as dictionary keys, and hence only tuples and not lists can be used as keys.\nHow are lists implemented in CPython?\u00b6\nCPython\u2019s lists are really variable-length arrays, not Lisp-style linked lists. The implementation uses a contiguous array of references to other objects, and keeps a pointer to this array and the array\u2019s length in a list head structure.\nThis makes indexing a list a[i]\nan operation whose cost is independent of\nthe size of the list or the value of the index.\nWhen items are appended or inserted, the array of references is resized. Some cleverness is applied to improve the performance of appending items repeatedly; when the array must be grown, some extra space is allocated so the next few times don\u2019t require an actual resize.\nHow are dictionaries implemented in CPython?\u00b6\nCPython\u2019s dictionaries are implemented as resizable hash tables. Compared to B-trees, this gives better performance for lookup (the most common operation by far) under most circumstances, and the implementation is simpler.\nDictionaries work by computing a hash code for each key stored in the dictionary\nusing the hash()\nbuilt-in function. The hash code varies widely depending\non the key and a per-process seed; for example, 'Python'\ncould hash to\n-539294296\nwhile 'python'\n, a string that differs by a single bit, could hash\nto 1142331976\n. The hash code is then used to calculate a location in an\ninternal array where the value will be stored. Assuming that you\u2019re storing\nkeys that all have different hash values, this means that dictionaries take\nconstant time \u2013 O(1), in Big-O notation \u2013 to retrieve a key.\nWhy must dictionary keys be immutable?\u00b6\nThe hash table implementation of dictionaries uses a hash value calculated from the key value to find the key. 
If the key were a mutable object, its value could change, and thus its hash could also change. But since whoever changes the key object can\u2019t tell that it was being used as a dictionary key, it can\u2019t move the entry around in the dictionary. Then, when you try to look up the same object in the dictionary it won\u2019t be found because its hash value is different. If you tried to look up the old value it wouldn\u2019t be found either, because the value of the object found in that hash bin would be different.\nIf you want a dictionary indexed with a list, simply convert the list to a tuple\nfirst; the function tuple(L)\ncreates a tuple with the same entries as the\nlist L\n. Tuples are immutable and can therefore be used as dictionary keys.\nSome unacceptable solutions that have been proposed:\nHash lists by their address (object ID). This doesn\u2019t work because if you construct a new list with the same value it won\u2019t be found; e.g.:\nmydict = {[1, 2]: '12'} print(mydict[[1, 2]])\nwould raise a\nKeyError\nexception because the id of the[1, 2]\nused in the second line differs from that in the first line. In other words, dictionary keys should be compared using==\n, not usingis\n.Make a copy when using a list as a key. This doesn\u2019t work because the list, being a mutable object, could contain a reference to itself, and then the copying code would run into an infinite loop.\nAllow lists as keys but tell the user not to modify them. This would allow a class of hard-to-track bugs in programs when you forgot or modified a list by accident. It also invalidates an important invariant of dictionaries: every value in\nd.keys()\nis usable as a key of the dictionary.Mark lists as read-only once they are used as a dictionary key. The problem is that it\u2019s not just the top-level object that could change its value; you could use a tuple containing a list as a key. 
Entering anything as a key into a dictionary would require marking all objects reachable from there as read-only \u2013 and again, self-referential objects could cause an infinite loop.\nThere is a trick to get around this if you need to, but use it at your own risk:\nYou can wrap a mutable structure inside a class instance which has both a\n__eq__()\nand a __hash__()\nmethod.\nYou must then make sure that the\nhash value for all such wrapper objects that reside in a dictionary (or other\nhash based structure), remain fixed while the object is in the dictionary (or\nother structure).\nclass ListWrapper:\ndef __init__(self, the_list):\nself.the_list = the_list\ndef __eq__(self, other):\nreturn self.the_list == other.the_list\ndef __hash__(self):\nl = self.the_list\nresult = 98767 - len(l)*555\nfor i, el in enumerate(l):\ntry:\nresult = result + (hash(el) % 9999999) * 1001 + i\nexcept Exception:\nresult = (result % 7777777) + i * 333\nreturn result\nNote that the hash computation is complicated by the possibility that some members of the list may be unhashable and also by the possibility of arithmetic overflow.\nFurthermore it must always be the case that if o1 == o2\n(ie o1.__eq__(o2)\nis True\n) then hash(o1) == hash(o2)\n(ie, o1.__hash__() == o2.__hash__()\n),\nregardless of whether the object is in a dictionary or not. If you fail to meet\nthese restrictions dictionaries and other hash based structures will misbehave.\nIn the case of ListWrapper\n, whenever the wrapper object is in a dictionary the\nwrapped list must not change to avoid anomalies. Don\u2019t do this unless you are\nprepared to think hard about the requirements and the consequences of not\nmeeting them correctly. Consider yourself warned.\nWhy doesn\u2019t list.sort() return the sorted list?\u00b6\nIn situations where performance matters, making a copy of the list just to sort\nit would be wasteful. Therefore, list.sort()\nsorts the list in place. 
In\norder to remind you of that fact, it does not return the sorted list. This way,\nyou won\u2019t be fooled into accidentally overwriting a list when you need a sorted\ncopy but also need to keep the unsorted version around.\nIf you want to return a new list, use the built-in sorted()\nfunction\ninstead. This function creates a new list from a provided iterable, sorts\nit and returns it. For example, here\u2019s how to iterate over the keys of a\ndictionary in sorted order:\nfor key in sorted(mydict):\n... # do whatever with mydict[key]...\nHow do you specify and enforce an interface spec in Python?\u00b6\nAn interface specification for a module as provided by languages such as C++ and Java describes the prototypes for the methods and functions of the module. Many feel that compile-time enforcement of interface specifications helps in the construction of large programs.\nPython 2.6 adds an abc\nmodule that lets you define Abstract Base Classes\n(ABCs). You can then use isinstance()\nand issubclass()\nto check\nwhether an instance or a class implements a particular ABC. The\ncollections.abc\nmodule defines a set of useful ABCs such as\nIterable\n, Container\n, and\nMutableMapping\n.\nFor Python, many of the advantages of interface specifications can be obtained by an appropriate test discipline for components.\nA good test suite for a module can both provide a regression test and serve as a\nmodule interface specification and a set of examples. Many Python modules can\nbe run as a script to provide a simple \u201cself test.\u201d Even modules which use\ncomplex external interfaces can often be tested in isolation using trivial\n\u201cstub\u201d emulations of the external interface. 
The doctest\nand\nunittest\nmodules or third-party test frameworks can be used to construct\nexhaustive test suites that exercise every line of code in a module.\nAn appropriate testing discipline can help build large complex applications in\nPython as well as having interface specifications would. In fact, it can be\nbetter because an interface specification cannot test certain properties of a\nprogram. For example, the list.append()\nmethod is expected to add new elements\nto the end of some internal list; an interface specification cannot test that\nyour list.append()\nimplementation will actually do this correctly, but it\u2019s\ntrivial to check this property in a test suite.\nWriting test suites is very helpful, and you might want to design your code to make it easily tested. One increasingly popular technique, test-driven development, calls for writing parts of the test suite first, before you write any of the actual code. Of course Python allows you to be sloppy and not write test cases at all.\nWhy is there no goto?\u00b6\nIn the 1970s people realized that unrestricted goto could lead\nto messy \u201cspaghetti\u201d code that was hard to understand and revise.\nIn a high-level language, it is also unneeded as long as there\nare ways to branch (in Python, with if\nstatements and or\n,\nand\n, and if\n/else\nexpressions) and loop (with while\nand for\nstatements, possibly containing continue\nand break\n).\nOne can also use exceptions to provide a \u201cstructured goto\u201d\nthat works even across\nfunction calls. Many feel that exceptions can conveniently emulate all\nreasonable uses of the go\nor goto\nconstructs of C, Fortran, and other\nlanguages. For example:\nclass label(Exception): pass # declare a label\ntry:\n...\nif condition: raise label() # goto label\n...\nexcept label: # where to goto\npass\n...\nThis doesn\u2019t allow you to jump into the middle of a loop, but that\u2019s usually\nconsidered an abuse of goto\nanyway. 
Use sparingly.\nWhy can\u2019t raw strings (r-strings) end with a backslash?\u00b6\nMore precisely, they can\u2019t end with an odd number of backslashes: the unpaired backslash at the end escapes the closing quote character, leaving an unterminated string.\nRaw strings were designed to ease creating input for processors (chiefly regular expression engines) that want to do their own backslash escape processing. Such processors consider an unmatched trailing backslash to be an error anyway, so raw strings disallow that. In return, they allow you to pass on the string quote character by escaping it with a backslash. These rules work well when r-strings are used for their intended purpose.\nIf you\u2019re trying to build Windows pathnames, note that all Windows system calls accept forward slashes too:\nf = open(\"/mydir/file.txt\") # works fine!\nIf you\u2019re trying to build a pathname for a DOS command, try e.g. one of\ndir = r\"\\this\\is\\my\\dos\\dir\" \"\\\\\"\ndir = r\"\\this\\is\\my\\dos\\dir\\ \"[:-1]\ndir = \"\\\\this\\\\is\\\\my\\\\dos\\\\dir\\\\\"\nWhy doesn\u2019t Python have a \u201cwith\u201d statement for attribute assignments?\u00b6\nPython has a with\nstatement that wraps the execution of a block, calling code\non the entrance and exit from the block. Some languages have a construct that\nlooks like this:\nwith obj:\na = 1 # equivalent to obj.a = 1\ntotal = total + 1 # obj.total = obj.total + 1\nIn Python, such a construct would be ambiguous.\nOther languages, such as Object Pascal, Delphi, and C++, use static types, so it\u2019s possible to know, in an unambiguous way, what member is being assigned to. This is the main point of static typing \u2013 the compiler always knows the scope of every variable at compile time.\nPython uses dynamic types. It is impossible to know in advance which attribute will be referenced at runtime. Member attributes may be added or removed from objects on the fly. 
This makes it impossible to know, from a simple reading, what attribute is being referenced: a local one, a global one, or a member attribute?\nFor instance, take the following incomplete snippet:\ndef foo(a):\nwith a:\nprint(x)\nThe snippet assumes that a\nmust have a member attribute called x\n. However,\nthere is nothing in Python that tells the interpreter this. What should happen\nif a\nis, let us say, an integer? If there is a global variable named x\n,\nwill it be used inside the with\nblock? As you see, the dynamic nature of Python\nmakes such choices much harder.\nThe primary benefit of with\nand similar language features (reduction of code\nvolume) can, however, easily be achieved in Python by assignment. Instead of:\nfunction(args).mydict[index][index].a = 21\nfunction(args).mydict[index][index].b = 42\nfunction(args).mydict[index][index].c = 63\nwrite this:\nref = function(args).mydict[index][index]\nref.a = 21\nref.b = 42\nref.c = 63\nThis also has the side-effect of increasing execution speed because name bindings are resolved at run-time in Python, and the second version only needs to perform the resolution once.\nSimilar proposals that would introduce syntax to further reduce code volume, such as using a \u2018leading dot\u2019, have been rejected in favour of explicitness (see https://mail.python.org/pipermail/python-ideas/2016-May/040070.html).\nWhy don\u2019t generators support the with statement?\u00b6\nFor technical reasons, a generator used directly as a context manager\nwould not work correctly. When, as is most common, a generator is used as\nan iterator run to completion, no closing is needed. When it is, wrap\nit as contextlib.closing(generator)\nin the with\nstatement.\nWhy are colons required for the if/while/def/class statements?\u00b6\nThe colon is required primarily to enhance readability (one of the results of the experimental ABC language). 
Consider this:\nif a == b\nprint(a)\nversus\nif a == b:\nprint(a)\nNotice how the second one is slightly easier to read. Notice further how a colon sets off the example in this FAQ answer; it\u2019s a standard usage in English.\nAnother minor reason is that the colon makes it easier for editors with syntax highlighting; they can look for colons to decide when indentation needs to be increased instead of having to do a more elaborate parsing of the program text.\nWhy does Python allow commas at the end of lists and tuples?\u00b6\nPython lets you add a trailing comma at the end of lists, tuples, and dictionaries:\n[1, 2, 3,]\n('a', 'b', 'c',)\nd = {\n\"A\": [1, 5],\n\"B\": [6, 7], # last trailing comma is optional but good style\n}\nThere are several reasons to allow this.\nWhen you have a literal value for a list, tuple, or dictionary spread across multiple lines, it\u2019s easier to add more elements because you don\u2019t have to remember to add a comma to the previous line. The lines can also be reordered without creating a syntax error.\nAccidentally omitting the comma can lead to errors that are hard to diagnose. For example:\nx = [\n\"fee\",\n\"fie\"\n\"foo\",\n\"fum\"\n]\nThis list looks like it has four elements, but it actually contains three: \u201cfee\u201d, \u201cfiefoo\u201d and \u201cfum\u201d. 
Always adding the comma avoids this source of error.\nAllowing the trailing comma may also make programmatic code generation easier.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 7616}
{"url": "https://docs.python.org/3/tutorial/interactive.html", "title": "Interactive Input Editing and History Substitution", "content": "14. 
Interactive Input Editing and History Substitution\u00b6\nSome versions of the Python interpreter support editing of the current input line and history substitution, similar to facilities found in the Korn shell and the GNU Bash shell. This is implemented using the GNU Readline library, which supports various styles of editing. This library has its own documentation which we won\u2019t duplicate here.\n14.1. Tab Completion and History Editing\u00b6\nCompletion of variable and module names is\nautomatically enabled at interpreter startup so\nthat the Tab key invokes the completion function; it looks at\nPython statement names, the current local variables, and the available\nmodule names. For dotted expressions such as string.a\n, it will evaluate\nthe expression up to the final '.'\nand then suggest completions from\nthe attributes of the resulting object. Note that this may execute\napplication-defined code if an object with a __getattr__()\nmethod\nis part of the expression. The default configuration also saves your\nhistory into a file named .python_history\nin your user directory.\nThe history will be available again during the next interactive interpreter\nsession.\n14.2. Alternatives to the Interactive Interpreter\u00b6\nThis facility is an enormous step forward compared to earlier versions of the\ninterpreter; however, some wishes are left: It would be nice if the proper\nindentation were suggested on continuation lines (the parser knows if an\nINDENT\ntoken is required next). The completion mechanism might\nuse the interpreter\u2019s symbol table. A command to check (or even suggest)\nmatching parentheses, quotes, etc., would also be useful.\nOne alternative enhanced interactive interpreter that has been around for quite some time is IPython, which features tab completion, object exploration and advanced history management. It can also be thoroughly customized and embedded into other applications. 
Another similar enhanced interactive environment is bpython.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 487} +{"url": "https://docs.python.org/3/using/index.html", "title": "Python Setup and Usage", "content": "Python Setup and Usage\u00b6\nThis part of the documentation is devoted to general information on the setup of the Python environment on different platforms, the invocation of the interpreter and things that make working with Python easier.\n- 1. Command line and environment\n- 2. Using Python on Unix platforms\n- 3. Configure Python\n- 4. Using Python on Windows\n- 4.1. Python install manager\n- 4.2. The embeddable package\n- 4.3. The nuget.org packages\n- 4.4. Alternative bundles\n- 4.5. Supported Windows versions\n- 4.6. Removing the MAX_PATH limitation\n- 4.7. UTF-8 mode\n- 4.8. Finding modules\n- 4.9. Additional modules\n- 4.10. Compiling Python on Windows\n- 4.11. The full installer (deprecated)\n- 4.12. Python launcher for Windows (deprecated)\n- 5. Using Python on macOS\n- 6. Using Python on Android\n- 7. Using Python on iOS\n- 8. Editors and IDEs", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 210} +{"url": "https://docs.python.org/3/faq/windows.html", "title": null, "content": "Python on Windows FAQ\u00b6\nHow do I run a Python program under Windows?\u00b6\nThis is not necessarily a straightforward question. If you are already familiar with running programs from the Windows command line then everything will seem obvious; otherwise, you might need a little more guidance.\nUnless you use some sort of integrated development environment, you will end up\ntyping Windows commands into what is referred to as a\n\u201cCommand prompt window\u201d. Usually you can create such a window from your\nsearch bar by searching for cmd\n. 
You should be able to recognize\nwhen you have started such a window because you will see a Windows \u201ccommand\nprompt\u201d, which usually looks like this:\nC:\\>\nThe letter may be different, and there might be other things after it, so you might just as easily see something like:\nD:\\YourName\\Projects\\Python>\ndepending on how your computer has been set up and what else you have recently done with it. Once you have started such a window, you are well on the way to running Python programs.\nYou need to realize that your Python scripts have to be processed by another program called the Python interpreter. The interpreter reads your script, compiles it into bytecodes, and then executes the bytecodes to run your program. So, how do you arrange for the interpreter to handle your Python?\nFirst, you need to make sure that your command window recognises the word\n\u201cpy\u201d as an instruction to start the interpreter. If you have opened a\ncommand window, you should try entering the command py\nand hitting\nreturn:\nC:\\Users\\YourName> py\nYou should then see something like:\nPython 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:04:45) [MSC v.1900 32 bit (Intel)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>>\nYou have started the interpreter in \u201cinteractive mode\u201d. That means you can enter Python statements or expressions interactively and have them executed or evaluated while you wait. This is one of Python\u2019s strongest features. Check it by entering a few expressions of your choice and seeing the results:\n>>> print(\"Hello\")\nHello\n>>> \"Hello\" * 3\n'HelloHelloHello'\nMany people use the interactive mode as a convenient yet highly programmable\ncalculator. 
When you want to end your interactive Python session,\ncall the exit()\nfunction or hold the Ctrl key down\nwhile you enter a Z, then hit the \u201cEnter\u201d key to get\nback to your Windows command prompt.\nYou may also find that you have a Start-menu entry such as >>>\nprompt in a new window. If so, the window will disappear\nafter you call the exit()\nfunction or enter the Ctrl-Z\ncharacter; Windows is running a single \u201cpython\u201d\ncommand in the window, and closes it when you terminate the interpreter.\nNow that we know the py\ncommand is recognized, you can give your\nPython script to it. You\u2019ll have to give either an absolute or a\nrelative path to the Python script. Let\u2019s say your Python script is\nlocated in your desktop and is named hello.py\n, and your command\nprompt is nicely opened in your home directory so you\u2019re seeing something\nsimilar to:\nC:\\Users\\YourName>\nSo now you\u2019ll ask the py\ncommand to give your script to Python by\ntyping py\nfollowed by your script path:\nC:\\Users\\YourName> py Desktop\\hello.py\nhello\nHow do I make Python scripts executable?\u00b6\nOn Windows, the standard Python installer already associates the .py\nextension with a file type (Python.File) and gives that file type an open\ncommand that runs the interpreter (D:\\Program Files\\Python\\python.exe \"%1\"\n%*\n). This is enough to make scripts executable from the command prompt as\n\u2018foo.py\u2019. If you\u2019d rather be able to execute the script by simple typing \u2018foo\u2019\nwith no extension you need to add .py to the PATHEXT environment variable.\nWhy does Python sometimes take so long to start?\u00b6\nUsually Python starts very quickly on Windows, but occasionally there are bug reports that Python suddenly begins to take a long time to start up. 
This is made even more puzzling because Python will work fine on other Windows systems which appear to be configured identically.\nThe problem may be caused by a misconfiguration of virus checking software on the problem machine. Some virus scanners have been known to introduce startup overhead of two orders of magnitude when the scanner is configured to monitor all reads from the filesystem. Try checking the configuration of virus scanning software on your systems to ensure that they are indeed configured identically. McAfee, when configured to scan all file system read activity, is a particular offender.\nHow do I make an executable from a Python script?\u00b6\nSee How can I create a stand-alone binary from a Python script? for a list of tools that can be used to make executables.\nIs a *.pyd\nfile the same as a DLL?\u00b6\nYes, .pyd files are dll\u2019s, but there are a few differences. If you have a DLL\nnamed foo.pyd\n, then it must have a function PyInit_foo()\n. You can then\nwrite Python \u201cimport foo\u201d, and Python will search for foo.pyd (as well as\nfoo.py, foo.pyc) and if it finds it, will attempt to call PyInit_foo()\nto\ninitialize it. You do not link your .exe with foo.lib, as that would cause\nWindows to require the DLL to be present.\nNote that the search path for foo.pyd is PYTHONPATH, not the same as the path\nthat Windows uses to search for foo.dll. Also, foo.pyd need not be present to\nrun your program, whereas if you linked your program with a dll, the dll is\nrequired. Of course, foo.pyd is required if you want to say import foo\n. In\na DLL, linkage is declared in the source code with __declspec(dllexport)\n.\nIn a .pyd, linkage is defined in a list of available functions.\nHow can I embed Python into a Windows application?\u00b6\nEmbedding the Python interpreter in a Windows app can be summarized as follows:\nDo not build Python into your .exe file directly. 
On Windows, Python must be a DLL to handle importing modules that are themselves DLL\u2019s. (This is the first key undocumented fact.) Instead, link to\npythonNN.dll\n; it is typically installed in\nC:\\Windows\\System\n. NN is the Python version, a number such as \u201c33\u201d for Python 3.3.\nYou can link to Python in two different ways. Load-time linking means linking against\npythonNN.lib\n, while run-time linking means linking against\npythonNN.dll\n. (General note:\npythonNN.lib\nis the so-called \u201cimport lib\u201d corresponding to\npythonNN.dll\n. It merely defines symbols for the linker.)\nRun-time linking greatly simplifies link options; everything happens at run time. Your code must load\npythonNN.dll\nusing the Windows\nLoadLibraryEx()\nroutine. The code must also use access routines and data in\npythonNN.dll\n(that is, Python\u2019s C API\u2019s) using pointers obtained by the Windows\nGetProcAddress()\nroutine. Macros can make using these pointers transparent to any C code that calls routines in Python\u2019s C API.\nIf you use SWIG, it is easy to create a Python \u201cextension module\u201d that will make the app\u2019s data and methods available to Python. SWIG will handle just about all the grungy details for you. The result is C code that you link into your .exe file (!) You do not have to create a DLL file, and this also simplifies linking.\nSWIG will create an init function (a C function) whose name depends on the name of the extension module. For example, if the name of the module is leo, the init function will be called initleo(). If you use SWIG shadow classes, as you should, the init function will be called initleoc(). This initializes a mostly hidden helper class used by the shadow class.\nThe reason you can link the C code in step 2 into your .exe file is that calling the initialization function is equivalent to importing the module into Python!
(This is the second key undocumented fact.)\nIn short, you can use the following code to initialize the Python interpreter with your extension module.\n#include <Python.h>\n...\nPy_Initialize(); // Initialize Python.\ninitmyAppc(); // Initialize (import) the helper class.\nPyRun_SimpleString(\"import myApp\"); // Import the shadow class.\nThere are two problems with Python\u2019s C API which will become apparent if you use a compiler other than MSVC, the compiler used to build pythonNN.dll.\nProblem 1: The so-called \u201cVery High Level\u201d functions that take\nFILE *\narguments will not work in a multi-compiler environment because each compiler\u2019s notion of a\nstruct FILE\nwill be different. From an implementation standpoint these are very low level functions.\nProblem 2: SWIG generates the following code when generating wrappers to void functions:\nPy_INCREF(Py_None);\n_resultobj = Py_None;\nreturn _resultobj;\nAlas, Py_None is a macro that expands to a reference to a complex data structure called _Py_NoneStruct inside pythonNN.dll. Again, this code will fail in a multi-compiler environment. Replace such code by:\nreturn Py_BuildValue(\"\");\nIt may be possible to use SWIG\u2019s\n%typemap\ncommand to make the change automatically, though I have not been able to get this to work (I\u2019m a complete SWIG newbie).\nUsing a Python shell script to put up a Python interpreter window from inside your Windows app is not a good idea; the resulting window will be independent of your app\u2019s windowing system. Rather, you (or the wxPythonWindow class) should create a \u201cnative\u201d interpreter window. It is easy to connect that window to the Python interpreter.
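At the Python level, that connection typically works by pointing sys.stdout (and sys.stderr) at an object the window supplies. A minimal sketch, assuming only that the object offers write() and flush() — the Console class here is a hypothetical stand-in for a real text widget:

```python
import sys

class Console:
    """Hypothetical stand-in for a GUI text widget; it only needs write()."""
    def __init__(self):
        self.chunks = []

    def write(self, text):
        # A real interpreter window would append 'text' to its display here.
        self.chunks.append(text)

    def flush(self):
        pass  # file-like objects are expected to offer flush()

console = Console()
saved = sys.stdout
sys.stdout = console            # interpreter output now goes to our object
try:
    print("hello from the embedded interpreter")
finally:
    sys.stdout = saved          # always restore the real stream

print("captured:", "".join(console.chunks), end="")
```

The try/finally is the important design point: if the embedded code raises, the real stdout is still restored.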
You can redirect Python\u2019s i/o to any object that supports read and write, so all you need is a Python object (defined in your extension module) that contains read() and write() methods.\nHow do I keep editors from inserting tabs into my Python source?\u00b6\nThe FAQ does not recommend using tabs, and the Python style guide, PEP 8, recommends 4 spaces for distributed Python code; this is also the Emacs python-mode default.\nUnder any editor, mixing tabs and spaces is a bad idea. MSVC is no different in this respect, and is easily configured to use spaces: Take Tools \u2023 Options \u2023 Tabs, and for file type \u201cDefault\u201d set \u201cTab size\u201d and \u201cIndent size\u201d to 4, and select the \u201cInsert spaces\u201d radio button.\nPython raises IndentationError\nor TabError\nif mixed tabs\nand spaces are causing problems in leading whitespace.\nYou may also run the tabnanny\nmodule to check a directory tree\nin batch mode.\nHow do I check for a keypress without blocking?\u00b6\nUse the msvcrt\nmodule. This is a standard Windows-specific extension module.\nIt defines a function kbhit()\nwhich checks whether a keyboard hit is\npresent, and getch()\nwhich gets one character without echoing it.\nHow do I solve the missing api-ms-win-crt-runtime-l1-1-0.dll error?\u00b6\nThis can occur on Python 3.5 and later when using Windows 8.1 or earlier without all updates having been installed.
First ensure your operating system is supported and is up to date, and if that does not resolve the issue, visit the Microsoft support page for guidance on manually installing the C Runtime update.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2676} +{"url": "https://docs.python.org/3/c-api/gcsupport.html", "title": "Supporting Cyclic Garbage Collection", "content": "Supporting Cyclic Garbage Collection\u00b6\nPython\u2019s support for detecting and collecting garbage which involves circular references requires support from object types which are \u201ccontainers\u201d for other objects which may also be containers. Types which do not store references to other objects, or which only store references to atomic types (such as numbers or strings), do not need to provide any explicit support for garbage collection.\nTo create a container type, the tp_flags\nfield of the type object must\ninclude the Py_TPFLAGS_HAVE_GC\nand provide an implementation of the\ntp_traverse\nhandler. If instances of the type are mutable, a\ntp_clear\nimplementation must also be provided.\nPy_TPFLAGS_HAVE_GC\nObjects with a type with this flag set must conform with the rules documented here. 
For convenience these objects will be referred to as container objects.\nConstructors for container types must conform to two rules:\nThe memory for the object must be allocated using\nPyObject_GC_New\norPyObject_GC_NewVar\n.Once all the fields which may contain references to other containers are initialized, it must call\nPyObject_GC_Track()\n.\nSimilarly, the deallocator for the object must conform to a similar pair of rules:\nBefore fields which refer to other containers are invalidated,\nPyObject_GC_UnTrack()\nmust be called.The object\u2019s memory must be deallocated using\nPyObject_GC_Del()\n.Warning\nIf a type adds the Py_TPFLAGS_HAVE_GC, then it must implement at least a\ntp_traverse\nhandler or explicitly use one from its subclass or subclasses.When calling\nPyType_Ready()\nor some of the APIs that indirectly call it likePyType_FromSpecWithBases()\norPyType_FromSpec()\nthe interpreter will automatically populate thetp_flags\n,tp_traverse\nandtp_clear\nfields if the type inherits from a class that implements the garbage collector protocol and the child class does not include thePy_TPFLAGS_HAVE_GC\nflag.\n-\nPyObject_GC_New(TYPE, typeobj)\u00b6\nAnalogous to\nPyObject_New\nbut for container objects with thePy_TPFLAGS_HAVE_GC\nflag set.Do not call this directly to allocate memory for an object; call the type\u2019s\ntp_alloc\nslot instead.When populating a type\u2019s\ntp_alloc\nslot,PyType_GenericAlloc()\nis preferred over a custom function that simply calls this macro.Memory allocated by this macro must be freed with\nPyObject_GC_Del()\n(usually called via the object\u2019stp_free\nslot).\n-\nPyObject_GC_NewVar(TYPE, typeobj, size)\u00b6\nAnalogous to\nPyObject_NewVar\nbut for container objects with thePy_TPFLAGS_HAVE_GC\nflag set.Do not call this directly to allocate memory for an object; call the type\u2019s\ntp_alloc\nslot instead.When populating a type\u2019s\ntp_alloc\nslot,PyType_GenericAlloc()\nis preferred over a custom function that simply calls 
this macro.Memory allocated by this macro must be freed with\nPyObject_GC_Del()\n(usually called via the object\u2019stp_free\nslot).\n-\nPyObject *PyUnstable_Object_GC_NewWithExtraData(PyTypeObject *type, size_t extra_size)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nAnalogous to\nPyObject_GC_New\nbut allocates extra_size bytes at the end of the object (at offsettp_basicsize\n). The allocated memory is initialized to zeros, except for thePython object header\n.The extra data will be deallocated with the object, but otherwise it is not managed by Python.\nMemory allocated by this function must be freed with\nPyObject_GC_Del()\n(usually called via the object\u2019stp_free\nslot).Warning\nThe function is marked as unstable because the final mechanism for reserving extra data after an instance is not yet decided. For allocating a variable number of fields, prefer using\nPyVarObject\nandtp_itemsize\ninstead.Added in version 3.12.\n-\nPyObject_GC_Resize(TYPE, op, newsize)\u00b6\nResize an object allocated by\nPyObject_NewVar\n. Returns the resized object of typeTYPE*\n(refers to any C type) orNULL\non failure.op must be of type PyVarObject* and must not be tracked by the collector yet. newsize must be of type\nPy_ssize_t\n.\n-\nvoid PyObject_GC_Track(PyObject *op)\u00b6\n- Part of the Stable ABI.\nAdds the object op to the set of container objects tracked by the collector. The collector can run at unexpected times so objects must be valid while being tracked. 
This should be called once all the fields followed by the\ntp_traverse\nhandler become valid, usually near the end of the constructor.\n-\nint PyObject_IS_GC(PyObject *obj)\u00b6\nReturns non-zero if the object implements the garbage collector protocol, otherwise returns 0.\nThe object cannot be tracked by the garbage collector if this function returns 0.\n-\nint PyObject_GC_IsTracked(PyObject *op)\u00b6\n- Part of the Stable ABI since version 3.9.\nReturns 1 if the object type of op implements the GC protocol and op is being currently tracked by the garbage collector and 0 otherwise.\nThis is analogous to the Python function\ngc.is_tracked()\n.Added in version 3.9.\n-\nint PyObject_GC_IsFinalized(PyObject *op)\u00b6\n- Part of the Stable ABI since version 3.9.\nReturns 1 if the object type of op implements the GC protocol and op has been already finalized by the garbage collector and 0 otherwise.\nThis is analogous to the Python function\ngc.is_finalized()\n.Added in version 3.9.\n-\nvoid PyObject_GC_Del(void *op)\u00b6\n- Part of the Stable ABI.\nReleases memory allocated to an object using\nPyObject_GC_New\norPyObject_GC_NewVar\n.Do not call this directly to free an object\u2019s memory; call the type\u2019s\ntp_free\nslot instead.Do not use this for memory allocated by\nPyObject_New\n,PyObject_NewVar\n, or related allocation functions; usePyObject_Free()\ninstead.See also\nPyObject_Free()\nis the non-GC equivalent of this function.\n-\nvoid PyObject_GC_UnTrack(void *op)\u00b6\n- Part of the Stable ABI.\nRemove the object op from the set of container objects tracked by the collector. Note that\nPyObject_GC_Track()\ncan be called again on this object to add it back to the set of tracked objects. 
The deallocator (tp_dealloc\nhandler) should call this for the object before any of the fields used by thetp_traverse\nhandler become invalid.\nChanged in version 3.8: The _PyObject_GC_TRACK()\nand _PyObject_GC_UNTRACK()\nmacros\nhave been removed from the public C API.\nThe tp_traverse\nhandler accepts a function parameter of this type:\n-\ntypedef int (*visitproc)(PyObject *object, void *arg)\u00b6\n- Part of the Stable ABI.\nType of the visitor function passed to the\ntp_traverse\nhandler. The function should be called with an object to traverse as object and the third parameter to thetp_traverse\nhandler as arg. The Python core uses several visitor functions to implement cyclic garbage detection; it\u2019s not expected that users will need to write their own visitor functions.\nThe tp_traverse\nhandler must have the following type:\n-\ntypedef int (*traverseproc)(PyObject *self, visitproc visit, void *arg)\u00b6\n- Part of the Stable ABI.\nTraversal function for a container object. Implementations must call the visit function for each object directly contained by self, with the parameters to visit being the contained object and the arg value passed to the handler. The visit function must not be called with a\nNULL\nobject argument. If visit returns a non-zero value that value should be returned immediately.The traversal function must not have any side effects. Implementations may not modify the reference counts of any Python objects nor create or destroy any Python objects.\nTo simplify writing tp_traverse\nhandlers, a Py_VISIT()\nmacro is\nprovided. In order to use this macro, the tp_traverse\nimplementation\nmust name its arguments exactly visit and arg:\n-\nPy_VISIT(o)\u00b6\nIf the PyObject* o is not\nNULL\n, call the visit callback, with arguments o and arg. If visit returns a non-zero value, then return it. 
Using this macro,tp_traverse\nhandlers look like:static int my_traverse(Noddy *self, visitproc visit, void *arg) { Py_VISIT(self->foo); Py_VISIT(self->bar); return 0; }\nThe tp_clear\nhandler must be of the inquiry\ntype, or NULL\nif the object is immutable.\n-\ntypedef int (*inquiry)(PyObject *self)\u00b6\n- Part of the Stable ABI.\nDrop references that may have created reference cycles. Immutable objects do not have to define this method since they can never directly create reference cycles. Note that the object must still be valid after calling this method (don\u2019t just call\nPy_DECREF()\non a reference). The collector will call this method if it detects that this object is involved in a reference cycle.\nControlling the Garbage Collector State\u00b6\nThe C-API provides the following functions for controlling garbage collection runs.\n-\nPy_ssize_t PyGC_Collect(void)\u00b6\n- Part of the Stable ABI.\nPerform a full garbage collection, if the garbage collector is enabled. (Note that\ngc.collect()\nruns it unconditionally.)Returns the number of collected + unreachable objects which cannot be collected. If the garbage collector is disabled or already collecting, returns\n0\nimmediately. Errors during garbage collection are passed tosys.unraisablehook\n. This function does not raise exceptions.\n-\nint PyGC_Enable(void)\u00b6\n- Part of the Stable ABI since version 3.10.\nEnable the garbage collector: similar to\ngc.enable()\n. Returns the previous state, 0 for disabled and 1 for enabled.Added in version 3.10.\n-\nint PyGC_Disable(void)\u00b6\n- Part of the Stable ABI since version 3.10.\nDisable the garbage collector: similar to\ngc.disable()\n. Returns the previous state, 0 for disabled and 1 for enabled.Added in version 3.10.\n-\nint PyGC_IsEnabled(void)\u00b6\n- Part of the Stable ABI since version 3.10.\nQuery the state of the garbage collector: similar to\ngc.isenabled()\n. 
Returns the current state, 0 for disabled and 1 for enabled.Added in version 3.10.\nQuerying Garbage Collector State\u00b6\nThe C-API provides the following interface for querying information about the garbage collector.\n-\nvoid PyUnstable_GC_VisitObjects(gcvisitobjects_t callback, void *arg)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nRun supplied callback on all live GC-capable objects. arg is passed through to all invocations of callback.\nWarning\nIf new objects are (de)allocated by the callback it is undefined if they will be visited.\nGarbage collection is disabled during operation. Explicitly running a collection in the callback may lead to undefined behaviour e.g. visiting the same objects multiple times or not at all.\nAdded in version 3.12.\n-\ntypedef int (*gcvisitobjects_t)(PyObject *object, void *arg)\u00b6\nType of the visitor function to be passed to\nPyUnstable_GC_VisitObjects()\n. arg is the same as the arg passed toPyUnstable_GC_VisitObjects\n. Return1\nto continue iteration, return0\nto stop iteration. Other return values are reserved for now so behavior on returning anything else is undefined.Added in version 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2648} +{"url": "https://docs.python.org/3/howto/instrumentation.html", "title": "Instrumenting CPython with DTrace and SystemTap", "content": "Instrumenting CPython with DTrace and SystemTap\u00b6\n- author:\nDavid Malcolm\n- author:\n\u0141ukasz Langa\nDTrace and SystemTap are monitoring tools, each providing a way to inspect what the processes on a computer system are doing. 
They both use domain-specific languages allowing a user to write scripts which:\nfilter which processes are to be observed\ngather data from the processes of interest\ngenerate reports on the data\nAs of Python 3.6, CPython can be built with embedded \u201cmarkers\u201d, also known as \u201cprobes\u201d, that can be observed by a DTrace or SystemTap script, making it easier to monitor what the CPython processes on a system are doing.\nCPython implementation detail: DTrace markers are implementation details of the CPython interpreter. No guarantees are made about probe compatibility between versions of CPython. DTrace scripts can stop working or work incorrectly without warning when changing CPython versions.\nEnabling the static markers\u00b6\nmacOS comes with built-in support for DTrace. On Linux, in order to build CPython with the embedded markers for SystemTap, the SystemTap development tools must be installed.\nOn a Linux machine, this can be done via:\n$ yum install systemtap-sdt-devel\nor:\n$ sudo apt-get install systemtap-sdt-dev\nCPython must then be configured with the --with-dtrace option\n:\nchecking for --with-dtrace... yes\nOn macOS, you can list available DTrace probes by running a Python process in the background and listing all probes made available by the Python provider:\n$ python3.6 -q &\n$ sudo dtrace -l -P python$! 
# or: dtrace -l -m python3.6\nID PROVIDER MODULE FUNCTION NAME\n29564 python18035 python3.6 _PyEval_EvalFrameDefault function-entry\n29565 python18035 python3.6 dtrace_function_entry function-entry\n29566 python18035 python3.6 _PyEval_EvalFrameDefault function-return\n29567 python18035 python3.6 dtrace_function_return function-return\n29568 python18035 python3.6 collect gc-done\n29569 python18035 python3.6 collect gc-start\n29570 python18035 python3.6 _PyEval_EvalFrameDefault line\n29571 python18035 python3.6 maybe_dtrace_line line\nOn Linux, you can verify if the SystemTap static markers are present in the built binary by seeing if it contains a \u201c.note.stapsdt\u201d section.\n$ readelf -S ./python | grep .note.stapsdt\n[30] .note.stapsdt NOTE 0000000000000000 00308d78\nIf you\u2019ve built Python as a shared library\n(with the --enable-shared\nconfigure option), you\nneed to look instead within the shared library. For example:\n$ readelf -S libpython3.3dm.so.1.0 | grep .note.stapsdt\n[29] .note.stapsdt NOTE 0000000000000000 00365b68\nSufficiently modern readelf can print the metadata:\n$ readelf -n ./python\nDisplaying notes found at file offset 0x00000254 with length 0x00000020:\nOwner Data size Description\nGNU 0x00000010 NT_GNU_ABI_TAG (ABI version tag)\nOS: Linux, ABI: 2.6.32\nDisplaying notes found at file offset 0x00000274 with length 0x00000024:\nOwner Data size Description\nGNU 0x00000014 NT_GNU_BUILD_ID (unique build ID bitstring)\nBuild ID: df924a2b08a7e89f6e11251d4602022977af2670\nDisplaying notes found at file offset 0x002d6c30 with length 0x00000144:\nOwner Data size Description\nstapsdt 0x00000031 NT_STAPSDT (SystemTap probe descriptors)\nProvider: python\nName: gc__start\nLocation: 0x00000000004371c3, Base: 0x0000000000630ce2, Semaphore: 0x00000000008d6bf6\nArguments: -4@%ebx\nstapsdt 0x00000030 NT_STAPSDT (SystemTap probe descriptors)\nProvider: python\nName: gc__done\nLocation: 0x00000000004374e1, Base: 0x0000000000630ce2, Semaphore: 
0x00000000008d6bf8\nArguments: -8@%rax\nstapsdt 0x00000045 NT_STAPSDT (SystemTap probe descriptors)\nProvider: python\nName: function__entry\nLocation: 0x000000000053db6c, Base: 0x0000000000630ce2, Semaphore: 0x00000000008d6be8\nArguments: 8@%rbp 8@%r12 -4@%eax\nstapsdt 0x00000046 NT_STAPSDT (SystemTap probe descriptors)\nProvider: python\nName: function__return\nLocation: 0x000000000053dba8, Base: 0x0000000000630ce2, Semaphore: 0x00000000008d6bea\nArguments: 8@%rbp 8@%r12 -4@%eax\nThe above metadata contains information for SystemTap describing how it can patch strategically placed machine code instructions to enable the tracing hooks used by a SystemTap script.\nStatic DTrace probes\u00b6\nThe following example DTrace script can be used to show the call/return hierarchy of a Python script, only tracing within the invocation of a function called \u201cstart\u201d. In other words, import-time function invocations are not going to be listed:\nself int indent;\npython$target:::function-entry\n/copyinstr(arg1) == \"start\"/\n{\nself->trace = 1;\n}\npython$target:::function-entry\n/self->trace/\n{\nprintf(\"%d\\t%*s:\", timestamp, 15, probename);\nprintf(\"%*s\", self->indent, \"\");\nprintf(\"%s:%s:%d\\n\", basename(copyinstr(arg0)), copyinstr(arg1), arg2);\nself->indent++;\n}\npython$target:::function-return\n/self->trace/\n{\nself->indent--;\nprintf(\"%d\\t%*s:\", timestamp, 15, probename);\nprintf(\"%*s\", self->indent, \"\");\nprintf(\"%s:%s:%d\\n\", basename(copyinstr(arg0)), copyinstr(arg1), arg2);\n}\npython$target:::function-return\n/copyinstr(arg1) == \"start\"/\n{\nself->trace = 0;\n}\nIt can be invoked like this:\n$ sudo dtrace -q -s call_stack.d -c \"python3.6 script.py\"\nThe output looks like this:\n156641360502280 function-entry:call_stack.py:start:23\n156641360518804 function-entry: call_stack.py:function_1:1\n156641360532797 function-entry: call_stack.py:function_3:9\n156641360546807 function-return: call_stack.py:function_3:10\n156641360563367 
function-return: call_stack.py:function_1:2\n156641360578365 function-entry: call_stack.py:function_2:5\n156641360591757 function-entry: call_stack.py:function_1:1\n156641360605556 function-entry: call_stack.py:function_3:9\n156641360617482 function-return: call_stack.py:function_3:10\n156641360629814 function-return: call_stack.py:function_1:2\n156641360642285 function-return: call_stack.py:function_2:6\n156641360656770 function-entry: call_stack.py:function_3:9\n156641360669707 function-return: call_stack.py:function_3:10\n156641360687853 function-entry: call_stack.py:function_4:13\n156641360700719 function-return: call_stack.py:function_4:14\n156641360719640 function-entry: call_stack.py:function_5:18\n156641360732567 function-return: call_stack.py:function_5:21\n156641360747370 function-return:call_stack.py:start:28\nStatic SystemTap markers\u00b6\nThe low-level way to use the SystemTap integration is to use the static markers directly. This requires you to explicitly state the binary file containing them.\nFor example, this SystemTap script can be used to show the call/return hierarchy of a Python script:\nprobe process(\"python\").mark(\"function__entry\") {\nfilename = user_string($arg1);\nfuncname = user_string($arg2);\nlineno = $arg3;\nprintf(\"%s => %s in %s:%d\\\\n\",\nthread_indent(1), funcname, filename, lineno);\n}\nprobe process(\"python\").mark(\"function__return\") {\nfilename = user_string($arg1);\nfuncname = user_string($arg2);\nlineno = $arg3;\nprintf(\"%s <= %s in %s:%d\\\\n\",\nthread_indent(-1), funcname, filename, lineno);\n}\nIt can be invoked like this:\n$ stap \\\nshow-call-hierarchy.stp \\\n-c \"./python test.py\"\nThe output looks like this:\n11408 python(8274): => __contains__ in Lib/_abcoll.py:362\n11414 python(8274): => __getitem__ in Lib/os.py:425\n11418 python(8274): => encode in Lib/os.py:490\n11424 python(8274): <= encode in Lib/os.py:493\n11428 python(8274): <= __getitem__ in Lib/os.py:426\n11433 python(8274): <= __contains__ in 
Lib/_abcoll.py:366\nwhere the columns are:\ntime in microseconds since start of script\nname of executable\nPID of process\nand the remainder indicates the call/return hierarchy as the script executes.\nFor a --enable-shared\nbuild of CPython, the markers are contained within the\nlibpython shared library, and the probe\u2019s dotted path needs to reflect this. For\nexample, this line from the above example:\nprobe process(\"python\").mark(\"function__entry\") {\nshould instead read:\nprobe process(\"python\").library(\"libpython3.6dm.so.1.0\").mark(\"function__entry\") {\n(assuming a debug build of CPython 3.6)\nAvailable static markers\u00b6\n- function__entry(str filename, str funcname, int lineno)\nThis marker indicates that execution of a Python function has begun. It is only triggered for pure-Python (bytecode) functions.\nThe filename, function name, and line number are provided back to the tracing script as positional arguments, which must be accessed using\n$arg1\n,$arg2\n,$arg3\n:$arg1\n:(const char *)\nfilename, accessible usinguser_string($arg1)\n$arg2\n:(const char *)\nfunction name, accessible usinguser_string($arg2)\n$arg3\n:int\nline number\n- function__return(str filename, str funcname, int lineno)\nThis marker is the converse of\nfunction__entry()\n, and indicates that execution of a Python function has ended (either viareturn\n, or via an exception). It is only triggered for pure-Python (bytecode) functions.The arguments are the same as for\nfunction__entry()\n- line(str filename, str funcname, int lineno)\nThis marker indicates a Python line is about to be executed. It is the equivalent of line-by-line tracing with a Python profiler. 
It is not triggered within C functions.\nThe arguments are the same as for\nfunction__entry()\n.\n- gc__start(int generation)\nFires when the Python interpreter starts a garbage collection cycle.\narg0\nis the generation to scan, like gc.collect()\n.\n- gc__done(long collected)\nFires when the Python interpreter finishes a garbage collection cycle.\narg0\nis the number of collected objects.\n- import__find__load__start(str modulename)\nFires before\nimportlib\nattempts to find and load the module. arg0\nis the module name. Added in version 3.7.\n- import__find__load__done(str modulename, int found)\nFires after\nimportlib\n\u2019s find_and_load function is called. arg0\nis the module name, arg1\nindicates if module was successfully loaded. Added in version 3.7.\n- audit(str event, void *tuple)\nFires when\nsys.audit()\nor PySys_Audit()\nis called. arg0\nis the event name as C string, arg1\nis a PyObject\npointer to a tuple object. Added in version 3.8.\nSystemTap Tapsets\u00b6\nThe higher-level way to use the SystemTap integration is to use a \u201ctapset\u201d: SystemTap\u2019s equivalent of a library, which hides some of the lower-level details of the static markers.\nHere is a tapset file, based on a non-shared build of CPython:\n/*\nProvide a higher-level wrapping around the function__entry and\nfunction__return markers:\n*/\nprobe python.function.entry = process(\"python\").mark(\"function__entry\")\n{\nfilename = user_string($arg1);\nfuncname = user_string($arg2);\nlineno = $arg3;\nframeptr = $arg4\n}\nprobe python.function.return = process(\"python\").mark(\"function__return\")\n{\nfilename = user_string($arg1);\nfuncname = user_string($arg2);\nlineno = $arg3;\nframeptr = $arg4\n}\nIf this file is installed in SystemTap\u2019s tapset directory (e.g.\n/usr/share/systemtap/tapset\n), then these additional probepoints become\navailable:\n- python.function.entry(str filename, str funcname, int lineno, frameptr)\nThis probe point indicates that execution of a Python
function has begun. It is only triggered for pure-Python (bytecode) functions.\n- python.function.return(str filename, str funcname, int lineno, frameptr)\nThis probe point is the converse of\npython.function.entry\n, and indicates that execution of a Python function has ended (either via return\n, or via an exception). It is only triggered for pure-Python (bytecode) functions.\nExamples\u00b6\nThis SystemTap script uses the tapset above to more cleanly implement the example given above of tracing the Python function-call hierarchy, without needing to directly name the static markers:\nprobe python.function.entry\n{\nprintf(\"%s => %s in %s:%d\\n\",\nthread_indent(1), funcname, filename, lineno);\n}\nprobe python.function.return\n{\nprintf(\"%s <= %s in %s:%d\\n\",\nthread_indent(-1), funcname, filename, lineno);\n}\nThe following script uses the tapset above to provide a top-like view of all running CPython code, showing the top 20 most frequently entered bytecode frames, each second, across the whole system:\nglobal fn_calls;\nprobe python.function.entry\n{\nfn_calls[pid(), filename, funcname, lineno] += 1;\n}\nprobe timer.ms(1000) {\nprintf(\"\\033[2J\\033[1;1H\") /* clear screen */\nprintf(\"%6s %80s %6s %30s %6s\\n\",\n\"PID\", \"FILENAME\", \"LINE\", \"FUNCTION\", \"CALLS\")\nforeach ([pid, filename, funcname, lineno] in fn_calls- limit 20) {\nprintf(\"%6d %80s %6d %30s %6d\\n\",\npid, filename, lineno, funcname,\nfn_calls[pid, filename, funcname, lineno]);\n}\ndelete fn_calls;\n}", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3070} +{"url": "https://docs.python.org/3/library/asyncio-platforms.html", "title": "Platform Support", "content": "Platform Support\u00b6\nThe asyncio\nmodule is designed to be portable,\nbut some platforms have subtle differences and limitations\ndue to the platforms\u2019 underlying architecture and capabilities.\nAll Platforms\u00b6\nloop.add_reader()\nand loop.add_writer()\ncannot be used to
monitor file I/O.\nWindows\u00b6\nSource code: Lib/asyncio/proactor_events.py, Lib/asyncio/windows_events.py, Lib/asyncio/windows_utils.py\nChanged in version 3.8: On Windows, ProactorEventLoop\nis now the default event loop.\nAll event loops on Windows do not support the following methods:\nloop.create_unix_connection()\nandloop.create_unix_server()\nare not supported. Thesocket.AF_UNIX\nsocket family is specific to Unix.loop.add_signal_handler()\nandloop.remove_signal_handler()\nare not supported.\nSelectorEventLoop\nhas the following limitations:\nSelectSelector\nis used to wait on socket events: it supports sockets and is limited to 512 sockets.loop.add_reader()\nandloop.add_writer()\nonly accept socket handles (e.g. pipe file descriptors are not supported).Pipes are not supported, so the\nloop.connect_read_pipe()\nandloop.connect_write_pipe()\nmethods are not implemented.Subprocesses are not supported, i.e.\nloop.subprocess_exec()\nandloop.subprocess_shell()\nmethods are not implemented.\nProactorEventLoop\nhas the following limitations:\nThe\nloop.add_reader()\nandloop.add_writer()\nmethods are not supported.\nThe resolution of the monotonic clock on Windows is usually around 15.6 milliseconds. The best resolution is 0.5 milliseconds. The resolution depends on the hardware (availability of HPET) and on the Windows configuration.\nSubprocess Support on Windows\u00b6\nOn Windows, the default event loop ProactorEventLoop\nsupports\nsubprocesses, whereas SelectorEventLoop\ndoes not.\nmacOS\u00b6\nModern macOS versions are fully supported.\nmacOS <= 10.8\nOn macOS 10.6, 10.7 and 10.8, the default event loop\nuses selectors.KqueueSelector\n, which does not support\ncharacter devices on these versions. The SelectorEventLoop\ncan be manually configured to use SelectSelector\nor PollSelector\nto support character devices on\nthese older versions of macOS. 
Example:\nimport asyncio\nimport selectors\nselector = selectors.SelectSelector()\nloop = asyncio.SelectorEventLoop(selector)\nasyncio.set_event_loop(loop)", "code_snippets": ["\n", "\n\n", " ", " ", "\n", " ", " ", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 558} +{"url": "https://docs.python.org/3/reference/executionmodel.html", "title": "Execution model", "content": "4. Execution model\u00b6\n4.1. Structure of a program\u00b6\nA Python program is constructed from code blocks.\nA block is a piece of Python program text that is executed as a unit.\nThe following are blocks: a module, a function body, and a class definition.\nEach command typed interactively is a block. A script file (a file given as\nstandard input to the interpreter or specified as a command line argument to the\ninterpreter) is a code block. A script command (a command specified on the\ninterpreter command line with the -c\noption) is a code block.\nA module run as a top level script (as module __main__\n) from the command\nline using a -m\nargument is also a code block. The string\nargument passed to the built-in functions eval()\nand exec()\nis a\ncode block.\nA code block is executed in an execution frame. A frame contains some administrative information (used for debugging) and determines where and how execution continues after the code block\u2019s execution has completed.\n4.2. Naming and binding\u00b6\n4.2.1. Binding of names\u00b6\nNames refer to objects. Names are introduced by name binding operations.\nThe following constructs bind names:\nformal parameters to functions,\nclass definitions,\nfunction definitions,\nassignment expressions,\ntargets that are identifiers if occurring in an assignment:\nimport\nstatements.type\nstatements.\nThe import\nstatement of the form from ... 
import *\nbinds all\nnames defined in the imported module, except those beginning with an underscore.\nThis form may only be used at the module level.\nA target occurring in a del\nstatement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name).\nEach assignment or import statement occurs within a block defined by a class or function definition or at the module level (the top-level code block).\nIf a name is bound in a block, it is a local variable of that block, unless\ndeclared as nonlocal\nor global\n. If a name is bound at\nthe module level, it is a global variable. (The variables of the module code\nblock are local and global.) If a variable is used in a code block but not\ndefined there, it is a free variable.\nEach occurrence of a name in the program text refers to the binding of that name established by the following name resolution rules.\n4.2.2. Resolution of names\u00b6\nA scope defines the visibility of a name within a block. If a local variable is defined in a block, its scope includes that block. If the definition occurs in a function block, the scope extends to any blocks contained within the defining one, unless a contained block introduces a different binding for the name.\nWhen a name is used in a code block, it is resolved using the nearest enclosing scope. The set of all such scopes visible to a code block is called the block\u2019s environment.\nWhen a name is not found at all, a NameError\nexception is raised.\nIf the current scope is a function scope, and the name refers to a local\nvariable that has not yet been bound to a value at the point where the name is\nused, an UnboundLocalError\nexception is raised.\nUnboundLocalError\nis a subclass of NameError\n.\nIf a name binding operation occurs anywhere within a code block, all uses of the name within the block are treated as references to the current block. This can lead to errors when a name is used within a block before it is bound. 
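The binding rule just described is what produces UnboundLocalError; a minimal sketch (function and variable names are illustrative):

```python
x = 10  # module-level binding

def f():
    # Because 'x' is assigned somewhere in this block, it is treated as a
    # local variable for the *entire* block; this read happens before the
    # local binding is created, so it fails even though a global 'x' exists.
    try:
        print(x)
    except UnboundLocalError as exc:
        caught = type(exc).__name__
    x = 20
    return caught

print(f())  # UnboundLocalError
```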
This rule is subtle. Python lacks declarations and allows name binding operations to occur anywhere within a code block. The local variables of a code block can be determined by scanning the entire text of the block for name binding operations. See the FAQ entry on UnboundLocalError for examples.\nIf the global\nstatement occurs within a block, all uses of the names\nspecified in the statement refer to the bindings of those names in the top-level\nnamespace. Names are resolved in the top-level namespace by searching the\nglobal namespace, i.e. the namespace of the module containing the code block,\nand the builtins namespace, the namespace of the module builtins\n. The\nglobal namespace is searched first. If the names are not found there, the\nbuiltins namespace is searched next. If the names are also not found in the\nbuiltins namespace, new variables are created in the global namespace.\nThe global statement must precede all uses of the listed names.\nThe global\nstatement has the same scope as a name binding operation\nin the same block. If the nearest enclosing scope for a free variable contains\na global statement, the free variable is treated as a global.\nThe nonlocal\nstatement causes corresponding names to refer\nto previously bound variables in the nearest enclosing function scope.\nSyntaxError\nis raised at compile time if the given name does not\nexist in any enclosing function scope. Type parameters\ncannot be rebound with the nonlocal\nstatement.\nThe namespace for a module is automatically created the first time a module is\nimported. 
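The global and nonlocal statements described above can be sketched as follows (names are illustrative):

```python
counter = 0

def bump():
    global counter      # rebind the module-level name, not a local
    counter += 1

def make_counter():
    n = 0
    def inc():
        nonlocal n      # rebind 'n' in the nearest enclosing function scope
        n += 1
        return n
    return inc

bump()
bump()
inc = make_counter()
inc()
print(counter, inc())  # 2 2
```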
The main module for a script is always called __main__\n.\nClass definition blocks and arguments to exec()\nand eval()\nare\nspecial in the context of name resolution.\nA class definition is an executable statement that may use and define names.\nThese references follow the normal rules for name resolution with an exception\nthat unbound local variables are looked up in the global namespace.\nThe namespace of the class definition becomes the attribute dictionary of\nthe class. The scope of names defined in a class block is limited to the\nclass block; it does not extend to the code blocks of methods. This includes\ncomprehensions and generator expressions, but it does not include\nannotation scopes,\nwhich have access to their enclosing class scopes.\nThis means that the following will fail:\nclass A:\n    a = 42\n    b = list(a + i for i in range(10))\nHowever, the following will succeed:\nclass A:\n    type Alias = Nested\n    class Nested: pass\nprint(A.Alias.__value__) # \n4.2.3. Annotation scopes\u00b6\nAnnotations, type parameter lists\nand type\nstatements\nintroduce annotation scopes, which behave mostly like function scopes,\nbut with some exceptions discussed below.\nAnnotation scopes are used in the following contexts:\nType parameter lists for generic type aliases.\nType parameter lists for generic functions. A generic function\u2019s annotations are executed within the annotation scope, but its defaults and decorators are not.\nType parameter lists for generic classes. A generic class\u2019s base classes and keyword arguments are executed within the annotation scope, but its decorators are not.\nThe bounds, constraints, and default values for type parameters (lazily evaluated).\nThe value of type aliases (lazily evaluated).\nAnnotation scopes differ from function scopes in the following ways:\nAnnotation scopes have access to their enclosing class namespace. 
If an annotation scope is immediately within a class scope, or within another annotation scope that is immediately within a class scope, the code in the annotation scope can use names defined in the class scope as if it were executed directly within the class body. This contrasts with regular functions defined within classes, which cannot access names defined in the class scope.\nExpressions in annotation scopes cannot contain\nyield\n,yield from\n,await\n, or:=\nexpressions. (These expressions are allowed in other scopes contained within the annotation scope.)Names defined in annotation scopes cannot be rebound with\nnonlocal\nstatements in inner scopes. This includes only type parameters, as no other syntactic elements that can appear within annotation scopes can introduce new names.While annotation scopes have an internal name, that name is not reflected in the qualified name of objects defined within the scope. Instead, the\n__qualname__\nof such objects is as if the object were defined in the enclosing scope.\nAdded in version 3.12: Annotation scopes were introduced in Python 3.12 as part of PEP 695.\nChanged in version 3.13: Annotation scopes are also used for type parameter defaults, as introduced by PEP 696.\n4.2.4. Lazy evaluation\u00b6\nMost annotation scopes are lazily evaluated. This includes annotations,\nthe values of type aliases created through the type\nstatement, and\nthe bounds, constraints, and default values of type\nvariables created through the type parameter syntax.\nThis means that they are not evaluated when the type alias or type variable is\ncreated, or when the object carrying annotations is created. 
Instead, they\nare only evaluated when necessary, for example when the __value__\nattribute on a type alias is accessed.\nExample:\n>>> type Alias = 1/0\n>>> Alias.__value__\nTraceback (most recent call last):\n...\nZeroDivisionError: division by zero\n>>> def func[T: 1/0](): pass\n>>> T = func.__type_params__[0]\n>>> T.__bound__\nTraceback (most recent call last):\n...\nZeroDivisionError: division by zero\nHere the exception is raised only when the __value__\nattribute\nof the type alias or the __bound__\nattribute of the type variable\nis accessed.\nThis behavior is primarily useful for references to types that have not yet been defined when the type alias or type variable is created. For example, lazy evaluation enables creation of mutually recursive type aliases:\nfrom typing import Literal\ntype SimpleExpr = int | Parenthesized\ntype Parenthesized = tuple[Literal[\"(\"], Expr, Literal[\")\"]]\ntype Expr = SimpleExpr | tuple[SimpleExpr, Literal[\"+\", \"-\"], Expr]\nLazily evaluated values are evaluated in annotation scope, which means that names that appear inside the lazily evaluated value are looked up as if they were used in the immediately enclosing scope.\nAdded in version 3.12.\n4.2.5. Builtins and restricted execution\u00b6\nCPython implementation detail: Users should not touch __builtins__\n; it is strictly an implementation\ndetail. Users wanting to override values in the builtins namespace should\nimport\nthe builtins\nmodule and modify its\nattributes appropriately.\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name __builtins__\nin its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\u2019s dictionary is used). By default, when in the\n__main__\nmodule, __builtins__\nis the built-in module\nbuiltins\n; when in any other module, __builtins__\nis an\nalias for the dictionary of the builtins\nmodule itself.\n4.2.6. 
Interaction with dynamic features\u00b6\nName resolution of free variables occurs at runtime, not at compile time. This means that the following code will print 42:\ni = 10\ndef f():\n    print(i)\ni = 42\nf()\nThe eval()\nand exec()\nfunctions do not have access to the full\nenvironment for resolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the nearest\nenclosing namespace, but in the global namespace. [1] The exec()\nand\neval()\nfunctions have optional arguments to override the global and local\nnamespace. If only one namespace is specified, it is used for both.\n4.3. Exceptions\u00b6\nExceptions are a means of breaking out of the normal flow of control of a code block in order to handle errors or other exceptional conditions. An exception is raised at the point where the error is detected; it may be handled by the surrounding code block or by any code block that directly or indirectly invoked the code block where the error occurred.\nThe Python interpreter raises an exception when it detects a run-time error\n(such as division by zero). A Python program can also explicitly raise an\nexception with the raise\nstatement. Exception handlers are specified\nwith the try\n\u2026 except\nstatement. The finally\nclause of such a statement can be used to specify cleanup code which does not\nhandle the exception, but is executed whether an exception occurred or not in\nthe preceding code.\nPython uses the \u201ctermination\u201d model of error handling: an exception handler can find out what happened and continue execution at an outer level, but it cannot repair the cause of the error and retry the failing operation (except by re-entering the offending piece of code from the top).\nWhen an exception is not handled at all, the interpreter terminates execution of\nthe program, or returns to its interactive main loop. 
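A minimal sketch of the try/except/finally control flow described above:

```python
def run(divisor):
    events = []
    try:
        events.append("try")
        result = 1 / divisor
        events.append("ok")
    except ZeroDivisionError:
        events.append("handled")   # handler runs only when the error matches
        result = None
    finally:
        events.append("finally")   # runs whether or not an exception occurred
    return result, events

print(run(2))   # (0.5, ['try', 'ok', 'finally'])
print(run(0))   # (None, ['try', 'handled', 'finally'])
```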
In either case, it prints\na stack traceback, except when the exception is SystemExit\n.\nExceptions are identified by class instances. The except\nclause is\nselected depending on the class of the instance: it must reference the class of\nthe instance or a non-virtual base class thereof.\nThe instance can be received by the handler and can carry additional information\nabout the exceptional condition.\nNote\nException messages are not part of the Python API. Their contents may change from one version of Python to the next without warning and should not be relied on by code which will run under multiple versions of the interpreter.\nSee also the description of the try\nstatement in section The try statement\nand raise\nstatement in section The raise statement.\n4.4. Runtime Components\u00b6\n4.4.1. General Computing Model\u00b6\nPython\u2019s execution model does not operate in a vacuum. It runs on a host machine and through that host\u2019s runtime environment, including its operating system (OS), if there is one. When a program runs, the conceptual layers of how it runs on the host look something like this:\nhost machineprocess (global resources)thread (runs machine code)\nEach process represents a program running on the host. Think of each process itself as the data part of its program. Think of the process\u2019 threads as the execution part of the program. This distinction will be important to understand the conceptual Python runtime.\nThe process, as the data part, is the execution context in which the program runs. It mostly consists of the set of resources assigned to the program by the host, including memory, signals, file handles, sockets, and environment variables.\nProcesses are isolated and independent from one another. (The same is true for hosts.) 
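Process isolation can be observed with a quick sketch: a child interpreter gets its own copy of state, so mutations there never reach the parent (names here are illustrative):

```python
import subprocess
import sys

VALUE = "parent"

# Run a child Python process that binds its own VALUE; the parent's module
# state is untouched because processes do not share memory.
child = subprocess.run(
    [sys.executable, "-c", "VALUE = 'child'; print(VALUE)"],
    capture_output=True, text=True,
)
print(child.stdout.strip(), VALUE)  # child parent
```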
The host manages the process\u2019 access to its assigned resources, in addition to coordinating between processes.\nEach thread represents the actual execution of the program\u2019s machine code, running relative to the resources assigned to the program\u2019s process. It\u2019s strictly up to the host how and when that execution takes place.\nFrom the point of view of Python, a program always starts with exactly one thread. However, the program may grow to run in multiple simultaneous threads. Not all hosts support multiple threads per process, but most do. Unlike processes, threads in a process are not isolated and independent from one another. Specifically, all threads in a process share all of the process\u2019 resources.\nThe fundamental point of threads is that each one does run independently, at the same time as the others. That may be only conceptually at the same time (\u201cconcurrently\u201d) or physically (\u201cin parallel\u201d). Either way, the threads effectively run at a non-synchronized rate.\nNote\nThat non-synchronized rate means none of the process\u2019 memory is guaranteed to stay consistent for the code running in any given thread. Thus multi-threaded programs must take care to coordinate access to intentionally shared resources. Likewise, they must take care to be absolutely diligent about not accessing any other resources in multiple threads; otherwise two threads running at the same time might accidentally interfere with each other\u2019s use of some shared data. All this is true for both Python programs and the Python runtime.\nThe cost of this broad, unstructured requirement is the tradeoff for the kind of raw concurrency that threads provide. The alternative to the required discipline generally means dealing with non-deterministic bugs and data corruption.\n4.4.2. 
Python Runtime Model\u00b6\nThe same conceptual layers apply to each Python program, with some extra data layers specific to Python:\nhost machineprocess (global resources)Python global runtime (state)Python interpreter (state)thread (runs Python bytecode and \u201cC-API\u201d)Python thread state\nAt the conceptual level: when a Python program starts, it looks exactly like that diagram, with one of each. The runtime may grow to include multiple interpreters, and each interpreter may grow to include multiple thread states.\nNote\nA Python implementation won\u2019t necessarily implement the runtime\nlayers distinctly or even concretely. The only exception is places\nwhere distinct layers are directly specified or exposed to users,\nlike through the threading\nmodule.\nNote\nThe initial interpreter is typically called the \u201cmain\u201d interpreter. Some Python implementations, like CPython, assign special roles to the main interpreter.\nLikewise, the host thread where the runtime was initialized is known as the \u201cmain\u201d thread. It may be different from the process\u2019 initial thread, though they are often the same. In some cases \u201cmain thread\u201d may be even more specific and refer to the initial thread state. A Python runtime might assign specific responsibilities to the main thread, such as handling signals.\nAs a whole, the Python runtime consists of the global runtime state, interpreters, and thread states. The runtime ensures all that state stays consistent over its lifetime, particularly when used with multiple host threads.\nThe global runtime, at the conceptual level, is just a set of interpreters. While those interpreters are otherwise isolated and independent from one another, they may share some data or other resources. The runtime is responsible for managing these global resources safely. The actual nature and management of these resources is implementation-specific. 
Ultimately, the external utility of the global runtime is limited to managing interpreters.\nIn contrast, an \u201cinterpreter\u201d is conceptually what we would normally think of as the (full-featured) \u201cPython runtime\u201d. When machine code executing in a host thread interacts with the Python runtime, it calls into Python in the context of a specific interpreter.\nNote\nThe term \u201cinterpreter\u201d here is not the same as the \u201cbytecode interpreter\u201d, which is what regularly runs in threads, executing compiled Python code.\nIn an ideal world, \u201cPython runtime\u201d would refer to what we currently call \u201cinterpreter\u201d. However, it\u2019s been called \u201cinterpreter\u201d at least since introduced in 1997 (CPython:a027efa5b).\nEach interpreter completely encapsulates all of the non-process-global,\nnon-thread-specific state needed for the Python runtime to work.\nNotably, the interpreter\u2019s state persists between uses. It includes\nfundamental data like sys.modules\n. The runtime ensures\nmultiple threads using the same interpreter will safely\nshare it between them.\nA Python implementation may support using multiple interpreters at the\nsame time in the same process. They are independent and isolated from\none another. For example, each interpreter has its own\nsys.modules\n.\nFor thread-specific runtime state, each interpreter has a set of thread states, which it manages, in the same way the global runtime contains a set of interpreters. It can have thread states for as many host threads as it needs. It may even have multiple thread states for the same host thread, though that isn\u2019t as common.\nEach thread state, conceptually, has all the thread-specific runtime data an interpreter needs to operate in one host thread. The thread state includes the current raised exception and the thread\u2019s Python call stack. 
It may include other thread-specific resources.\nNote\nThe term \u201cPython thread\u201d can sometimes refer to a thread state, but\nnormally it means a thread created using the threading\nmodule.\nEach thread state, over its lifetime, is always tied to exactly one interpreter and exactly one host thread. It will only ever be used in that thread and with that interpreter.\nMultiple thread states may be tied to the same host thread, whether for different interpreters or even the same interpreter. However, for any given host thread, only one of the thread states tied to it can be used by the thread at a time.\nThread states are isolated and independent from one another and don\u2019t share any data, except for possibly sharing an interpreter and objects or other resources belonging to that interpreter.\nOnce a program is running, new Python threads can be created using the\nthreading\nmodule (on platforms and Python implementations that\nsupport threads). Additional processes can be created using the\nos\n, subprocess\n, and multiprocessing\nmodules.\nInterpreters can be created and used with the\ninterpreters\nmodule. 
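The coordination requirement for threads sharing process resources, described above, is usually met with synchronization primitives from the threading module; a sketch:

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:          # coordinate access to intentionally shared state
            counter += 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 -- deterministic because every increment is locked
```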
Coroutines (async) can\nbe run using asyncio\nin each interpreter, typically only\nin a single thread (often the main thread).\nFootnotes", "code_snippets": ["\n ", " ", " ", "\n ", " ", " ", " ", " ", " ", " ", " ", " ", "\n", "\n ", " ", " ", " ", "\n ", " ", "\n\n", " ", "\n", " ", "\n\n", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", "\n", "\n ", "\n", " ", " ", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 5120} +{"url": "https://docs.python.org/3/library/email.utils.html", "title": ": Miscellaneous utilities", "content": "email.utils\n: Miscellaneous utilities\u00b6\nSource code: Lib/email/utils.py\nThere are a couple of useful utilities provided in the email.utils\nmodule:\n- email.utils.localtime(dt=None)\u00b6\nReturn local time as an aware datetime object. If called without arguments, return current time. Otherwise dt argument should be a\ndatetime\ninstance, and it is converted to the local time zone according to the system time zone database. If dt is naive (that is,dt.tzinfo\nisNone\n), it is assumed to be in local time.Added in version 3.3.\nDeprecated since version 3.12, removed in version 3.14: The isdst parameter.\n- email.utils.make_msgid(idstring=None, domain=None)\u00b6\nReturns a string suitable for an RFC 2822-compliant Message-ID header. Optional idstring if given, is a string used to strengthen the uniqueness of the message id. Optional domain if given provides the portion of the msgid after the \u2018@\u2019. The default is the local hostname. It is not normally necessary to override this default, but may be useful certain cases, such as a constructing distributed system that uses a consistent domain name across multiple hosts.\nChanged in version 3.2: Added the domain keyword.\nThe remaining functions are part of the legacy (Compat32\n) email API. 
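A quick sketch of the localtime() and make_msgid() helpers above (the idstring and domain values are illustrative):

```python
from email.utils import localtime, make_msgid

now = localtime()                 # aware datetime in the system's local zone
print(now.tzinfo is not None)     # True

msgid = make_msgid(idstring="demo", domain="example.org")
print(msgid)                      # e.g. <....demo@example.org>
```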
There\nis no need to directly use these with the new API, since the parsing and\nformatting they provide is done automatically by the header parsing machinery\nof the new API.\n- email.utils.quote(str)\u00b6\nReturn a new string with backslashes in str replaced by two backslashes, and double quotes replaced by backslash-double quote.\n- email.utils.unquote(str)\u00b6\nReturn a new string which is an unquoted version of str. If str ends and begins with double quotes, they are stripped off. Likewise if str ends and begins with angle brackets, they are stripped off.\n- email.utils.parseaddr(address, *, strict=True)\u00b6\nParse address \u2013 which should be the value of some address-containing field such as To or Cc \u2013 into its constituent realname and email address parts. Returns a tuple of that information, unless the parse fails, in which case a 2-tuple of\n('', '')\nis returned.If strict is true, use a strict parser which rejects malformed inputs.\nChanged in version 3.13: Add strict optional parameter and reject malformed inputs by default.\n- email.utils.formataddr(pair, charset='utf-8')\u00b6\nThe inverse of\nparseaddr()\n, this takes a 2-tuple of the form(realname, email_address)\nand returns the string value suitable for a To or Cc header. If the first element of pair is false, then the second element is returned unmodified.Optional charset is the character set that will be used in the RFC 2047 encoding of the\nrealname\nif therealname\ncontains non-ASCII characters. Can be an instance ofstr\nor aCharset\n. Defaults toutf-8\n.Changed in version 3.3: Added the charset option.\n- email.utils.getaddresses(fieldvalues, *, strict=True)\u00b6\nThis method returns a list of 2-tuples of the form returned by\nparseaddr()\n. 
fieldvalues is a sequence of header field values as might be returned by Message.get_all\n. If strict is true, use a strict parser which rejects malformed inputs.\nHere\u2019s a simple example that gets all the recipients of a message:\nfrom email.utils import getaddresses\ntos = msg.get_all('to', [])\nccs = msg.get_all('cc', [])\nresent_tos = msg.get_all('resent-to', [])\nresent_ccs = msg.get_all('resent-cc', [])\nall_recipients = getaddresses(tos + ccs + resent_tos + resent_ccs)\nChanged in version 3.13: Add strict optional parameter and reject malformed inputs by default.\n- email.utils.parsedate(date)\u00b6\nAttempts to parse a date according to the rules in RFC 2822. However, some mailers don\u2019t follow that format as specified, so\nparsedate()\ntries to guess correctly in such cases. date is a string containing an RFC 2822 date, such as \"Mon, 20 Nov 1995 19:12:08 -0500\"\n. If it succeeds in parsing the date, parsedate()\nreturns a 9-tuple that can be passed directly to time.mktime()\n; otherwise None\nwill be returned. Note that indexes 6, 7, and 8 of the result tuple are not usable.\n- email.utils.parsedate_tz(date)\u00b6\nPerforms the same function as\nparsedate()\n, but returns either None\nor a 10-tuple; the first 9 elements make up a tuple that can be passed directly to time.mktime()\n, and the tenth is the offset of the date\u2019s timezone from UTC (which is the official term for Greenwich Mean Time) [1]. If the input string has no timezone, the last element of the tuple returned is 0\n, which represents UTC. Note that indexes 6, 7, and 8 of the result tuple are not usable.\n- email.utils.parsedate_to_datetime(date)\u00b6\nThe inverse of\nformat_datetime()\n. Performs the same function as parsedate()\n, but on success returns a datetime\n; otherwise ValueError\nis raised if date contains an invalid value such as an hour greater than 23 or a timezone offset not between -24 and 24 hours. 
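The address and date helpers can be exercised like this (a quick sketch; the sample address is illustrative):

```python
from email.utils import parseaddr, formataddr, parsedate_tz, mktime_tz

# Round-trip a realname/address pair.
name, addr = parseaddr("Jane Doe <jane@example.org>")
print((name, addr))               # ('Jane Doe', 'jane@example.org')
print(formataddr((name, addr)))   # Jane Doe <jane@example.org>

# Parse an RFC 2822 date; the 10th element is the UTC offset in seconds.
t = parsedate_tz("Mon, 20 Nov 1995 19:12:08 -0500")
print(t[:6], t[9])                # (1995, 11, 20, 19, 12, 8) -18000
print(mktime_tz(t))               # seconds since the Epoch, in UTC
```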
If the input date has a timezone of-0000\n, thedatetime\nwill be a naivedatetime\n, and if the date is conforming to the RFCs it will represent a time in UTC but with no indication of the actual source timezone of the message the date comes from. If the input date has any other valid timezone offset, thedatetime\nwill be an awaredatetime\nwith the corresponding atimezone\ntzinfo\n.Added in version 3.3.\n- email.utils.mktime_tz(tuple)\u00b6\nTurn a 10-tuple as returned by\nparsedate_tz()\ninto a UTC timestamp (seconds since the Epoch). If the timezone item in the tuple isNone\n, assume local time.\n- email.utils.formatdate(timeval=None, localtime=False, usegmt=False)\u00b6\nReturns a date string as per RFC 2822, e.g.:\nFri, 09 Nov 2001 01:08:47 -0000\nOptional timeval if given is a floating-point time value as accepted by\ntime.gmtime()\nandtime.localtime()\n, otherwise the current time is used.Optional localtime is a flag that when\nTrue\n, interprets timeval, and returns a date relative to the local timezone instead of UTC, properly taking daylight savings time into account. The default isFalse\nmeaning UTC is used.Optional usegmt is a flag that when\nTrue\n, outputs a date string with the timezone as an ascii stringGMT\n, rather than a numeric-0000\n. This is needed for some protocols (such as HTTP). This only applies when localtime isFalse\n. The default isFalse\n.\n- email.utils.format_datetime(dt, usegmt=False)\u00b6\nLike\nformatdate\n, but the input is adatetime\ninstance. If it is a naive datetime, it is assumed to be \u201cUTC with no information about the source timezone\u201d, and the conventional-0000\nis used for the timezone. If it is an awaredatetime\n, then the numeric timezone offset is used. If it is an aware timezone with offset zero, then usegmt may be set toTrue\n, in which case the stringGMT\nis used instead of the numeric timezone offset. 
This provides a way to generate standards conformant HTTP date headers.Added in version 3.3.\n- email.utils.encode_rfc2231(s, charset=None, language=None)\u00b6\nEncode the string s according to RFC 2231. Optional charset and language, if given is the character set name and language name to use. If neither is given, s is returned as-is. If charset is given but language is not, the string is encoded using the empty string for language.\n- email.utils.collapse_rfc2231_value(value, errors='replace', fallback_charset='us-ascii')\u00b6\nWhen a header parameter is encoded in RFC 2231 format,\nMessage.get_param\nmay return a 3-tuple containing the character set, language, and value.collapse_rfc2231_value()\nturns this into a unicode string. Optional errors is passed to the errors argument ofstr\n\u2019sencode()\nmethod; it defaults to'replace'\n. Optional fallback_charset specifies the character set to use if the one in the RFC 2231 header is not known by Python; it defaults to'us-ascii'\n.For convenience, if the value passed to\ncollapse_rfc2231_value()\nis not a tuple, it should be a string and it is returned unquoted.\n- email.utils.decode_params(params)\u00b6\nDecode parameters list according to RFC 2231. params is a sequence of 2-tuples containing elements of the form\n(content-type, string-value)\n.\nFootnotes", "code_snippets": [" ", "\n\n", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 1977} +{"url": "https://docs.python.org/3/c-api/code.html", "title": "Code Objects", "content": "Code Objects\u00b6\nCode objects are a low-level detail of the CPython implementation. Each one represents a chunk of executable code that hasn\u2019t yet been bound into a function.\n-\ntype PyCodeObject\u00b6\nThe C structure of the objects used to describe code objects. 
The fields of this type are subject to change at any time.\n-\nPyTypeObject PyCode_Type\u00b6\nThis is an instance of\nPyTypeObject\nrepresenting the Python code object.\n-\nint PyCode_Check(PyObject *co)\u00b6\nReturn true if co is a code object. This function always succeeds.\n-\nPy_ssize_t PyCode_GetNumFree(PyCodeObject *co)\u00b6\nReturn the number of free (closure) variables in a code object.\n-\nint PyUnstable_Code_GetFirstFree(PyCodeObject *co)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nReturn the position of the first free (closure) variable in a code object.\nChanged in version 3.13: Renamed from\nPyCode_GetFirstFree\nas part of Unstable C API. The old name is deprecated, but will remain available until the signature changes again.\n-\nPyCodeObject *PyUnstable_Code_New(int argcount, int kwonlyargcount, int nlocals, int stacksize, int flags, PyObject *code, PyObject *consts, PyObject *names, PyObject *varnames, PyObject *freevars, PyObject *cellvars, PyObject *filename, PyObject *name, PyObject *qualname, int firstlineno, PyObject *linetable, PyObject *exceptiontable)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nReturn a new code object. If you need a dummy code object to create a frame, use\nPyCode_NewEmpty()\ninstead.Since the definition of the bytecode changes often, calling\nPyUnstable_Code_New()\ndirectly can bind you to a precise Python version.The many arguments of this function are inter-dependent in complex ways, meaning that subtle changes to values are likely to result in incorrect execution or VM crashes. Use this function only with extreme care.\nChanged in version 3.11: Added\nqualname\nandexceptiontable\nparameters.Changed in version 3.12: Renamed from\nPyCode_New\nas part of Unstable C API. 
The old name is deprecated, but will remain available until the signature changes again.\n-\nPyCodeObject *PyUnstable_Code_NewWithPosOnlyArgs(int argcount, int posonlyargcount, int kwonlyargcount, int nlocals, int stacksize, int flags, PyObject *code, PyObject *consts, PyObject *names, PyObject *varnames, PyObject *freevars, PyObject *cellvars, PyObject *filename, PyObject *name, PyObject *qualname, int firstlineno, PyObject *linetable, PyObject *exceptiontable)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nSimilar to\nPyUnstable_Code_New()\n, but with an extra \u201cposonlyargcount\u201d for positional-only arguments. The same caveats that apply toPyUnstable_Code_New\nalso apply to this function.Added in version 3.8: as\nPyCode_NewWithPosOnlyArgs\nChanged in version 3.11: Added\nqualname\nandexceptiontable\nparameters.Changed in version 3.12: Renamed to\nPyUnstable_Code_NewWithPosOnlyArgs\n. The old name is deprecated, but will remain available until the signature changes again.\n-\nPyCodeObject *PyCode_NewEmpty(const char *filename, const char *funcname, int firstlineno)\u00b6\n- Return value: New reference.\nReturn a new empty code object with the specified filename, function name, and first line number. The resulting code object will raise an\nException\nif executed.\n-\nint PyCode_Addr2Line(PyCodeObject *co, int byte_offset)\u00b6\nReturn the line number of the instruction that occurs on or before\nbyte_offset\nand ends after it. If you just need the line number of a frame, usePyFrame_GetLineNumber()\ninstead.For efficiently iterating over the line numbers in a code object, use the API described in PEP 626.\n-\nint PyCode_Addr2Location(PyObject *co, int byte_offset, int *start_line, int *start_column, int *end_line, int *end_column)\u00b6\nSets the passed\nint\npointers to the source code line and column numbers for the instruction atbyte_offset\n. 
Sets the value to 0 when information is not available for any particular element.Returns\n1\nif the function succeeds and 0 otherwise.Added in version 3.11.\n-\nPyObject *PyCode_GetCode(PyCodeObject *co)\u00b6\nEquivalent to the Python code\ngetattr(co, 'co_code')\n. Returns a strong reference to a\nPyBytesObject\nrepresenting the bytecode in a code object. On error,\nNULL\nis returned and an exception is raised.This\nPyBytesObject\nmay be created on-demand by the interpreter and does not necessarily represent the bytecode actually executed by CPython. The primary use case for this function is debuggers and profilers.Added in version 3.11.\n-\nPyObject *PyCode_GetVarnames(PyCodeObject *co)\u00b6\nEquivalent to the Python code\ngetattr(co, 'co_varnames')\n. Returns a new reference to a\nPyTupleObject\ncontaining the names of the local variables. On error,\nNULL\nis returned and an exception is raised.Added in version 3.11.\n-\nPyObject *PyCode_GetCellvars(PyCodeObject *co)\u00b6\nEquivalent to the Python code\ngetattr(co, 'co_cellvars')\n. Returns a new reference to a\nPyTupleObject\ncontaining the names of the local variables that are referenced by nested functions. On error,\nNULL\nis returned and an exception is raised.Added in version 3.11.\n-\nPyObject *PyCode_GetFreevars(PyCodeObject *co)\u00b6\nEquivalent to the Python code\ngetattr(co, 'co_freevars')\n. Returns a new reference to a\nPyTupleObject\ncontaining the names of the free (closure) variables. On error,\nNULL\nis returned and an exception is raised.Added in version 3.11.\n-\nint PyCode_AddWatcher(PyCode_WatchCallback callback)\u00b6\nRegister callback as a code object watcher for the current interpreter. Return an ID which may be passed to\nPyCode_ClearWatcher()\n. In case of error (e.g. 
no more watcher IDs available), return -1 and set an exception.Added in version 3.12.\n-\nint PyCode_ClearWatcher(int watcher_id)\u00b6\nClear watcher identified by watcher_id previously returned from\nPyCode_AddWatcher()\nfor the current interpreter. Return 0 on success, or -1 and set an exception on error (e.g. if the given watcher_id was never registered).Added in version 3.12.\n-\ntype PyCodeEvent\u00b6\nEnumeration of possible code object watcher events:\n- PY_CODE_EVENT_CREATE\n- PY_CODE_EVENT_DESTROY\nAdded in version 3.12.\n-\ntypedef int (*PyCode_WatchCallback)(PyCodeEvent event, PyCodeObject *co)\u00b6\nType of a code object watcher callback function.\nIf event is\nPY_CODE_EVENT_CREATE\n, then the callback is invoked after co has been fully initialized. Otherwise, the callback is invoked before the destruction of co takes place, so the prior state of co can be inspected.If event is\nPY_CODE_EVENT_DESTROY\n, taking a reference in the callback to the about-to-be-destroyed code object will resurrect it and prevent it from being freed at this time. When the resurrected object is destroyed later, any watcher callbacks active at that time will be called again.Users of this API should not rely on internal runtime implementation details. Such details may include, but are not limited to, the exact order and timing of creation and destruction of code objects. While changes in these details may result in differences observable by watchers (including whether a callback is invoked or not), it does not change the semantics of the Python code being executed.\nIf the callback sets an exception, it must return -1; this exception will be printed as an unraisable exception using\nPyErr_WriteUnraisable()\n. Otherwise it should return 0.There may already be a pending exception set on entry to the callback. In this case, the callback should return 0 with the same exception still set. 
This means the callback may not call any other API that can set an exception unless it saves and clears the exception state first, and restores it before returning.Added in version 3.12.\nCode Object Flags\u00b6\nCode objects contain a bit-field of flags, which can be retrieved as the\nco_flags\nPython attribute (for example using\nPyObject_GetAttrString()\n), and set using a flags argument to\nPyUnstable_Code_New()\nand similar functions.\nFlags whose names start with CO_FUTURE_\ncorrespond to features normally\nselectable by future statements. These flags can be used in\nPyCompilerFlags.cf_flags\n.\nNote that many CO_FUTURE_\nflags are mandatory in current versions of\nPython, and setting them has no effect.\nThe following flags are available. For their meaning, see the linked documentation of their Python equivalents.\n[Flag table not recovered: each row pairs a flag with the documentation of its Python equivalent; the flag names were lost in extraction, and six CO_FUTURE_* rows are marked as having no effect.]\nExtra information\u00b6\nTo support low-level extensions to frame evaluation, such as external just-in-time compilers, it is possible to attach arbitrary extra data to code objects.\nThese functions are part of the unstable C API tier: this functionality is a CPython implementation detail, and the API may change without deprecation warnings.\n-\nPy_ssize_t PyUnstable_Eval_RequestCodeExtraIndex(freefunc free)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nReturn a new opaque index value used to add data to code objects.\nYou generally call this function once (per interpreter) and use the result with\nPyCode_GetExtra\nand\nPyCode_SetExtra\nto manipulate data on individual code objects.If free is not\nNULL\n: when a code object is deallocated, free will be called on non-NULL\ndata stored under the new index. 
UsePy_DecRef()\nwhen storingPyObject\n.Added in version 3.6: as\n_PyEval_RequestCodeExtraIndex\nChanged in version 3.12: Renamed to\nPyUnstable_Eval_RequestCodeExtraIndex\n. The old private name is deprecated, but will be available until the API changes.\n-\nint PyUnstable_Code_GetExtra(PyObject *code, Py_ssize_t index, void **extra)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nSet extra to the extra data stored under the given index. Return 0 on success. Set an exception and return -1 on failure.\nIf no data was set under the index, set extra to\nNULL\nand return 0 without setting an exception.Added in version 3.6: as\n_PyCode_GetExtra\nChanged in version 3.12: Renamed to\nPyUnstable_Code_GetExtra\n. The old private name is deprecated, but will be available until the API changes.\n-\nint PyUnstable_Code_SetExtra(PyObject *code, Py_ssize_t index, void *extra)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nSet the extra data stored under the given index to extra. Return 0 on success. Set an exception and return -1 on failure.\nAdded in version 3.6: as\n_PyCode_SetExtra\nChanged in version 3.12: Renamed to\nPyUnstable_Code_SetExtra\n. The old private name is deprecated, but will be available until the API changes.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2629} +{"url": "https://docs.python.org/3/c-api/import.html", "title": "Importing Modules", "content": "Importing Modules\u00b6\n-\nPyObject *PyImport_ImportModule(const char *name)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nThis is a wrapper around\nPyImport_Import()\nwhich takes a const char* as an argument instead of a PyObject*.\n-\nPyObject *PyImport_ImportModuleNoBlock(const char *name)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nThis function is a deprecated alias of\nPyImport_ImportModule()\n.Changed in version 3.3: This function used to fail immediately when the import lock was held by another thread. In Python 3.3 though, the locking scheme switched to per-module locks for most purposes, so this function\u2019s special behaviour isn\u2019t needed anymore.\nDeprecated since version 3.13, will be removed in version 3.15: Use\nPyImport_ImportModule()\ninstead.\n-\nPyObject *PyImport_ImportModuleEx(const char *name, PyObject *globals, PyObject *locals, PyObject *fromlist)\u00b6\n- Return value: New reference.\nImport a module. This is best described by referring to the built-in Python function\n__import__()\n.The return value is a new reference to the imported module or top-level package, or\nNULL\nwith an exception set on failure. Like for__import__()\n, the return value when a submodule of a package was requested is normally the top-level package, unless a non-empty fromlist was given.Failing imports remove incomplete module objects, like with\nPyImport_ImportModule()\n.\n-\nPyObject *PyImport_ImportModuleLevelObject(PyObject *name, PyObject *globals, PyObject *locals, PyObject *fromlist, int level)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nImport a module. This is best described by referring to the built-in Python function\n__import__()\n, as the standard__import__()\nfunction calls this function directly.The return value is a new reference to the imported module or top-level package, or\nNULL\nwith an exception set on failure. Like for__import__()\n, the return value when a submodule of a package was requested is normally the top-level package, unless a non-empty fromlist was given.Added in version 3.3.\n-\nPyObject *PyImport_ImportModuleLevel(const char *name, PyObject *globals, PyObject *locals, PyObject *fromlist, int level)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nSimilar to\nPyImport_ImportModuleLevelObject()\n, but the name is a UTF-8 encoded string instead of a Unicode object.Changed in version 3.3: Negative values for level are no longer accepted.\n-\nPyObject *PyImport_Import(PyObject *name)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nThis is a higher-level interface that calls the current \u201cimport hook function\u201d (with an explicit level of 0, meaning absolute import). It invokes the\n__import__()\nfunction from the__builtins__\nof the current globals. This means that the import is done using whatever import hooks are installed in the current environment.This function always uses absolute imports.\n-\nPyObject *PyImport_ReloadModule(PyObject *m)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReload a module. Return a new reference to the reloaded module, or\nNULL\nwith an exception set on failure (the module still exists in this case).\n-\nPyObject *PyImport_AddModuleRef(const char *name)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.13.\nReturn the module object corresponding to a module name.\nThe name argument may be of the form\npackage.module\n. First check the modules dictionary if there\u2019s one there, and if not, create a new one and insert it in the modules dictionary.Return a strong reference to the module on success. Return\nNULL\nwith an exception set on failure.The module name name is decoded from UTF-8.\nThis function does not load or import the module; if the module wasn\u2019t already loaded, you will get an empty module object. Use\nPyImport_ImportModule()\nor one of its variants to import a module. Package structures implied by a dotted name for name are not created if not already present.Added in version 3.13.\n-\nPyObject *PyImport_AddModuleObject(PyObject *name)\u00b6\n- Return value: Borrowed reference. 
Part of the Stable ABI since version 3.7.\nSimilar to\nPyImport_AddModuleRef()\n, but return a borrowed reference and name is a Python\nstr\nobject.Added in version 3.3.\n-\nPyObject *PyImport_AddModule(const char *name)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nSimilar to\nPyImport_AddModuleRef()\n, but return a borrowed reference.\n-\nPyObject *PyImport_ExecCodeModule(const char *name, PyObject *co)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGiven a module name (possibly of the form\npackage.module\n) and a code object read from a Python bytecode file or obtained from the built-in function\ncompile()\n, load the module. Return a new reference to the module object, or\nNULL\nwith an exception set if an error occurred. name is removed from\nsys.modules\nin error cases, even if name was already in\nsys.modules\non entry to\nPyImport_ExecCodeModule()\n. Leaving incompletely initialized modules in\nsys.modules\nis dangerous, as imports of such modules have no way to know that the module object is in an unknown (and probably damaged with respect to the module author\u2019s intents) state.The module\u2019s\n__spec__\nand\n__loader__\nwill be set, if not set already, with the appropriate values. The spec\u2019s loader will be set to the module\u2019s\n__loader__\n(if set) and to an instance of\nSourceFileLoader\notherwise.The module\u2019s\n__file__\nattribute will be set to the code object\u2019s\nco_filename\n. If applicable,\n__cached__\nwill also be set.This function will reload the module if it was already imported. See\nPyImport_ReloadModule()\nfor the intended way to reload a module.If name points to a dotted name of the form\npackage.module\n, any package structures not already created will still not be created.See also\nPyImport_ExecCodeModuleEx()\nand\nPyImport_ExecCodeModuleWithPathnames()\n.Changed in version 3.12: The setting of\n__cached__\nand\n__loader__\nis deprecated. 
SeeModuleSpec\nfor alternatives.\n-\nPyObject *PyImport_ExecCodeModuleEx(const char *name, PyObject *co, const char *pathname)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nLike\nPyImport_ExecCodeModule()\n, but the__file__\nattribute of the module object is set to pathname if it is non-NULL\n.See also\nPyImport_ExecCodeModuleWithPathnames()\n.\n-\nPyObject *PyImport_ExecCodeModuleObject(PyObject *name, PyObject *co, PyObject *pathname, PyObject *cpathname)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nLike\nPyImport_ExecCodeModuleEx()\n, but the__cached__\nattribute of the module object is set to cpathname if it is non-NULL\n. Of the three functions, this is the preferred one to use.Added in version 3.3.\nChanged in version 3.12: Setting\n__cached__\nis deprecated. SeeModuleSpec\nfor alternatives.\n-\nPyObject *PyImport_ExecCodeModuleWithPathnames(const char *name, PyObject *co, const char *pathname, const char *cpathname)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nLike\nPyImport_ExecCodeModuleObject()\n, but name, pathname and cpathname are UTF-8 encoded strings. Attempts are also made to figure out what the value for pathname should be from cpathname if the former is set toNULL\n.Added in version 3.2.\nChanged in version 3.3: Uses\nimp.source_from_cache()\nin calculating the source path if only the bytecode path is provided.Changed in version 3.12: No longer uses the removed\nimp\nmodule.\n-\nlong PyImport_GetMagicNumber()\u00b6\n- Part of the Stable ABI.\nReturn the magic number for Python bytecode files (a.k.a.\n.pyc\nfile). The magic number should be present in the first four bytes of the bytecode file, in little-endian byte order. Returns-1\non error.Changed in version 3.3: Return value of\n-1\nupon failure.\n-\nconst char *PyImport_GetMagicTag()\u00b6\n- Part of the Stable ABI.\nReturn the magic tag string for PEP 3147 format Python bytecode file names. 
Keep in mind that the value at\nsys.implementation.cache_tag\nis authoritative and should be used instead of this function.Added in version 3.2.\n-\nPyObject *PyImport_GetModuleDict()\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nReturn the dictionary used for the module administration (a.k.a.\nsys.modules\n). Note that this is a per-interpreter variable.\n-\nPyObject *PyImport_GetModule(PyObject *name)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.8.\nReturn the already imported module with the given name. If the module has not been imported yet then returns\nNULL\nbut does not set an error. Returns\nNULL\nand sets an error if the lookup failed.Added in version 3.7.\n-\nPyObject *PyImport_GetImporter(PyObject *path)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a finder object for a\nsys.path\n/pkg.__path__\nitem path, possibly by fetching it from the\nsys.path_importer_cache\ndict. If it wasn\u2019t yet cached, traverse\nsys.path_hooks\nuntil a hook is found that can handle the path item. Return\nNone\nif no hook could; this tells our caller that the path based finder could not find a finder for this path item. Cache the result in\nsys.path_importer_cache\n. Return a new reference to the finder object.\n-\nint PyImport_ImportFrozenModuleObject(PyObject *name)\u00b6\n- Part of the Stable ABI since version 3.7.\nLoad a frozen module named name. Return\n1\nfor success,\n0\nif the module is not found, and\n-1\nwith an exception set if the initialization failed. To access the imported module on a successful load, use\nPyImport_ImportModule()\n. 
(Note the misnomer \u2014 this function would reload the module if it was already imported.)Added in version 3.3.\nChanged in version 3.4: The\n__file__\nattribute is no longer set on the module.\n-\nint PyImport_ImportFrozenModule(const char *name)\u00b6\n- Part of the Stable ABI.\nSimilar to\nPyImport_ImportFrozenModuleObject()\n, but the name is a UTF-8 encoded string instead of a Unicode object.\n-\nstruct _frozen\u00b6\nThis is the structure type definition for frozen module descriptors, as generated by the freeze utility (see\nTools/freeze/\nin the Python source distribution). Its definition, found inInclude/import.h\n, is:struct _frozen { const char *name; const unsigned char *code; int size; bool is_package; };\nChanged in version 3.11: The new\nis_package\nfield indicates whether the module is a package or not. This replaces setting thesize\nfield to a negative value.\n-\nconst struct _frozen *PyImport_FrozenModules\u00b6\nThis pointer is initialized to point to an array of\n_frozen\nrecords, terminated by one whose members are allNULL\nor zero. When a frozen module is imported, it is searched in this table. Third-party code could play tricks with this to provide a dynamically created collection of frozen modules.\n-\nint PyImport_AppendInittab(const char *name, PyObject *(*initfunc)(void))\u00b6\n- Part of the Stable ABI.\nAdd a single module to the existing table of built-in modules. This is a convenience wrapper around\nPyImport_ExtendInittab()\n, returning-1\nif the table could not be extended. The new module can be imported by the name name, and uses the function initfunc as the initialization function called on the first attempted import. This should be called beforePy_Initialize()\n.\n-\nstruct _inittab\u00b6\nStructure describing a single entry in the list of built-in modules. Programs which embed Python may use an array of these structures in conjunction with\nPyImport_ExtendInittab()\nto provide additional built-in modules. 
The structure consists of two members:\n-\nconst char *name\u00b6\nThe module name, as an ASCII encoded string.\n-\nPyObject *(*initfunc)(void)\u00b6\nThe initialization function for the module; it has the same signature as the initfunc argument of\nPyImport_AppendInittab()\n.\n-\nint PyImport_ExtendInittab(struct _inittab *newtab)\u00b6\nAdd a collection of modules to the table of built-in modules. The newtab array must end with a sentinel entry which contains\nNULL\nfor the\nname\nfield; failure to provide the sentinel value can result in a memory fault. Returns\n0\non success or\n-1\nif insufficient memory could be allocated to extend the internal table. In the event of failure, no modules are added to the internal table. This must be called before\nPy_Initialize()\n.If Python is initialized multiple times,\nPyImport_AppendInittab()\nor\nPyImport_ExtendInittab()\nmust be called before each Python initialization.\n-\nstruct _inittab *PyImport_Inittab\u00b6\nThe table of built-in modules used by Python initialization. Do not use this directly; use\nPyImport_AppendInittab()\nand\nPyImport_ExtendInittab()\ninstead.\n-\nPyObject *PyImport_ImportModuleAttr(PyObject *mod_name, PyObject *attr_name)\u00b6\n- Return value: New reference.\nImport the module mod_name and get its attribute attr_name.\nNames must be Python\nstr\nobjects.Helper function combining\nPyImport_Import()\nand\nPyObject_GetAttr()\n. 
For example, it can raiseImportError\nif the module is not found, andAttributeError\nif the attribute doesn\u2019t exist.Added in version 3.14.\n-\nPyObject *PyImport_ImportModuleAttrString(const char *mod_name, const char *attr_name)\u00b6\n- Return value: New reference.\nSimilar to\nPyImport_ImportModuleAttr()\n, but names are UTF-8 encoded strings instead of Pythonstr\nobjects.Added in version 3.14.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3214} +{"url": "https://docs.python.org/3/c-api/refcounting.html", "title": "Reference Counting", "content": "Reference Counting\u00b6\nThe functions and macros in this section are used for managing reference counts of Python objects.\n-\nPy_ssize_t Py_REFCNT(PyObject *o)\u00b6\n- Part of the Stable ABI since version 3.14.\nGet the reference count of the Python object o.\nNote that the returned value may not actually reflect how many references to the object are actually held. For example, some objects are immortal and have a very high refcount that does not reflect the actual number of references. Consequently, do not rely on the returned value to be accurate, other than a value of 0 or 1.\nUse the\nPy_SET_REFCNT()\nfunction to set an object reference count.Note\nOn free-threaded builds of Python, returning 1 isn\u2019t sufficient to determine if it\u2019s safe to treat o as having no access by other threads. 
Use\nPyUnstable_Object_IsUniquelyReferenced()\nfor that instead.See also the function\nPyUnstable_Object_IsUniqueReferencedTemporary()\n.Changed in version 3.10:\nPy_REFCNT()\nwas changed to an inline static function.Changed in version 3.11: The parameter type is no longer const PyObject*.\n-\nvoid Py_SET_REFCNT(PyObject *o, Py_ssize_t refcnt)\u00b6\nSet the object o reference counter to refcnt.\nOn free-threaded builds of Python, if refcnt is larger than\nUINT32_MAX\n, the object is made immortal.This function has no effect on immortal objects.\nAdded in version 3.9.\nChanged in version 3.12: Immortal objects are not modified.\n-\nvoid Py_INCREF(PyObject *o)\u00b6\nIndicate taking a new strong reference to object o, indicating it is in use and should not be destroyed.\nThis function has no effect on immortal objects.\nThis function is usually used to convert a borrowed reference to a strong reference in-place. The\nPy_NewRef()\nfunction can be used to create a new strong reference.When done using the object, release it by calling\nPy_DECREF()\n.The object must not be\nNULL\n; if you aren\u2019t sure that it isn\u2019t\nNULL\n, use\nPy_XINCREF()\n.Do not expect this function to actually modify o in any way. 
For at least some objects, this function has no effect.\nChanged in version 3.12: Immortal objects are not modified.\n-\nvoid Py_XINCREF(PyObject *o)\u00b6\nSimilar to\nPy_INCREF()\n, but the object o can beNULL\n, in which case this has no effect.See also\nPy_XNewRef()\n.\n-\nPyObject *Py_NewRef(PyObject *o)\u00b6\n- Part of the Stable ABI since version 3.10.\nCreate a new strong reference to an object: call\nPy_INCREF()\non o and return the object o.When the strong reference is no longer needed,\nPy_DECREF()\nshould be called on it to release the reference.The object o must not be\nNULL\n; usePy_XNewRef()\nif o can beNULL\n.For example:\nPy_INCREF(obj); self->attr = obj;\ncan be written as:\nself->attr = Py_NewRef(obj);\nSee also\nPy_INCREF()\n.Added in version 3.10.\n-\nPyObject *Py_XNewRef(PyObject *o)\u00b6\n- Part of the Stable ABI since version 3.10.\nSimilar to\nPy_NewRef()\n, but the object o can be NULL.If the object o is\nNULL\n, the function just returnsNULL\n.Added in version 3.10.\n-\nvoid Py_DECREF(PyObject *o)\u00b6\nRelease a strong reference to object o, indicating the reference is no longer used.\nThis function has no effect on immortal objects.\nOnce the last strong reference is released (i.e. the object\u2019s reference count reaches 0), the object\u2019s type\u2019s deallocation function (which must not be\nNULL\n) is invoked.This function is usually used to delete a strong reference before exiting its scope.\nThe object must not be\nNULL\n; if you aren\u2019t sure that it isn\u2019tNULL\n, usePy_XDECREF()\n.Do not expect this function to actually modify o in any way. For at least some objects, this function has no effect.\nWarning\nThe deallocation function can cause arbitrary Python code to be invoked (e.g. when a class instance with a\n__del__()\nmethod is deallocated). While exceptions in such code are not propagated, the executed code has free access to all Python global variables. 
This means that any object that is reachable from a global variable should be in a consistent state beforePy_DECREF()\nis invoked. For example, code to delete an object from a list should copy a reference to the deleted object in a temporary variable, update the list data structure, and then callPy_DECREF()\nfor the temporary variable.Changed in version 3.12: Immortal objects are not modified.\n-\nvoid Py_XDECREF(PyObject *o)\u00b6\nSimilar to\nPy_DECREF()\n, but the object o can beNULL\n, in which case this has no effect. The same warning fromPy_DECREF()\napplies here as well.\n-\nvoid Py_CLEAR(PyObject *o)\u00b6\nRelease a strong reference for object o. The object may be\nNULL\n, in which case the macro has no effect; otherwise the effect is the same as forPy_DECREF()\n, except that the argument is also set toNULL\n. The warning forPy_DECREF()\ndoes not apply with respect to the object passed because the macro carefully uses a temporary variable and sets the argument toNULL\nbefore releasing the reference.It is a good idea to use this macro whenever releasing a reference to an object that might be traversed during garbage collection.\nChanged in version 3.12: The macro argument is now only evaluated once. If the argument has side effects, these are no longer duplicated.\n-\nvoid Py_IncRef(PyObject *o)\u00b6\n- Part of the Stable ABI.\nIndicate taking a new strong reference to object o. A function version of\nPy_XINCREF()\n. It can be used for runtime dynamic embedding of Python.\n-\nvoid Py_DecRef(PyObject *o)\u00b6\n- Part of the Stable ABI.\nRelease a strong reference to object o. A function version of\nPy_XDECREF()\n. 
It can be used for runtime dynamic embedding of Python.\n-\nPy_SETREF(dst, src)\u00b6\nMacro safely releasing a strong reference to object dst and setting dst to src.\nAs in case of\nPy_CLEAR()\n, \u201cthe obvious\u201d code can be deadly:Py_DECREF(dst); dst = src;\nThe safe way is:\nPy_SETREF(dst, src);\nThat arranges to set dst to src before releasing the reference to the old value of dst, so that any code triggered as a side-effect of dst getting torn down no longer believes dst points to a valid object.\nAdded in version 3.6.\nChanged in version 3.12: The macro arguments are now only evaluated once. If an argument has side effects, these are no longer duplicated.\n-\nPy_XSETREF(dst, src)\u00b6\nVariant of\nPy_SETREF\nmacro that usesPy_XDECREF()\ninstead ofPy_DECREF()\n.Added in version 3.6.\nChanged in version 3.12: The macro arguments are now only evaluated once. If an argument has side effects, these are no longer duplicated.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1578} +{"url": "https://docs.python.org/3/library/asyncio-extending.html", "title": "Extending", "content": "Extending\u00b6\nThe main direction for asyncio\nextending is writing custom event loop\nclasses. Asyncio has helpers that could be used to simplify this task.\nNote\nThird-parties should reuse existing asyncio code with caution, a new Python version is free to break backward compatibility in internal part of API.\nWriting a Custom Event Loop\u00b6\nasyncio.AbstractEventLoop\ndeclares very many methods. 
Implementing all of them\nfrom scratch is a tedious job.\nA loop can get many common method implementations for free by inheriting from\nasyncio.BaseEventLoop\n.\nIn turn, the subclass should implement a number of private methods declared but not\nimplemented in asyncio.BaseEventLoop\n.\nFor example, loop.create_connection()\nchecks arguments, resolves DNS addresses, and\ncalls loop._make_socket_transport()\n, which should be implemented by the inheriting class.\nThe _make_socket_transport()\nmethod is not documented and is considered an\ninternal API.\nFuture and Task private constructors\u00b6\nasyncio.Future\nand asyncio.Task\nshould never be created directly;\nuse the corresponding loop.create_future()\nand loop.create_task()\n,\nor asyncio.create_task()\nfactories instead.\nHowever, third-party event loops may reuse the built-in future and task implementations for the sake of getting complex and highly optimized code for free.\nFor this purpose, the following private constructors are listed:\n- Future.__init__(*, loop=None)\u00b6\nCreate a built-in future instance.\nloop is an optional event loop instance.\n- Task.__init__(coro, *, loop=None, name=None, context=None)\u00b6\nCreate a built-in task instance.\nloop is an optional event loop instance. 
The remaining arguments are described in the\nloop.create_task()\ndocumentation.Changed in version 3.11: The context argument was added.\nTask lifetime support\u00b6\nA third party task implementation should call the following functions to keep a task\nvisible to asyncio.all_tasks()\nand asyncio.current_task()\n:\n- asyncio._register_task(task)\u00b6\nRegister a new task as managed by asyncio.\nCall the function from a task constructor.\n- asyncio._unregister_task(task)\u00b6\nUnregister a task from asyncio internal structures.\nThe function should be called when a task is about to finish.\n- asyncio._enter_task(loop, task)\u00b6\nSwitch the current task to the task argument.\nCall the function just before executing a portion of the embedded coroutine (\ncoroutine.send()\nor\ncoroutine.throw()\n).\n- asyncio._leave_task(loop, task)\u00b6\nSwitch the current task back from task to\nNone\n.Call the function just after\ncoroutine.send()\nor\ncoroutine.throw()\nexecution.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 630} +{"url": "https://docs.python.org/3/library/ossaudiodev.html", "title": " \u2014 Access to OSS-compatible audio devices", "content": "ossaudiodev\n\u2014 Access to OSS-compatible audio devices\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nThe last version of Python that provided the ossaudiodev\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 89} +{"url": "https://docs.python.org/3/c-api/typehints.html", "title": "Objects for Type Hinting", "content": "Objects for Type Hinting\u00b6\nVarious built-in types for type hinting are provided. Currently,\ntwo types exist \u2013 GenericAlias and\nUnion. 
Only GenericAlias\nis exposed to C.\n-\nPyObject *Py_GenericAlias(PyObject *origin, PyObject *args)\u00b6\n- Part of the Stable ABI since version 3.9.\nCreate a GenericAlias object. Equivalent to calling the Python class\ntypes.GenericAlias\n. The origin and args arguments set theGenericAlias\n\u2018s__origin__\nand__args__\nattributes respectively. origin should be a PyTypeObject*, and args can be a PyTupleObject* or anyPyObject*\n. If args passed is not a tuple, a 1-tuple is automatically constructed and__args__\nis set to(args,)\n. Minimal checking is done for the arguments, so the function will succeed even if origin is not a type. TheGenericAlias\n\u2018s__parameters__\nattribute is constructed lazily from__args__\n. On failure, an exception is raised andNULL\nis returned.Here\u2019s an example of how to make an extension type generic:\n... static PyMethodDef my_obj_methods[] = { // Other methods. ... {\"__class_getitem__\", Py_GenericAlias, METH_O|METH_CLASS, \"See PEP 585\"} ... }\nSee also\nThe data model method\n__class_getitem__()\n.Added in version 3.9.\n-\nPyTypeObject Py_GenericAliasType\u00b6\n- Part of the Stable ABI since version 3.9.\nThe C type of the object returned by\nPy_GenericAlias()\n. Equivalent totypes.GenericAlias\nin Python.Added in version 3.9.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 342} +{"url": "https://docs.python.org/3/installing/index.html", "title": "Installing Python Modules", "content": "Installing Python Modules\u00b6\n- Email:\nAs a popular open source development project, Python has an active supporting community of contributors and users that also make their software available for other Python developers to use under open source license terms.\nThis allows Python users to share and collaborate effectively, benefiting from the solutions others have already created to common (and sometimes even rare!) 
problems, as well as potentially contributing their own solutions to the common pool.\nThis guide covers the installation part of the process. For a guide to creating and sharing your own Python projects, refer to the Python packaging user guide.\nNote\nFor corporate and other institutional users, be aware that many organisations have their own policies around using and contributing to open source software. Please take such policies into account when making use of the distribution and installation tools provided with Python.\nKey terms\u00b6\npip\nis the preferred installer program. Starting with Python 3.4, it is included by default with the Python binary installers.A virtual environment is a semi-isolated Python environment that allows packages to be installed for use by a particular application, rather than being installed system wide.\nvenv\nis the standard tool for creating virtual environments, and has been part of Python since Python 3.3. Starting with Python 3.4, it defaults to installingpip\ninto all created virtual environments.virtualenv\nis a third party alternative (and predecessor) tovenv\n. It allows virtual environments to be used on versions of Python prior to 3.4, which either don\u2019t providevenv\nat all, or aren\u2019t able to automatically installpip\ninto created environments.The Python Package Index is a public repository of open source licensed packages made available for use by other Python users.\nthe Python Packaging Authority is the group of developers and documentation authors responsible for the maintenance and evolution of the standard packaging tools and the associated metadata and file format standards. They maintain a variety of tools, documentation, and issue trackers on GitHub.\ndistutils\nis the original build and distribution system first added to the Python standard library in 1998. 
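As a sketch of the venv key term above, the same environment creation that `python -m venv` performs can be driven from Python through the stdlib `venv` module (the directory name here is arbitrary, and `with_pip=False` skips the pip bootstrap that the command-line default performs):

```python
import tempfile
import venv
from pathlib import Path

# Create a throwaway virtual environment in a temporary directory.
target = Path(tempfile.mkdtemp()) / "demo-env"
venv.create(target, with_pip=False)

# Every virtual environment carries a pyvenv.cfg marker file.
print((target / "pyvenv.cfg").exists())  # True
```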
While direct use ofdistutils\nis being phased out, it still laid the foundation for the current packaging and distribution infrastructure, and it not only remains part of the standard library, but its name lives on in other ways (such as the name of the mailing list used to coordinate Python packaging standards development).\nChanged in version 3.5: The use of venv\nis now recommended for creating virtual environments.\nBasic usage\u00b6\nThe standard packaging tools are all designed to be used from the command line.\nThe following command will install the latest version of a module and its dependencies from the Python Package Index:\npython -m pip install SomePackage\nNote\nFor POSIX users (including macOS and Linux users), the examples in this guide assume the use of a virtual environment.\nFor Windows users, the examples in this guide assume that the option to adjust the system PATH environment variable was selected when installing Python.\nIt\u2019s also possible to specify an exact or minimum version directly on the\ncommand line. When using comparator operators such as >\n, <\nor some other\nspecial character which get interpreted by shell, the package name and the\nversion should be enclosed within double quotes:\npython -m pip install SomePackage==1.0.4 # specific version\npython -m pip install \"SomePackage>=1.0.4\" # minimum version\nNormally, if a suitable module is already installed, attempting to install it again will have no effect. 
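Whether a suitable module is already installed can also be checked from Python itself with the stdlib `importlib.metadata`; a small sketch (the distribution name below is a deliberately non-existent example):

```python
from importlib import metadata

def installed_version(dist_name):
    """Return the installed version string of a distribution, or None."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

print(installed_version("no-such-distribution-xyz"))  # None
```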
Upgrading existing modules must be requested explicitly:\npython -m pip install --upgrade SomePackage\nMore information and resources regarding pip\nand its capabilities can be\nfound in the Python Packaging User Guide.\nCreation of virtual environments is done through the venv\nmodule.\nInstalling packages into an active virtual environment uses the commands shown\nabove.\nHow do I \u2026?\u00b6\nThese are quick answers or links for some common tasks.\n\u2026 install pip\nin versions of Python prior to Python 3.4?\u00b6\nPython only started bundling pip\nwith Python 3.4. For earlier versions,\npip\nneeds to be \u201cbootstrapped\u201d as described in the Python Packaging\nUser Guide.\n\u2026 install packages just for the current user?\u00b6\nPassing the --user\noption to python -m pip install\nwill install a\npackage just for the current user, rather than for all users of the system.\n\u2026 install scientific Python packages?\u00b6\nA number of scientific Python packages have complex binary dependencies, and\naren\u2019t currently easy to install using pip\ndirectly. 
At this point in\ntime, it will often be easier for users to install these packages by\nother means\nrather than attempting to install them with pip\n.\n\u2026 work with multiple versions of Python installed in parallel?\u00b6\nOn Linux, macOS, and other POSIX systems, use the versioned Python commands\nin combination with the -m\nswitch to run the appropriate copy of\npip\n:\npython2 -m pip install SomePackage # default Python 2\npython2.7 -m pip install SomePackage # specifically Python 2.7\npython3 -m pip install SomePackage # default Python 3\npython3.4 -m pip install SomePackage # specifically Python 3.4\nAppropriately versioned pip\ncommands may also be available.\nOn Windows, use the py\nPython launcher in combination with the -m\nswitch:\npy -2 -m pip install SomePackage # default Python 2\npy -2.7 -m pip install SomePackage # specifically Python 2.7\npy -3 -m pip install SomePackage # default Python 3\npy -3.4 -m pip install SomePackage # specifically Python 3.4\nCommon installation issues\u00b6\nInstalling into the system Python on Linux\u00b6\nOn Linux systems, a Python installation will typically be included as part\nof the distribution. Installing into this Python installation requires\nroot access to the system, and may interfere with the operation of the\nsystem package manager and other components of the system if a component\nis unexpectedly upgraded using pip\n.\nOn such systems, it is often better to use a virtual environment or a\nper-user installation when installing packages with pip\n.\nPip not installed\u00b6\nIt is possible that pip\ndoes not get installed by default. 
One potential fix is:\npython -m ensurepip --default-pip\nThere are also additional resources for installing pip.\nInstalling binary extensions\u00b6\nPython has typically relied heavily on source based distribution, with end users being expected to compile extension modules from source as part of the installation process.\nWith the introduction of support for the binary wheel\nformat, and the\nability to publish wheels for at least Windows and macOS through the\nPython Package Index, this problem is expected to diminish over time,\nas users are more regularly able to install pre-built extensions rather\nthan needing to build them themselves.\nSome of the solutions for installing scientific software\nthat are not yet available as pre-built wheel\nfiles may also help with\nobtaining other binary extensions without needing to build them locally.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1770} +{"url": "https://docs.python.org/3/c-api/bool.html", "title": "Boolean Objects", "content": "Boolean Objects\u00b6\nBooleans in Python are implemented as a subclass of integers. There are only\ntwo booleans, Py_False\nand Py_True\n. As such, the normal\ncreation and deletion functions don\u2019t apply to booleans. The following macros\nare available, however.\n-\nPyTypeObject PyBool_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python boolean type; it is the same object asbool\nin the Python layer.\n-\nint PyBool_Check(PyObject *o)\u00b6\nReturn true if o is of type\nPyBool_Type\n. This function always succeeds.\n-\nPyObject *PyBool_FromLong(long v)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn\nPy_True\norPy_False\n, depending on the truth value of v.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 171} +{"url": "https://docs.python.org/3/tutorial/errors.html", "title": "Errors and Exceptions", "content": "8. 
Errors and Exceptions\u00b6\nUntil now error messages haven\u2019t been more than mentioned, but if you have tried out the examples you have probably seen some. There are (at least) two distinguishable kinds of errors: syntax errors and exceptions.\n8.1. Syntax Errors\u00b6\nSyntax errors, also known as parsing errors, are perhaps the most common kind of complaint you get while you are still learning Python:\n>>> while True print('Hello world')\nFile \"\", line 1\nwhile True print('Hello world')\n^^^^^\nSyntaxError: invalid syntax\nThe parser repeats the offending line and displays little arrows pointing\nat the place where the error was detected. Note that this is not always the\nplace that needs to be fixed. In the example, the error is detected at the\nfunction print()\n, since a colon (':'\n) is missing just before it.\nThe file name (\nin our example) and line number are printed so you\nknow where to look in case the input came from a file.\n8.2. Exceptions\u00b6\nEven if a statement or expression is syntactically correct, it may cause an error when an attempt is made to execute it. Errors detected during execution are called exceptions and are not unconditionally fatal: you will soon learn how to handle them in Python programs. Most exceptions are not handled by programs, however, and result in error messages as shown here:\n>>> 10 * (1/0)\nTraceback (most recent call last):\nFile \"\", line 1, in \n10 * (1/0)\n~^~\nZeroDivisionError: division by zero\n>>> 4 + spam*3\nTraceback (most recent call last):\nFile \"\", line 1, in \n4 + spam*3\n^^^^\nNameError: name 'spam' is not defined\n>>> '2' + 2\nTraceback (most recent call last):\nFile \"\", line 1, in \n'2' + 2\n~~~~^~~\nTypeError: can only concatenate str (not \"int\") to str\nThe last line of the error message indicates what happened. 
Exceptions come in\ndifferent types, and the type is printed as part of the message: the types in\nthe example are ZeroDivisionError\n, NameError\nand TypeError\n.\nThe string printed as the exception type is the name of the built-in exception\nthat occurred. This is true for all built-in exceptions, but need not be true\nfor user-defined exceptions (although it is a useful convention). Standard\nexception names are built-in identifiers (not reserved keywords).\nThe rest of the line provides detail based on the type of exception and what caused it.\nThe preceding part of the error message shows the context where the exception occurred, in the form of a stack traceback. In general it contains a stack traceback listing source lines; however, it will not display lines read from standard input.\nBuilt-in Exceptions lists the built-in exceptions and their meanings.\n8.3. Handling Exceptions\u00b6\nIt is possible to write programs that handle selected exceptions. Look at the\nfollowing example, which asks the user for input until a valid integer has been\nentered, but allows the user to interrupt the program (using Control-C or\nwhatever the operating system supports); note that a user-generated interruption\nis signalled by raising the KeyboardInterrupt\nexception.\n>>> while True:\n... try:\n... x = int(input(\"Please enter a number: \"))\n... break\n... except ValueError:\n... print(\"Oops! That was no valid number. Try again...\")\n...\nThe try\nstatement works as follows.\nFirst, the try clause (the statement(s) between the\ntry\nandexcept\nkeywords) is executed.If no exception occurs, the except clause is skipped and execution of the\ntry\nstatement is finished.If an exception occurs during execution of the\ntry\nclause, the rest of the clause is skipped. 
Then, if its type matches the exception named after theexcept\nkeyword, the except clause is executed, and then execution continues after the try/except block.If an exception occurs which does not match the exception named in the except clause, it is passed on to outer\ntry\nstatements; if no handler is found, it is an unhandled exception and execution stops with an error message.\nA try\nstatement may have more than one except clause, to specify\nhandlers for different exceptions. At most one handler will be executed.\nHandlers only handle exceptions that occur in the corresponding try clause,\nnot in other handlers of the same try\nstatement. An except clause\nmay name multiple exceptions as a parenthesized tuple, for example:\n... except (RuntimeError, TypeError, NameError):\n... pass\nA class in an except\nclause matches exceptions which are instances of the\nclass itself or one of its derived classes (but not the other way around \u2014 an\nexcept clause listing a derived class does not match instances of its base classes).\nFor example, the following code will print B, C, D in that order:\nclass B(Exception):\npass\nclass C(B):\npass\nclass D(C):\npass\nfor cls in [B, C, D]:\ntry:\nraise cls()\nexcept D:\nprint(\"D\")\nexcept C:\nprint(\"C\")\nexcept B:\nprint(\"B\")\nNote that if the except clauses were reversed (with except B\nfirst), it\nwould have printed B, B, B \u2014 the first matching except clause is triggered.\nWhen an exception occurs, it may have associated values, also known as the exception\u2019s arguments. The presence and types of the arguments depend on the exception type.\nThe except clause may specify a variable after the exception name. The\nvariable is bound to the exception instance which typically has an args\nattribute that stores the arguments. For convenience, builtin exception\ntypes define __str__()\nto print all the arguments without explicitly\naccessing .args\n.\n>>> try:\n... raise Exception('spam', 'eggs')\n... 
except Exception as inst:\n... print(type(inst)) # the exception type\n... print(inst.args) # arguments stored in .args\n... print(inst) # __str__ allows args to be printed directly,\n... # but may be overridden in exception subclasses\n... x, y = inst.args # unpack args\n... print('x =', x)\n... print('y =', y)\n...\n\n('spam', 'eggs')\n('spam', 'eggs')\nx = spam\ny = eggs\nThe exception\u2019s __str__()\noutput is printed as the last part (\u2018detail\u2019)\nof the message for unhandled exceptions.\nBaseException\nis the common base class of all exceptions. One of its\nsubclasses, Exception\n, is the base class of all the non-fatal exceptions.\nExceptions which are not subclasses of Exception\nare not typically\nhandled, because they are used to indicate that the program should terminate.\nThey include SystemExit\nwhich is raised by sys.exit()\nand\nKeyboardInterrupt\nwhich is raised when a user wishes to interrupt\nthe program.\nException\ncan be used as a wildcard that catches (almost) everything.\nHowever, it is good practice to be as specific as possible with the types\nof exceptions that we intend to handle, and to allow any unexpected\nexceptions to propagate on.\nThe most common pattern for handling Exception\nis to print or log\nthe exception and then re-raise it (allowing a caller to handle the\nexception as well):\nimport sys\ntry:\nf = open('myfile.txt')\ns = f.readline()\ni = int(s.strip())\nexcept OSError as err:\nprint(\"OS error:\", err)\nexcept ValueError:\nprint(\"Could not convert data to an integer.\")\nexcept Exception as err:\nprint(f\"Unexpected {err=}, {type(err)=}\")\nraise\nThe try\n\u2026 except\nstatement has an optional else\nclause, which, when present, must follow all except clauses. 
It is useful\nfor code that must be executed if the try clause does not raise an exception.\nFor example:\nfor arg in sys.argv[1:]:\ntry:\nf = open(arg, 'r')\nexcept OSError:\nprint('cannot open', arg)\nelse:\nprint(arg, 'has', len(f.readlines()), 'lines')\nf.close()\nThe use of the else\nclause is better than adding additional code to\nthe try\nclause because it avoids accidentally catching an exception\nthat wasn\u2019t raised by the code being protected by the try\n\u2026\nexcept\nstatement.\nException handlers do not handle only exceptions that occur immediately in the try clause, but also those that occur inside functions that are called (even indirectly) in the try clause. For example:\n>>> def this_fails():\n... x = 1/0\n...\n>>> try:\n... this_fails()\n... except ZeroDivisionError as err:\n... print('Handling run-time error:', err)\n...\nHandling run-time error: division by zero\n8.4. Raising Exceptions\u00b6\nThe raise\nstatement allows the programmer to force a specified\nexception to occur. For example:\n>>> raise NameError('HiThere')\nTraceback (most recent call last):\nFile \"\", line 1, in \nraise NameError('HiThere')\nNameError: HiThere\nThe sole argument to raise\nindicates the exception to be raised.\nThis must be either an exception instance or an exception class (a class that\nderives from BaseException\n, such as Exception\nor one of its\nsubclasses). If an exception class is passed, it will be implicitly\ninstantiated by calling its constructor with no arguments:\nraise ValueError # shorthand for 'raise ValueError()'\nIf you need to determine whether an exception was raised but don\u2019t intend to\nhandle it, a simpler form of the raise\nstatement allows you to\nre-raise the exception:\n>>> try:\n... raise NameError('HiThere')\n... except NameError:\n... print('An exception flew by!')\n... raise\n...\nAn exception flew by!\nTraceback (most recent call last):\nFile \"\", line 2, in \nraise NameError('HiThere')\nNameError: HiThere\n8.5. 
Exception Chaining\u00b6\nIf an unhandled exception occurs inside an except\nsection, it will\nhave the exception being handled attached to it and included in the error\nmessage:\n>>> try:\n... open(\"database.sqlite\")\n... except OSError:\n... raise RuntimeError(\"unable to handle error\")\n...\nTraceback (most recent call last):\nFile \"\", line 2, in \nopen(\"database.sqlite\")\n~~~~^^^^^^^^^^^^^^^^^^^\nFileNotFoundError: [Errno 2] No such file or directory: 'database.sqlite'\nDuring handling of the above exception, another exception occurred:\nTraceback (most recent call last):\nFile \"\", line 4, in \nraise RuntimeError(\"unable to handle error\")\nRuntimeError: unable to handle error\nTo indicate that an exception is a direct consequence of another, the\nraise\nstatement allows an optional from\nclause:\n# exc must be exception instance or None.\nraise RuntimeError from exc\nThis can be useful when you are transforming exceptions. For example:\n>>> def func():\n... raise ConnectionError\n...\n>>> try:\n... func()\n... except ConnectionError as exc:\n... raise RuntimeError('Failed to open database') from exc\n...\nTraceback (most recent call last):\nFile \"\", line 2, in \nfunc()\n~~~~^^\nFile \"\", line 2, in func\nConnectionError\nThe above exception was the direct cause of the following exception:\nTraceback (most recent call last):\nFile \"\", line 4, in \nraise RuntimeError('Failed to open database') from exc\nRuntimeError: Failed to open database\nIt also allows disabling automatic exception chaining using the from None\nidiom:\n>>> try:\n... open('database.sqlite')\n... except OSError:\n... raise RuntimeError from None\n...\nTraceback (most recent call last):\nFile \"\", line 4, in \nraise RuntimeError from None\nRuntimeError\nFor more information about chaining mechanics, see Built-in Exceptions.\n8.6. User-defined Exceptions\u00b6\nPrograms may name their own exceptions by creating a new exception class (see\nClasses for more about Python classes). 
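The `from` clause demonstrated above pairs naturally with a program's own exception classes; in this sketch, `DatabaseError` and `load()` are hypothetical illustrations, not stdlib names:

```python
class DatabaseError(Exception):
    """Hypothetical application-level error, kept deliberately simple."""
    def __init__(self, message, path):
        super().__init__(message)
        self.path = path  # extra attribute for handlers to inspect

def load(path):
    try:
        open(path)
    except OSError as exc:
        # Explicit chaining: the original OSError becomes __cause__.
        raise DatabaseError("unable to open database", path) from exc

try:
    load("no-such-file.sqlite")
except DatabaseError as err:
    caught = err
    print(err.path)                      # no-such-file.sqlite
    print(type(err.__cause__).__name__)  # FileNotFoundError
```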
Exceptions should typically\nbe derived from the Exception\nclass, either directly or indirectly.\nException classes can be defined which do anything any other class can do, but are usually kept simple, often only offering a number of attributes that allow information about the error to be extracted by handlers for the exception.\nMost exceptions are defined with names that end in \u201cError\u201d, similar to the naming of the standard exceptions.\nMany standard modules define their own exceptions to report errors that may occur in functions they define.\n8.7. Defining Clean-up Actions\u00b6\nThe try\nstatement has another optional clause which is intended to\ndefine clean-up actions that must be executed under all circumstances. For\nexample:\n>>> try:\n... raise KeyboardInterrupt\n... finally:\n... print('Goodbye, world!')\n...\nGoodbye, world!\nTraceback (most recent call last):\nFile \"\", line 2, in \nraise KeyboardInterrupt\nKeyboardInterrupt\nIf a finally\nclause is present, the finally\nclause will execute as the last task before the try\nstatement completes. The finally\nclause runs whether or\nnot the try\nstatement produces an exception. The following\npoints discuss more complex cases when an exception occurs:\nIf an exception occurs during execution of the\ntry\nclause, the exception may be handled by anexcept\nclause. If the exception is not handled by anexcept\nclause, the exception is re-raised after thefinally\nclause has been executed.An exception could occur during execution of an\nexcept\norelse\nclause. Again, the exception is re-raised after thefinally\nclause has been executed.If the\nfinally\nclause executes abreak\n,continue\norreturn\nstatement, exceptions are not re-raised. This can be confusing and is therefore discouraged. 
From version 3.14 the compiler emits aSyntaxWarning\nfor it (see PEP 765).If the\ntry\nstatement reaches abreak\n,continue\norreturn\nstatement, thefinally\nclause will execute just prior to thebreak\n,continue\norreturn\nstatement\u2019s execution.If a\nfinally\nclause includes areturn\nstatement, the returned value will be the one from thefinally\nclause\u2019sreturn\nstatement, not the value from thetry\nclause\u2019sreturn\nstatement. This can be confusing and is therefore discouraged. From version 3.14 the compiler emits aSyntaxWarning\nfor it (see PEP 765).\nFor example:\n>>> def bool_return():\n... try:\n... return True\n... finally:\n... return False\n...\n>>> bool_return()\nFalse\nA more complicated example:\n>>> def divide(x, y):\n... try:\n... result = x / y\n... except ZeroDivisionError:\n... print(\"division by zero!\")\n... else:\n... print(\"result is\", result)\n... finally:\n... print(\"executing finally clause\")\n...\n>>> divide(2, 1)\nresult is 2.0\nexecuting finally clause\n>>> divide(2, 0)\ndivision by zero!\nexecuting finally clause\n>>> divide(\"2\", \"1\")\nexecuting finally clause\nTraceback (most recent call last):\nFile \"\", line 1, in \ndivide(\"2\", \"1\")\n~~~~~~^^^^^^^^^^\nFile \"\", line 3, in divide\nresult = x / y\n~~^~~\nTypeError: unsupported operand type(s) for /: 'str' and 'str'\nAs you can see, the finally\nclause is executed in any event. The\nTypeError\nraised by dividing two strings is not handled by the\nexcept\nclause and therefore re-raised after the finally\nclause has been executed.\nIn real world applications, the finally\nclause is useful for\nreleasing external resources (such as files or network connections), regardless\nof whether the use of the resource was successful.\n8.8. Predefined Clean-up Actions\u00b6\nSome objects define standard clean-up actions to be undertaken when the object is no longer needed, regardless of whether or not the operation using the object succeeded or failed. 
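As a minimal sketch of the resource-release pattern just described, a `try` ... `finally` block guarantees the close happens whether or not the read raises (the temporary file exists only for demonstration):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

f = open(path)
try:
    data = f.read()          # use the resource
finally:
    f.close()                # released even if read() had raised

print(len(data), f.closed)  # 0 True
os.remove(path)
```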
Look at the following example, which tries to open a file and print its contents to the screen.\nfor line in open(\"myfile.txt\"):\nprint(line, end=\"\")\nThe problem with this code is that it leaves the file open for an indeterminate\namount of time after this part of the code has finished executing.\nThis is not an issue in simple scripts, but can be a problem for larger\napplications. The with\nstatement allows objects like files to be\nused in a way that ensures they are always cleaned up promptly and correctly.\nwith open(\"myfile.txt\") as f:\nfor line in f:\nprint(line, end=\"\")\nAfter the statement is executed, the file f is always closed, even if a problem was encountered while processing the lines. Objects which, like files, provide predefined clean-up actions will indicate this in their documentation.\n8.10. Enriching Exceptions with Notes\u00b6\nWhen an exception is created in order to be raised, it is usually initialized\nwith information that describes the error that has occurred. There are cases\nwhere it is useful to add information after the exception was caught. For this\npurpose, exceptions have a method add_note(note)\nthat accepts a string and\nadds it to the exception\u2019s notes list. The standard traceback rendering\nincludes all notes, in the order they were added, after the exception.\n>>> try:\n... raise TypeError('bad type')\n... except Exception as e:\n... e.add_note('Add some information')\n... e.add_note('Add some more information')\n... raise\n...\nTraceback (most recent call last):\nFile \"\", line 2, in \nraise TypeError('bad type')\nTypeError: bad type\nAdd some information\nAdd some more information\n>>>\nFor example, when collecting exceptions into an exception group, we may want to add context information for the individual errors. In the following each exception in the group has a note indicating when this error has occurred.\n>>> def f():\n... 
raise OSError('operation failed')\n...\n>>> excs = []\n>>> for i in range(3):\n... try:\n... f()\n... except Exception as e:\n... e.add_note(f'Happened in Iteration {i+1}')\n... excs.append(e)\n...\n>>> raise ExceptionGroup('We have some problems', excs)\n+ Exception Group Traceback (most recent call last):\n| File \"\", line 1, in \n| raise ExceptionGroup('We have some problems', excs)\n| ExceptionGroup: We have some problems (3 sub-exceptions)\n+-+---------------- 1 ----------------\n| Traceback (most recent call last):\n| File \"\", line 3, in \n| f()\n| ~^^\n| File \"\", line 2, in f\n| raise OSError('operation failed')\n| OSError: operation failed\n| Happened in Iteration 1\n+---------------- 2 ----------------\n| Traceback (most recent call last):\n| File \"\", line 3, in \n| f()\n| ~^^\n| File \"\", line 2, in f\n| raise OSError('operation failed')\n| OSError: operation failed\n| Happened in Iteration 2\n+---------------- 3 ----------------\n| Traceback (most recent call last):\n| File \"\", line 3, in \n| f()\n| ~^^\n| File \"\", line 2, in f\n| raise OSError('operation failed')\n| OSError: operation failed\n| Happened in Iteration 3\n+------------------------------------\n>>>", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 4806}
+{"url": "https://docs.python.org/3/howto/free-threading-python.html", "title": "Python support for free threading", "content": "Python support for free threading\u00b6\nStarting with the 3.13 release, CPython has support for a build of Python called free threading where the global interpreter lock (GIL) is disabled. Free-threaded execution allows for full utilization of the available processing power by running threads in parallel on available CPU cores. While not all software will benefit from this automatically, programs designed with threading in mind will run faster on multi-core hardware.\nSome third-party packages, in particular ones with an extension module, may not be ready for use in a free-threaded build, and will re-enable the GIL.\nThis document describes the implications of free threading for Python code.
See C API Extension Support for Free Threading for information on how to write C extensions that support the free-threaded build.\nSee also\nPEP 703 \u2013 Making the Global Interpreter Lock Optional in CPython for an overall description of free-threaded Python.\nInstallation\u00b6\nStarting with Python 3.13, the official macOS and Windows installers optionally support installing free-threaded Python binaries. The installers are available at https://www.python.org/downloads/.\nFor information on other platforms, see the Installing a Free-Threaded Python, a community-maintained installation guide for installing free-threaded Python.\nWhen building CPython from source, the --disable-gil\nconfigure option\nshould be used to build a free-threaded Python interpreter.\nIdentifying free-threaded Python\u00b6\nTo check if the current interpreter supports free-threading, python -VV\nand sys.version\ncontain \u201cfree-threading build\u201d.\nThe new sys._is_gil_enabled()\nfunction can be used to check whether\nthe GIL is actually disabled in the running process.\nThe sysconfig.get_config_var(\"Py_GIL_DISABLED\")\nconfiguration variable can\nbe used to determine whether the build supports free threading. If the variable\nis set to 1\n, then the build supports free threading. This is the recommended\nmechanism for decisions related to the build configuration.\nThe global interpreter lock in free-threaded Python\u00b6\nFree-threaded builds of CPython support optionally running with the GIL enabled\nat runtime using the environment variable PYTHON_GIL\nor\nthe command-line option -X gil\n.\nThe GIL may also automatically be enabled when importing a C-API extension module that is not explicitly marked as supporting free threading. 
A warning will be printed in this case.\nIn addition to individual package documentation, the following websites track the status of popular packages\u2019 support for free threading:\nThread safety\u00b6\nThe free-threaded build of CPython aims to provide similar thread-safety\nbehavior at the Python level to the default GIL-enabled build. Built-in\ntypes like dict\n, list\n, and set\nuse internal locks\nto protect against concurrent modifications in ways that behave similarly to\nthe GIL. However, Python has not historically guaranteed specific behavior for\nconcurrent modifications to these built-in types, so this should be treated\nas a description of the current implementation, not a guarantee of current or\nfuture behavior.\nNote\nIt\u2019s recommended to use threading.Lock\nor other synchronization\nprimitives instead of relying on the internal locks of built-in types, when\npossible.\nKnown limitations\u00b6\nThis section describes known limitations of the free-threaded CPython build.\nImmortalization\u00b6\nIn the free-threaded build, some objects are immortal. Immortal objects are not deallocated and have reference counts that are never modified. This is done to avoid reference count contention that would prevent efficient multi-threaded scaling.\nAs of the 3.14 release, immortalization is limited to:\nCode constants: numeric literals, string literals, and tuple literals composed of other constants.\nStrings interned by\nsys.intern()\n.\nFrame objects\u00b6\nIt is not safe to access frame.f_locals\nfrom a frame\nobject if that frame is currently executing in another thread, and doing so may\ncrash the interpreter.\nIterators\u00b6\nIt is generally not thread-safe to access the same iterator object from multiple threads concurrently, and threads may see duplicate or missing elements.\nSingle-threaded performance\u00b6\nThe free-threaded build has additional overhead when executing Python code compared to the default GIL-enabled build. 
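The thread-safety note above recommends explicit synchronization primitives over the internal locks of built-in types; a minimal sketch (names are illustrative):

```python
import threading

counter = 0
counter_lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        # Guard the read-modify-write with an explicit lock rather than
        # relying on the internal locks of built-in types.
        with counter_lock:
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The same code without the lock could lose updates, since `counter += 1` is a read-modify-write that is not atomic on any build.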
The amount of overhead depends on the workload and hardware. On the pyperformance benchmark suite, the average overhead ranges from about 1% on macOS aarch64 to 8% on x86-64 Linux systems.\nBehavioral changes\u00b6\nThis section describes CPython behavioural changes with the free-threaded build.\nContext variables\u00b6\nIn the free-threaded build, the flag thread_inherit_context\nis set to true by default, which causes threads created with\nthreading.Thread\nto start with a copy of the\nContext()\nof the caller of\nstart()\n. In the default GIL-enabled build, the flag\ndefaults to false so threads start with an\nempty Context()\n.\nWarning filters\u00b6\nIn the free-threaded build, the flag context_aware_warnings\nis set to true by default. In the default GIL-enabled build, the flag defaults\nto false. If the flag is true then the warnings.catch_warnings\ncontext manager uses a context variable for warning filters. If the flag is\nfalse then catch_warnings\nmodifies the global filters list,\nwhich is not thread-safe. See the warnings\nmodule for more details.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1331} +{"url": "https://docs.python.org/3/library/email.header.html", "title": "email.header: Internationalized headers", "content": "email.header: Internationalized headers\u00b6\nSource code: Lib/email/header.py\nThis module is part of the legacy (Compat32\n) email API. In the current API\nencoding and decoding of headers is handled transparently by the\ndictionary-like API of the EmailMessage\nclass. In\naddition to uses in legacy code, this module can be useful in applications that\nneed to completely control the character sets used when encoding headers.\nThe remaining text in this section is the original documentation of the module.\nRFC 2822 is the base standard that describes the format of email messages. 
It derives from the older RFC 822 standard which came into widespread use at a time when most email was composed of ASCII characters only. RFC 2822 is a specification written assuming email contains only 7-bit ASCII characters.\nOf course, as email has been deployed worldwide, it has become\ninternationalized, such that language specific character sets can now be used in\nemail messages. The base standard still requires email messages to be\ntransferred using only 7-bit ASCII characters, so a slew of RFCs have been\nwritten describing how to encode email containing non-ASCII characters into\nRFC 2822-compliant format. These RFCs include RFC 2045, RFC 2046,\nRFC 2047, and RFC 2231. The email\npackage supports these standards\nin its email.header\nand email.charset\nmodules.\nIf you want to include non-ASCII characters in your email headers, say in the\nSubject or To fields, you should use the\nHeader\nclass and assign the field in the Message\nobject to an instance of Header\ninstead of using a string for the header\nvalue. Import the Header\nclass from the email.header\nmodule.\nFor example:\n>>> from email.message import Message\n>>> from email.header import Header\n>>> msg = Message()\n>>> h = Header('p\\xf6stal', 'iso-8859-1')\n>>> msg['Subject'] = h\n>>> msg.as_string()\n'Subject: =?iso-8859-1?q?p=F6stal?=\\n\\n'\nNotice here how we wanted the Subject field to contain a non-ASCII\ncharacter? We did this by creating a Header\ninstance and passing in\nthe character set that the byte string was encoded in. When the subsequent\nMessage\ninstance was flattened, the Subject\nfield was properly RFC 2047 encoded. 
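The encoding step described above can also be exercised directly on a Header instance; a minimal sketch of the round trip through `encode()` and `decode_header()`:

```python
from email.header import Header, decode_header

# Build a header from a byte-oriented value in a non-ASCII charset.
h = Header('p\xf6stal', 'iso-8859-1')

# encode() produces the RFC 2047 encoded-word form seen in the
# flattened message above.
encoded = h.encode()

# decode_header() reverses the encoding, yielding (bytes, charset) pairs.
roundtrip = decode_header(encoded)
```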
MIME-aware mail readers would show this\nheader using the embedded ISO-8859-1 character.\nHere is the Header\nclass description:\n- class email.header.Header(s=None, charset=None, maxlinelen=None, header_name=None, continuation_ws=' ', errors='strict')\u00b6\nCreate a MIME-compliant header that can contain strings in different character sets.\nOptional s is the initial header value. If\nNone\n(the default), the initial header value is not set. You can later append to the header with append()\nmethod calls. s may be an instance of bytes\nor str\n, but see the append()\ndocumentation for semantics.\nOptional charset serves two purposes: it has the same meaning as the charset argument to the\nappend()\nmethod. It also sets the default character set for all subsequent append()\ncalls that omit the charset argument. If charset is not provided in the constructor (the default), the us-ascii\ncharacter set is used both as s\u2019s initial charset and as the default for subsequent append()\ncalls.\nThe maximum line length can be specified explicitly via maxlinelen. For splitting the first line to a shorter value (to account for the field header which isn\u2019t included in s, e.g. Subject) pass in the name of the field in header_name. The default maxlinelen is 78, and the default value for header_name is\nNone\n, meaning it is not taken into account for the first line of a long, split header.\nOptional continuation_ws must be RFC 2822-compliant folding whitespace, and is usually either a space or a hard tab character. This character will be prepended to continuation lines. continuation_ws defaults to a single space character.\nOptional errors is passed straight through to the\nappend()\nmethod.\n- append(s, charset=None, errors='strict')\u00b6\nAppend the string s to the MIME header.\nOptional charset, if given, should be a\nCharset\ninstance (see email.charset\n) or the name of a character set, which will be converted to a Charset\ninstance. 
A value of None\n(the default) means that the charset given in the constructor is used.\ns may be an instance of\nbytes\nor str\n. If it is an instance of bytes\n, then charset is the encoding of that byte string, and a UnicodeError\nwill be raised if the string cannot be decoded with that character set.\nIf s is an instance of\nstr\n, then charset is a hint specifying the character set of the characters in the string.\nIn either case, when producing an RFC 2822-compliant header using RFC 2047 rules, the string will be encoded using the output codec of the charset. If the string cannot be encoded using the output codec, a UnicodeError will be raised.\nOptional errors is passed as the errors argument to the decode call if s is a byte string.\n- encode(splitchars=';, \\t', maxlinelen=None, linesep='\\n')\u00b6\nEncode a message header into an RFC-compliant format, possibly wrapping long lines and encapsulating non-ASCII parts in base64 or quoted-printable encodings.\nOptional splitchars is a string containing characters which should be given extra weight by the splitting algorithm during normal header wrapping. This is in very rough support of RFC 2822's \u2018higher level syntactic breaks\u2019: split points preceded by a splitchar are preferred during line splitting, with the characters preferred in the order in which they appear in the string. Space and tab may be included in the string to indicate whether preference should be given to one over the other as a split point when other split chars do not appear in the line being split. Splitchars does not affect RFC 2047 encoded lines.\nmaxlinelen, if given, overrides the instance\u2019s value for the maximum line length.\nlinesep specifies the characters used to separate the lines of the folded header. 
It defaults to the most useful value for Python application code (\n\\n\n), but \\r\\n\ncan be specified in order to produce headers with RFC-compliant line separators.\nChanged in version 3.2: Added the linesep argument.\nThe\nHeader\nclass also provides a number of methods to support standard operators and built-in functions.\n- __str__()\u00b6\nReturns an approximation of the\nHeader\nas a string, using an unlimited line length. All pieces are converted to unicode using the specified encoding and joined together appropriately. Any pieces with a charset of 'unknown-8bit'\nare decoded as ASCII using the 'replace'\nerror handler.\nChanged in version 3.2: Added handling for the\n'unknown-8bit'\ncharset.\nThe email.header\nmodule also provides the following convenient functions.\n- email.header.decode_header(header)\u00b6\nDecode a message header value without converting the character set. The header value is in header.\nFor historical reasons, this function may return either:\nA list of pairs containing each of the decoded parts of the header,\n(decoded_bytes, charset)\n, where decoded_bytes is always an instance of bytes\n, and charset is either:\nA lower case string containing the name of the character set specified.\nNone\nfor non-encoded parts of the header.\nA list of length 1 containing a pair\n(string, None)\n, where string is always an instance of str\n.\nAn\nemail.errors.HeaderParseError\nmay be raised when certain decoding errors occur (e.g. a base64 decoding exception).\nHere are examples:\n>>> from email.header import decode_header\n>>> decode_header('=?iso-8859-1?q?p=F6stal?=')\n[(b'p\\xf6stal', 'iso-8859-1')]\n>>> decode_header('unencoded_string')\n[('unencoded_string', None)]\n>>> decode_header('bar =?utf-8?B?ZsOzbw==?=')\n[(b'bar ', None), (b'f\\xc3\\xb3o', 'utf-8')]\nNote\nThis function exists for backwards compatibility only. 
For new code, we recommend using\nemail.headerregistry.HeaderRegistry\n.\n- email.header.make_header(decoded_seq, maxlinelen=None, header_name=None, continuation_ws=' ')\u00b6\nCreate a\nHeader\ninstance from a sequence of pairs as returned by decode_header()\n.\ndecode_header()\ntakes a header value string and returns a sequence of pairs of the format (decoded_string, charset)\nwhere charset is the name of the character set.\nThis function takes one of those sequences of pairs and returns a\nHeader\ninstance. Optional maxlinelen, header_name, and continuation_ws are as in the Header\nconstructor.\nNote\nThis function exists for backwards compatibility only, and is not recommended for use in new code.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2074} +{"url": "https://docs.python.org/3/reference/compound_stmts.html", "title": "Compound statements", "content": "8. Compound statements\u00b6\nCompound statements contain (groups of) other statements; they affect or control the execution of those other statements in some way. In general, compound statements span multiple lines, although in simple incarnations a whole compound statement may be contained in one line.\nThe if\n, while\nand for\nstatements implement\ntraditional control flow constructs. try\nspecifies exception\nhandlers and/or cleanup code for a group of statements, while the\nwith\nstatement allows the execution of initialization and\nfinalization code around a block of code. Function and class definitions are\nalso syntactically compound statements.\nA compound statement consists of one or more \u2018clauses.\u2019 A clause consists of a\nheader and a \u2018suite.\u2019 The clause headers of a particular compound statement are\nall at the same indentation level. Each clause header begins with a uniquely\nidentifying keyword and ends with a colon. 
A suite is a group of statements\ncontrolled by a clause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\u2019s colon, or it\ncan be one or more indented statements on subsequent lines. Only the latter\nform of a suite can contain nested compound statements; the following is illegal,\nmostly because it wouldn\u2019t be clear to which if\nclause a following\nelse\nclause would belong:\nif test1: if test2: print(x)\nAlso note that the semicolon binds tighter than the colon in this context, so\nthat in the following example, either all or none of the print()\ncalls are\nexecuted:\nif x < y < z: print(x); print(y); print(z)\nSummarizing:\ncompound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | match_stmt | funcdef | classdef | async_with_stmt | async_for_stmt | async_funcdef\nsuite: stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\nstatement: stmt_list NEWLINE | compound_stmt\nstmt_list: simple_stmt (\";\" simple_stmt)* [\";\"]\nNote that statements always end in a NEWLINE\npossibly followed by a\nDEDENT\n. Also note that optional continuation clauses always begin with a\nkeyword that cannot start a statement, thus there are no ambiguities (the\n\u2018dangling else\n\u2019 problem is solved in Python by requiring nested\nif\nstatements to be indented).\nThe formatting of the grammar rules in the following sections places each clause on a separate line for clarity.\n8.1. The if\nstatement\u00b6\nThe if\nstatement is used for conditional execution:\nif_stmt: \"if\" assignment_expression \":\" suite (\"elif\" assignment_expression \":\" suite)* [\"else\" \":\" suite]\nIt selects exactly one of the suites by evaluating the expressions one by one\nuntil one is found to be true (see section Boolean operations for the definition of\ntrue and false); then that suite is executed (and no other part of the\nif\nstatement is executed or evaluated). 
If all expressions are\nfalse, the suite of the else\nclause, if present, is executed.\n8.2. The while\nstatement\u00b6\nThe while\nstatement is used for repeated execution as long as an\nexpression is true:\nwhile_stmt: \"while\" assignment_expression \":\" suite [\"else\" \":\" suite]\nThis repeatedly tests the expression and, if it is true, executes the first\nsuite; if the expression is false (which may be the first time it is tested) the\nsuite of the else\nclause, if present, is executed and the loop\nterminates.\nA break\nstatement executed in the first suite terminates the loop\nwithout executing the else\nclause\u2019s suite. A continue\nstatement executed in the first suite skips the rest of the suite and goes back\nto testing the expression.\n8.3. The for\nstatement\u00b6\nThe for\nstatement is used to iterate over the elements of a sequence\n(such as a string, tuple or list) or other iterable object:\nfor_stmt: \"for\" target_list \"in\" starred_expression_list \":\" suite [\"else\" \":\" suite]\nThe starred_expression_list\nexpression is evaluated\nonce; it should yield an iterable object. An iterator is\ncreated for that iterable. The first item provided by the iterator is then\nassigned to the target list using the standard rules for assignments\n(see Assignment statements), and the suite is executed. This repeats for each\nitem provided by the iterator. When the iterator is exhausted,\nthe suite in the else\nclause,\nif present, is executed, and the loop terminates.\nA break\nstatement executed in the first suite terminates the loop\nwithout executing the else\nclause\u2019s suite. A continue\nstatement executed in the first suite skips the rest of the suite and continues\nwith the next item, or with the else\nclause if there is no next\nitem.\nThe for-loop makes assignments to the variables in the target list. 
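The break/else interaction described above can be sketched with a small search helper; a minimal example:

```python
def find_index(items, target):
    """Return the index of target in items, or -1 if it is absent."""
    for i, item in enumerate(items):
        if item == target:
            break          # leaving via break skips the else clause
    else:                  # runs only when the iterator was exhausted
        return -1
    return i
```

The `else` suite runs both when the sequence is empty and when every item was checked without a `break`, which is exactly the "not found" case.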
This overwrites all previous assignments to those variables including those made in the suite of the for-loop:\nfor i in range(10):\n    print(i)\n    i = 5              # this will not affect the for-loop\n                       # because i will be overwritten with the next\n                       # index in the range\nNames in the target list are not deleted when the loop is finished, but if the\nsequence is empty, they will not have been assigned to at all by the loop. Hint:\nthe built-in type range()\nrepresents immutable arithmetic sequences of integers.\nFor instance, iterating range(3)\nsuccessively yields 0, 1, and then 2.\nChanged in version 3.11: Starred elements are now allowed in the expression list.\n8.4. The try\nstatement\u00b6\nThe try\nstatement specifies exception handlers and/or cleanup code\nfor a group of statements:\ntry_stmt: try1_stmt | try2_stmt | try3_stmt\ntry1_stmt: \"try\" \":\" suite (\"except\" [expression [\"as\" identifier]] \":\" suite)+ [\"else\" \":\" suite] [\"finally\" \":\" suite]\ntry2_stmt: \"try\" \":\" suite (\"except\" \"*\" expression [\"as\" identifier] \":\" suite)+ [\"else\" \":\" suite] [\"finally\" \":\" suite]\ntry3_stmt: \"try\" \":\" suite \"finally\" \":\" suite\nAdditional information on exceptions can be found in section Exceptions,\nand information on using the raise\nstatement to generate exceptions\nmay be found in section The raise statement.\nChanged in version 3.14: Support for optionally dropping grouping parentheses when using multiple exception types. See PEP 758.\n8.4.1. except\nclause\u00b6\nThe except\nclause(s) specify one or more exception handlers. When no\nexception occurs in the try\nclause, no exception handler is executed.\nWhen an exception occurs in the try\nsuite, a search for an exception\nhandler is started. 
This search inspects the except\nclauses in turn\nuntil one is found that matches the exception.\nAn expression-less except\nclause, if present, must be last;\nit matches any exception.\nFor an except\nclause with an expression, the\nexpression must evaluate to an exception type or a tuple of exception types. Parentheses\ncan be dropped if multiple exception types are provided and the as\nclause is not used.\nThe raised exception matches an except\nclause whose expression evaluates\nto the class or a non-virtual base class of the exception object,\nor to a tuple that contains such a class.\nIf no except\nclause matches the exception,\nthe search for an exception handler\ncontinues in the surrounding code and on the invocation stack. [1]\nIf the evaluation of an expression\nin the header of an except\nclause raises an exception,\nthe original search for a handler is canceled and a search starts for\nthe new exception in the surrounding code and on the call stack (it is treated\nas if the entire try\nstatement raised the exception).\nWhen a matching except\nclause is found,\nthe exception is assigned to the target\nspecified after the as\nkeyword in that except\nclause,\nif present, and the except\nclause\u2019s suite is executed.\nAll except\nclauses must have an executable block.\nWhen the end of this block is reached, execution continues\nnormally after the entire try\nstatement.\n(This means that if two nested handlers exist for the same exception,\nand the exception occurs in the try\nclause of the inner handler,\nthe outer handler will not handle the exception.)\nWhen an exception has been assigned using as target\n, it is cleared at the\nend of the except\nclause. 
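The clearing of the `as` target described above can be observed directly; a small sketch:

```python
saved_message = None
try:
    raise ValueError("boom")
except ValueError as exc:
    # Bind anything you need to another name before the clause ends;
    # exc itself is deleted when the except clause finishes.
    saved_message = str(exc)

# exc no longer exists here, as if 'del exc' ran in a finally block.
try:
    exc
except NameError:
    name_was_cleared = True
```

This is why the exception must be assigned to a different name if it is needed after the `except` clause.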
This is as if\nexcept E as N:\n    foo\nwas translated to\nexcept E as N:\n    try:\n        foo\n    finally:\n        del N\nThis means the exception must be assigned to a different name to be able to\nrefer to it after the except\nclause. Exceptions are cleared because with the\ntraceback attached to them, they form a reference cycle with the stack frame,\nkeeping all locals in that frame alive until the next garbage collection occurs.\nBefore an except\nclause\u2019s suite is executed,\nthe exception is stored in the sys\nmodule, where it can be accessed\nfrom within the body of the except\nclause by calling\nsys.exception()\n. When leaving an exception handler, the exception\nstored in the sys\nmodule is reset to its previous value:\n>>> print(sys.exception())\nNone\n>>> try:\n...     raise TypeError\n... except:\n...     print(repr(sys.exception()))\n...     try:\n...         raise ValueError\n...     except:\n...         print(repr(sys.exception()))\n...     print(repr(sys.exception()))\n...\nTypeError()\nValueError()\nTypeError()\n>>> print(sys.exception())\nNone\n8.4.2. except*\nclause\u00b6\nThe except*\nclause(s) specify one or more handlers for groups of\nexceptions (BaseExceptionGroup\ninstances). A try\nstatement\ncan have either except\nor except*\nclauses, but not both.\nThe exception type for matching is mandatory in the case of except*\n,\nso except*:\nis a syntax error. The type is interpreted as in the case of\nexcept\n, but matching is performed on the exceptions contained in the\ngroup that is being handled. A TypeError\nis raised if a matching\ntype is a subclass of BaseExceptionGroup\n, because that would have\nambiguous semantics.\nWhen an exception group is raised in the try block, each except*\nclause splits (see split()\n) it into the subgroups\nof matching and non-matching exceptions. 
If the matching subgroup is not empty,\nit becomes the handled exception (the value returned from sys.exception()\n)\nand assigned to the target of the except*\nclause (if there is one).\nThen, the body of the except*\nclause executes. If the non-matching\nsubgroup is not empty, it is processed by the next except*\nin the\nsame manner. This continues until all exceptions in the group have been matched,\nor the last except*\nclause has run.\nAfter all except*\nclauses execute, the group of unhandled exceptions\nis merged with any exceptions that were raised or re-raised from within\nexcept*\nclauses. This merged exception group propagates on:\n>>> try:\n...     raise ExceptionGroup(\"eg\",\n...         [ValueError(1), TypeError(2), OSError(3), OSError(4)])\n... except* TypeError as e:\n...     print(f'caught {type(e)} with nested {e.exceptions}')\n... except* OSError as e:\n...     print(f'caught {type(e)} with nested {e.exceptions}')\n...\ncaught <class 'ExceptionGroup'> with nested (TypeError(2),)\ncaught <class 'ExceptionGroup'> with nested (OSError(3), OSError(4))\n+ Exception Group Traceback (most recent call last):\n| File \"<stdin>\", line 2, in <module>\n|     raise ExceptionGroup(\"eg\",\n|         [ValueError(1), TypeError(2), OSError(3), OSError(4)])\n| ExceptionGroup: eg (1 sub-exception)\n+-+---------------- 1 ----------------\n| ValueError: 1\n+------------------------------------\nIf the exception raised from the try\nblock is not an exception group\nand its type matches one of the except*\nclauses, it is caught and\nwrapped by an exception group with an empty message string. This ensures that the\ntype of the target e\nis consistently BaseExceptionGroup\n:\n>>> try:\n...     raise BlockingIOError\n... except* BlockingIOError as e:\n...     print(repr(e))\n...\nExceptionGroup('', (BlockingIOError(),))\nbreak\n, continue\nand return\ncannot appear in an except*\nclause.\n8.4.3. 
else\nclause\u00b6\nThe optional else\nclause is executed if the control flow leaves the\ntry\nsuite, no exception was raised, and no return\n,\ncontinue\n, or break\nstatement was executed. Exceptions in\nthe else\nclause are not handled by the preceding except\nclauses.\n8.4.4. finally\nclause\u00b6\nIf finally\nis present, it specifies a \u2018cleanup\u2019 handler. The\ntry\nclause is executed, including any except\nand else\nclauses.\nIf an exception occurs in any of the clauses and is not handled,\nthe exception is temporarily saved.\nThe finally\nclause is executed. If there is a saved exception\nit is re-raised at the end of the finally\nclause.\nIf the finally\nclause raises another exception, the saved exception\nis set as the context of the new exception.\nIf the finally\nclause executes a return\n, break\nor continue\nstatement, the saved exception is discarded. For example,\nthis function returns 42.\ndef f():\n    try:\n        1/0\n    finally:\n        return 42\nThe exception information is not available to the program during execution of\nthe finally\nclause.\nWhen a return\n, break\nor continue\nstatement is\nexecuted in the try\nsuite of a try\n\u2026finally\nstatement, the finally\nclause is also executed \u2018on the way out.\u2019\nThe return value of a function is determined by the last return\nstatement executed. Since the finally\nclause always executes, a\nreturn\nstatement executed in the finally\nclause will\nalways be the last one executed. The following function returns \u2018finally\u2019.\ndef foo():\n    try:\n        return 'try'\n    finally:\n        return 'finally'\nChanged in version 3.8: Prior to Python 3.8, a continue\nstatement was illegal in the\nfinally\nclause due to a problem with the implementation.\nChanged in version 3.14: The compiler emits a SyntaxWarning\nwhen a return\n,\nbreak\nor continue\nappears in a finally\nblock (see PEP 765).\n8.5. 
The with\nstatement\u00b6\nThe with\nstatement is used to wrap the execution of a block with\nmethods defined by a context manager (see section With Statement Context Managers).\nThis allows common try\n\u2026except\n\u2026finally\nusage patterns to be encapsulated for convenient reuse.\nwith_stmt: \"with\" ( \"(\" with_stmt_contents \",\"? \")\" | with_stmt_contents ) \":\" suite\nwith_stmt_contents: with_item (\",\" with_item)*\nwith_item: expression [\"as\" target]\nThe execution of the with\nstatement with one \u201citem\u201d proceeds as follows:\nThe context expression (the expression given in the\nwith_item\n) is evaluated to obtain a context manager.\nThe context manager\u2019s\n__enter__()\nis loaded for later use.\nThe context manager\u2019s\n__exit__()\nis loaded for later use.\nThe context manager\u2019s\n__enter__()\nmethod is invoked.\nIf a target was included in the\nwith\nstatement, the return value from __enter__()\nis assigned to it.\nNote\nThe\nwith\nstatement guarantees that if the __enter__()\nmethod returns without an error, then __exit__()\nwill always be called. Thus, if an error occurs during the assignment to the target list, it will be treated the same as an error occurring within the suite would be. See step 7 below.\nThe suite is executed.\nThe context manager\u2019s\n__exit__()\nmethod is invoked. If an exception caused the suite to be exited, its type, value, and traceback are passed as arguments to __exit__()\n. Otherwise, three None\narguments are supplied.\nIf the suite was exited due to an exception, and the return value from the\n__exit__()\nmethod was false, the exception is reraised. 
If the return value was true, the exception is suppressed, and execution continues with the statement following the with\nstatement.\nIf the suite was exited for any reason other than an exception, the return value from\n__exit__()\nis ignored, and execution proceeds at the normal location for the kind of exit that was taken.\nThe following code:\nwith EXPRESSION as TARGET:\n    SUITE\nis semantically equivalent to:\nmanager = (EXPRESSION)\nenter = manager.__enter__\nexit = manager.__exit__\nvalue = enter()\nhit_except = False\ntry:\n    TARGET = value\n    SUITE\nexcept:\n    hit_except = True\n    if not exit(*sys.exc_info()):\n        raise\nfinally:\n    if not hit_except:\n        exit(None, None, None)\nexcept that implicit special method lookup is used\nfor __enter__()\nand __exit__()\n.\nWith more than one item, the context managers are processed as if multiple\nwith\nstatements were nested:\nwith A() as a, B() as b:\n    SUITE\nis semantically equivalent to:\nwith A() as a:\n    with B() as b:\n        SUITE\nYou can also write multi-item context managers in multiple lines if the items are surrounded by parentheses. For example:\nwith (\n    A() as a,\n    B() as b,\n):\n    SUITE\nChanged in version 3.1: Support for multiple context expressions.\nChanged in version 3.10: Support for using grouping parentheses to break the statement in multiple lines.\n8.6. The match\nstatement\u00b6\nAdded in version 3.10.\nThe match statement is used for pattern matching. Syntax:\nmatch_stmt: 'match' subject_expr \":\" NEWLINE INDENT case_block+ DEDENT\nsubject_expr: star_named_expression \",\" star_named_expressions? | named_expression\ncase_block: 'case' patterns [guard] \":\" block\nNote\nThis section uses single quotes to denote soft keywords.\nPattern matching takes a pattern as input (following case\n) and a subject\nvalue (following match\n). The pattern (which may contain subpatterns) is\nmatched against the subject value. 
The outcomes are:\nA match success or failure (also termed a pattern success or failure).\nPossible binding of matched values to a name. The prerequisites for this are further discussed below.\nThe match\nand case\nkeywords are soft keywords.\nSee also\n8.6.1. Overview\u00b6\nHere\u2019s an overview of the logical flow of a match statement:\nThe subject expression\nsubject_expr\nis evaluated and a resulting subject value obtained. If the subject expression contains a comma, a tuple is constructed using the standard rules.\nEach pattern in a\ncase_block\nis attempted to match with the subject value. The specific rules for success or failure are described below. The match attempt can also bind some or all of the standalone names within the pattern. The precise pattern binding rules vary per pattern type and are specified below. Name bindings made during a successful pattern match outlive the executed block and can be used after the match statement.\nNote\nDuring failed pattern matches, some subpatterns may succeed. Do not rely on bindings being made for a failed match. Conversely, do not rely on variables remaining unchanged after a failed match. The exact behavior is dependent on implementation and may vary. This is an intentional decision made to allow different implementations to add optimizations.\nIf the pattern succeeds, the corresponding guard (if present) is evaluated. In this case all name bindings are guaranteed to have happened.\nIf the guard evaluates as true or is missing, the\nblock\ninside case_block\nis executed.\nOtherwise, the next\ncase_block\nis attempted as described above.\nIf there are no further case blocks, the match statement is completed.\nNote\nUsers should generally never rely on a pattern being evaluated. Depending on implementation, the interpreter may cache values or use other optimizations which skip repeated evaluations.\nA sample match statement:\n>>> flag = False\n>>> match (100, 200):\n...     case (100, 300):  # Mismatch: 200 != 300\n... 
        print('Case 1')\n...     case (100, 200) if flag:  # Successful match, but guard fails\n...         print('Case 2')\n...     case (100, y):  # Matches and binds y to 200\n...         print(f'Case 3, y: {y}')\n...     case _:  # Pattern not attempted\n...         print('Case 4, I match anything!')\n...\nCase 3, y: 200\nIn this case, if flag\nis a guard. Read more about that in the next section.\n8.6.2. Guards\u00b6\nguard: \"if\" named_expression\nA guard\n(which is part of the case\n) must succeed for code inside\nthe case\nblock to execute. It takes the form: if\nfollowed by an\nexpression.\nThe logical flow of a case\nblock with a guard\nfollows:\nCheck that the pattern in the\ncase\nblock succeeded. If the pattern failed, the guard\nis not evaluated and the next case\nblock is checked.\nIf the pattern succeeded, evaluate the\nguard\n.\nIf the\nguard\ncondition evaluates as true, the case block is selected.\nIf the\nguard\ncondition evaluates as false, the case block is not selected.\nIf the\nguard\nraises an exception during evaluation, the exception bubbles up.\nGuards are allowed to have side effects as they are expressions. Guard evaluation must proceed from the first to the last case block, one at a time, skipping case blocks whose pattern(s) don\u2019t all succeed. (I.e., guard evaluation must happen in order.) Guard evaluation must stop once a case block is selected.\n8.6.3. Irrefutable Case Blocks\u00b6\nAn irrefutable case block is a match-all case block. A match statement may have at most one irrefutable case block, and it must be last.\nA case block is considered irrefutable if it has no guard and its pattern is irrefutable. A pattern is considered irrefutable if we can prove from its syntax alone that it will always succeed. Only the following patterns are irrefutable:\nAS Patterns whose left-hand side is irrefutable\nOR Patterns containing at least one irrefutable pattern\nparenthesized irrefutable patterns\ncapture patterns\nwildcard patterns\n8.6.4. 
Patterns\u00b6\nNote\nThis section uses grammar notations beyond standard EBNF:\nthe notation\nSEP.RULE+\nis shorthand forRULE (SEP RULE)*\nthe notation\n!RULE\nis shorthand for a negative lookahead assertion\nThe top-level syntax for patterns\nis:\npatterns:open_sequence_pattern\n|pattern\npattern:as_pattern\n|or_pattern\nclosed_pattern: |literal_pattern\n|capture_pattern\n|wildcard_pattern\n|value_pattern\n|group_pattern\n|sequence_pattern\n|mapping_pattern\n|class_pattern\nThe descriptions below will include a description \u201cin simple terms\u201d of what a pattern does for illustration purposes (credits to Raymond Hettinger for a document that inspired most of the descriptions). Note that these descriptions are purely for illustration purposes and may not reflect the underlying implementation. Furthermore, they do not cover all valid forms.\n8.6.4.1. OR Patterns\u00b6\nAn OR pattern is two or more patterns separated by vertical\nbars |\n. Syntax:\nor_pattern: \"|\".closed_pattern\n+\nOnly the final subpattern may be irrefutable, and each subpattern must bind the same set of names to avoid ambiguity.\nAn OR pattern matches each of its subpatterns in turn to the subject value, until one succeeds. The OR pattern is then considered successful. Otherwise, if none of the subpatterns succeed, the OR pattern fails.\nIn simple terms, P1 | P2 | ...\nwill try to match P1\n, if it fails it will try to\nmatch P2\n, succeeding immediately if any succeeds, failing otherwise.\n8.6.4.2. AS Patterns\u00b6\nAn AS pattern matches an OR pattern on the left of the as\nkeyword against a subject. Syntax:\nas_pattern:or_pattern\n\"as\"capture_pattern\nIf the OR pattern fails, the AS pattern fails. Otherwise, the AS pattern binds\nthe subject to the name on the right of the as keyword and succeeds.\ncapture_pattern\ncannot be a _\n.\nIn simple terms P as NAME\nwill match with P\n, and on success it will\nset NAME = \n.\n8.6.4.3. 
Literal Patterns\u00b6\nA literal pattern corresponds to most literals in Python. Syntax:\nliteral_pattern: signed_number\n| signed_number \"+\" NUMBER\n| signed_number \"-\" NUMBER\n| strings\n| \"None\" | \"True\" | \"False\"\nsigned_number: [\"-\"] NUMBER\nThe rule strings and the token NUMBER are defined in the standard Python grammar. Triple-quoted strings are supported. Raw strings and byte strings are supported. f-strings and t-strings are not supported.\nThe forms signed_number '+' NUMBER and signed_number '-' NUMBER are for expressing complex numbers; they require a real number on the left and an imaginary number on the right. E.g. 3 + 4j.\nIn simple terms, LITERAL will succeed only if <subject> == LITERAL. For the singletons None, True and False, the is operator is used.\n8.6.4.4. Capture Patterns\u00b6\nA capture pattern binds the subject value to a name. Syntax:\ncapture_pattern: !'_' NAME\nA single underscore _ is not a capture pattern (this is what !'_' expresses). It is instead treated as a wildcard_pattern.\nIn a given pattern, a given name can only be bound once. E.g. case x, x: ... is invalid while case [x] | x: ... is allowed.\nCapture patterns always succeed. The binding follows scoping rules established by the assignment expression operator in PEP 572; the name becomes a local variable in the closest containing function scope unless there\u2019s an applicable global or nonlocal statement.\nIn simple terms NAME will always succeed and it will set NAME = <subject>.\n8.6.4.5. Wildcard Patterns\u00b6\nA wildcard pattern always succeeds (matches anything) and binds no name. Syntax:\nwildcard_pattern: '_'\n_ is a soft keyword within any pattern, but only within patterns. It is an identifier, as usual, even within match subject expressions, guards, and case blocks.\nIn simple terms, _ will always succeed.\n8.6.4.6. Value Patterns\u00b6\nA value pattern represents a named value in Python. 
Syntax:\nvalue_pattern:attr\nattr:name_or_attr\n\".\" NAME name_or_attr:attr\n| NAME\nThe dotted name in the pattern is looked up using standard Python\nname resolution rules. The pattern succeeds if the\nvalue found compares equal to the subject value (using the ==\nequality\noperator).\nIn simple terms NAME1.NAME2\nwill succeed only if == NAME1.NAME2\nNote\nIf the same value occurs multiple times in the same match statement, the interpreter may cache the first value found and reuse it rather than repeat the same lookup. This cache is strictly tied to a given execution of a given match statement.\n8.6.4.7. Group Patterns\u00b6\nA group pattern allows users to add parentheses around patterns to emphasize the intended grouping. Otherwise, it has no additional syntax. Syntax:\ngroup_pattern: \"(\" pattern\n\")\"\nIn simple terms (P)\nhas the same effect as P\n.\n8.6.4.8. Sequence Patterns\u00b6\nA sequence pattern contains several subpatterns to be matched against sequence elements. The syntax is similar to the unpacking of a list or tuple.\nsequence_pattern: \"[\" [maybe_sequence_pattern\n] \"]\" | \"(\" [open_sequence_pattern\n] \")\" open_sequence_pattern:maybe_star_pattern\n\",\" [maybe_sequence_pattern\n] maybe_sequence_pattern: \",\".maybe_star_pattern\n+ \",\"? maybe_star_pattern:star_pattern\n|pattern\nstar_pattern: \"*\" (capture_pattern\n|wildcard_pattern\n)\nThere is no difference if parentheses or square brackets\nare used for sequence patterns (i.e. (...)\nvs [...]\n).\nNote\nA single pattern enclosed in parentheses without a trailing comma\n(e.g. (3 | 4)\n) is a group pattern.\nWhile a single pattern enclosed in square brackets (e.g. [3 | 4]\n) is\nstill a sequence pattern.\nAt most one star subpattern may be in a sequence pattern. The star subpattern may occur in any position. 
If no star subpattern is present, the sequence pattern is a fixed-length sequence pattern; otherwise it is a variable-length sequence pattern.\nThe following is the logical flow for matching a sequence pattern against a subject value:\nIf the subject value is not a sequence [2], the sequence pattern fails.\nIf the subject value is an instance of str, bytes or bytearray, the sequence pattern fails.\nThe subsequent steps depend on whether the sequence pattern is fixed or variable-length.\nIf the sequence pattern is fixed-length:\nIf the length of the subject sequence is not equal to the number of subpatterns, the sequence pattern fails.\nSubpatterns in the sequence pattern are matched to their corresponding items in the subject sequence from left to right. Matching stops as soon as a subpattern fails. If all subpatterns succeed in matching their corresponding item, the sequence pattern succeeds.\nOtherwise, if the sequence pattern is variable-length:\nIf the length of the subject sequence is less than the number of non-star subpatterns, the sequence pattern fails.\nThe leading non-star subpatterns are matched to their corresponding items as for fixed-length sequences.\nIf the previous step succeeds, the star subpattern matches a list formed of the remaining subject items, excluding the remaining items corresponding to non-star subpatterns following the star subpattern.\nRemaining non-star subpatterns are matched to their corresponding subject items, as for a fixed-length sequence.\nNote\nThe length of the subject sequence is obtained via len() (i.e. via the __len__() protocol). This length may be cached by the interpreter in a similar manner as value patterns.\nIn simple terms [P1, P2, P3, \u2026 , P<N>] matches only if all the following happens:\ncheck <subject> is a sequence\nlen(subject) == <N>\nP1 matches <subject>[0] (note that this match can also bind names)\nP2 matches <subject>[1] (note that this match can also bind names)\n\u2026 and so on for the corresponding pattern/element.\n8.6.4.9. 
Mapping Patterns\u00b6\nA mapping pattern contains one or more key-value patterns. The syntax is similar to the construction of a dictionary. Syntax:\nmapping_pattern: \"{\" [items_pattern] \"}\"\nitems_pattern: \",\".key_value_pattern+ \",\"?\nkey_value_pattern: (literal_pattern | value_pattern) \":\" pattern\n| double_star_pattern\ndouble_star_pattern: \"**\" capture_pattern\nAt most one double star pattern may be in a mapping pattern. The double star pattern must be the last subpattern in the mapping pattern.\nDuplicate keys in mapping patterns are disallowed. Duplicate literal keys will raise a SyntaxError. Two keys that otherwise have the same value will raise a ValueError at runtime.\nThe following is the logical flow for matching a mapping pattern against a subject value:\nIf the subject value is not a mapping [3], the mapping pattern fails.\nIf every key given in the mapping pattern is present in the subject mapping, and the pattern for each key matches the corresponding item of the subject mapping, the mapping pattern succeeds.\nIf duplicate keys are detected in the mapping pattern, the pattern is considered invalid. A SyntaxError is raised for duplicate literal values; or a ValueError for named keys of the same value.\nNote\nKey-value pairs are matched using the two-argument form of the mapping subject\u2019s get() method. Matched key-value pairs must already be present in the mapping, and not created on-the-fly via __missing__() or __getitem__().\nIn simple terms {KEY1: P1, KEY2: P2, ... } matches only if all the following happens:\ncheck <subject> is a mapping\nKEY1 in <subject>\nP1 matches <subject>[KEY1]\n\u2026 and so on for the corresponding KEY/pattern pair.\n8.6.4.10. Class Patterns\u00b6\nA class pattern represents a class and its positional and keyword arguments (if any). Syntax:\nclass_pattern: name_or_attr \"(\" [pattern_arguments \",\"?] 
\")\" pattern_arguments:positional_patterns\n[\",\"keyword_patterns\n] |keyword_patterns\npositional_patterns: \",\".pattern\n+ keyword_patterns: \",\".keyword_pattern\n+ keyword_pattern: NAME \"=\"pattern\nThe same keyword should not be repeated in class patterns.\nThe following is the logical flow for matching a class pattern against a subject value:\nIf\nname_or_attr\nis not an instance of the builtintype\n, raiseTypeError\n.If the subject value is not an instance of\nname_or_attr\n(tested viaisinstance()\n), the class pattern fails.If no pattern arguments are present, the pattern succeeds. Otherwise, the subsequent steps depend on whether keyword or positional argument patterns are present.\nFor a number of built-in types (specified below), a single positional subpattern is accepted which will match the entire subject; for these types keyword patterns also work as for other types.\nIf only keyword patterns are present, they are processed as follows, one by one:\nThe keyword is looked up as an attribute on the subject.\nIf this raises an exception other than\nAttributeError\n, the exception bubbles up.If this raises\nAttributeError\n, the class pattern has failed.Else, the subpattern associated with the keyword pattern is matched against the subject\u2019s attribute value. 
If this fails, the class pattern fails; if this succeeds, the match proceeds to the next keyword.\nIf all keyword patterns succeed, the class pattern succeeds.\nIf any positional patterns are present, they are converted to keyword patterns using the __match_args__ attribute on the class name_or_attr before matching:\nThe equivalent of getattr(cls, \"__match_args__\", ()) is called.\nIf this raises an exception, the exception bubbles up.\nIf the returned value is not a tuple, the conversion fails and TypeError is raised.\nIf there are more positional patterns than len(cls.__match_args__), TypeError is raised.\nOtherwise, positional pattern i is converted to a keyword pattern using __match_args__[i] as the keyword. __match_args__[i] must be a string; if not TypeError is raised.\nIf there are duplicate keywords, TypeError is raised.\nOnce all positional patterns have been converted to keyword patterns, the match proceeds as if there were only keyword patterns.\nFor the following built-in types the handling of positional subpatterns is different:\nbool, bytearray, bytes, dict, float, frozenset, int, list, set, str, tuple\nThese classes accept a single positional argument, and the pattern there is matched against the whole object rather than an attribute. For example int(0|1) matches the value 0, but not the value 0.0.\nIn simple terms CLS(P1, attr=P2) matches only if the following happens:\nisinstance(<subject>, CLS)\nconvert P1 to a keyword pattern using CLS.__match_args__\nFor each keyword argument attr=P2:\nhasattr(<subject>, \"attr\")\nP2 matches <subject>.attr\n\u2026 and so on for the corresponding keyword argument/pattern pair.\n8.7. 
Function definitions\u00b6\nA function definition defines a user-defined function object (see section The standard type hierarchy):\nfuncdef: [decorators\n] \"def\"funcname\n[type_params\n] \"(\" [parameter_list\n] \")\" [\"->\"expression\n] \":\"suite\ndecorators:decorator\n+ decorator: \"@\"assignment_expression\nNEWLINE parameter_list:defparameter\n(\",\"defparameter\n)* \",\" \"/\" [\",\" [parameter_list_no_posonly\n]] |parameter_list_no_posonly\nparameter_list_no_posonly:defparameter\n(\",\"defparameter\n)* [\",\" [parameter_list_starargs\n]] |parameter_list_starargs\nparameter_list_starargs: \"*\" [star_parameter\n] (\",\"defparameter\n)* [\",\" [parameter_star_kwargs\n]] | \"*\" (\",\"defparameter\n)+ [\",\" [parameter_star_kwargs\n]] |parameter_star_kwargs\nparameter_star_kwargs: \"**\"parameter\n[\",\"] parameter:identifier\n[\":\"expression\n] star_parameter:identifier\n[\":\" [\"*\"]expression\n] defparameter:parameter\n[\"=\"expression\n] funcname:identifier\nA function definition is an executable statement. Its execution binds the function name in the current local namespace to a function object (a wrapper around the executable code for the function). This function object contains a reference to the current global namespace as the global namespace to be used when the function is called.\nThe function definition does not execute the function body; this gets executed only when the function is called. [4]\nA function definition may be wrapped by one or more decorator expressions. Decorator expressions are evaluated when the function is defined, in the scope that contains the function definition. The result must be a callable, which is invoked with the function object as the only argument. The returned value is bound to the function name instead of the function object. Multiple decorators are applied in nested fashion. 
For example, the following code\n@f1(arg)\n@f2\ndef func(): pass\nis roughly equivalent to\ndef func(): pass\nfunc = f1(arg)(f2(func))\nexcept that the original function is not temporarily bound to the name func\n.\nChanged in version 3.9: Functions may be decorated with any valid\nassignment_expression\n. Previously, the grammar was\nmuch more restrictive; see PEP 614 for details.\nA list of type parameters may be given in square brackets\nbetween the function\u2019s name and the opening parenthesis for its parameter list.\nThis indicates to static type checkers that the function is generic. At runtime,\nthe type parameters can be retrieved from the function\u2019s\n__type_params__\nattribute. See Generic functions for more.\nChanged in version 3.12: Type parameter lists are new in Python 3.12.\nWhen one or more parameters have the form parameter =\nexpression, the function is said to have \u201cdefault parameter values.\u201d For a\nparameter with a default value, the corresponding argument may be\nomitted from a call, in which\ncase the parameter\u2019s default value is substituted. If a parameter has a default\nvalue, all following parameters up until the \u201c*\n\u201d must also have a default\nvalue \u2014 this is a syntactic restriction that is not expressed by the grammar.\nDefault parameter values are evaluated from left to right when the function\ndefinition is executed. This means that the expression is evaluated once, when\nthe function is defined, and that the same \u201cpre-computed\u201d value is used for each\ncall. This is especially important to understand when a default parameter value is a\nmutable object, such as a list or a dictionary: if the function modifies the\nobject (e.g. by appending an item to a list), the default parameter value is in effect\nmodified. This is generally not what was intended. 
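The mutable-default pitfall described above can be observed directly (an illustrative sketch, not part of the reference text; the function name append_item is invented):

```python
# Sketch: the default list is evaluated once, when the function is
# defined, so every call that omits the argument shares the same object.
def append_item(item, bucket=[]):
    bucket.append(item)
    return bucket

first = append_item(1)   # uses the single shared default list
second = append_item(2)  # the very same list object again
# first and second are the same object and now hold [1, 2]
```

Both calls return the identical list object, which is why the None-sentinel idiom shown next is the usual workaround.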
A way around this is to use\nNone\nas the default, and explicitly test for it in the body of the function,\ne.g.:\ndef whats_on_the_telly(penguin=None):\nif penguin is None:\npenguin = []\npenguin.append(\"property of the zoo\")\nreturn penguin\nFunction call semantics are described in more detail in section Calls. A\nfunction call always assigns values to all parameters mentioned in the parameter\nlist, either from positional arguments, from keyword arguments, or from default\nvalues. If the form \u201c*identifier\n\u201d is present, it is initialized to a tuple\nreceiving any excess positional parameters, defaulting to the empty tuple.\nIf the form \u201c**identifier\n\u201d is present, it is initialized to a new\nordered mapping receiving any excess keyword arguments, defaulting to a\nnew empty mapping of the same type. Parameters after \u201c*\n\u201d or\n\u201c*identifier\n\u201d are keyword-only parameters and may only be passed\nby keyword arguments. Parameters before \u201c/\n\u201d are positional-only parameters\nand may only be passed by positional arguments.\nChanged in version 3.8: The /\nfunction parameter syntax may be used to indicate positional-only\nparameters. See PEP 570 for details.\nParameters may have an annotation of the form \u201c: expression\n\u201d\nfollowing the parameter name. Any parameter may have an annotation, even those of the form\n*identifier\nor **identifier\n. (As a special case, parameters of the form\n*identifier\nmay have an annotation \u201c: *expression\n\u201d.) Functions may have \u201creturn\u201d annotation of\nthe form \u201c-> expression\n\u201d after the parameter list. These annotations can be\nany valid Python expression. The presence of annotations does not change the\nsemantics of a function. See Annotations for more information on annotations.\nChanged in version 3.11: Parameters of the form \u201c*identifier\n\u201d may have an annotation\n\u201c: *expression\n\u201d. 
See PEP 646.\nIt is also possible to create anonymous functions (functions not bound to a\nname), for immediate use in expressions. This uses lambda expressions, described in\nsection Lambdas. Note that the lambda expression is merely a shorthand for a\nsimplified function definition; a function defined in a \u201cdef\n\u201d\nstatement can be passed around or assigned to another name just like a function\ndefined by a lambda expression. The \u201cdef\n\u201d form is actually more powerful\nsince it allows the execution of multiple statements and annotations.\nProgrammer\u2019s note: Functions are first-class objects. A \u201cdef\n\u201d statement\nexecuted inside a function definition defines a local function that can be\nreturned or passed around. Free variables used in the nested function can\naccess the local variables of the function containing the def. See section\nNaming and binding for details.\nSee also\n- PEP 3107 - Function Annotations\nThe original specification for function annotations.\n- PEP 484 - Type Hints\nDefinition of a standard meaning for annotations: type hints.\n- PEP 526 - Syntax for Variable Annotations\nAbility to type hint variable declarations, including class variables and instance variables.\n- PEP 563 - Postponed Evaluation of Annotations\nSupport for forward references within annotations by preserving annotations in a string form at runtime instead of eager evaluation.\n- PEP 318 - Decorators for Functions and Methods\nFunction and method decorators were introduced. Class decorators were introduced in PEP 3129.\n8.8. Class definitions\u00b6\nA class definition defines a class object (see section The standard type hierarchy):\nclassdef: [decorators\n] \"class\"classname\n[type_params\n] [inheritance\n] \":\"suite\ninheritance: \"(\" [argument_list\n] \")\" classname:identifier\nA class definition is an executable statement. 
The inheritance list usually\ngives a list of base classes (see Metaclasses for more advanced uses), so\neach item in the list should evaluate to a class object which allows\nsubclassing. Classes without an inheritance list inherit, by default, from the\nbase class object\n; hence,\nclass Foo:\npass\nis equivalent to\nclass Foo(object):\npass\nThe class\u2019s suite is then executed in a new execution frame (see Naming and binding), using a newly created local namespace and the original global namespace. (Usually, the suite contains mostly function definitions.) When the class\u2019s suite finishes execution, its execution frame is discarded but its local namespace is saved. [5] A class object is then created using the inheritance list for the base classes and the saved local namespace for the attribute dictionary. The class name is bound to this class object in the original local namespace.\nThe order in which attributes are defined in the class body is preserved\nin the new class\u2019s __dict__\n. Note that this is reliable only right\nafter the class is created and only for classes that were defined using\nthe definition syntax.\nClass creation can be customized heavily using metaclasses.\nClasses can also be decorated: just like when decorating functions,\n@f1(arg)\n@f2\nclass Foo: pass\nis roughly equivalent to\nclass Foo: pass\nFoo = f1(arg)(f2(Foo))\nThe evaluation rules for the decorator expressions are the same as for function decorators. The result is then bound to the class name.\nChanged in version 3.9: Classes may be decorated with any valid\nassignment_expression\n. Previously, the grammar was\nmuch more restrictive; see PEP 614 for details.\nA list of type parameters may be given in square brackets\nimmediately after the class\u2019s name.\nThis indicates to static type checkers that the class is generic. At runtime,\nthe type parameters can be retrieved from the class\u2019s\n__type_params__\nattribute. 
See Generic classes for more.\nChanged in version 3.12: Type parameter lists are new in Python 3.12.\nProgrammer\u2019s note: Variables defined in the class definition are class\nattributes; they are shared by instances. Instance attributes can be set in a\nmethod with self.name = value\n. Both class and instance attributes are\naccessible through the notation \u201cself.name\n\u201d, and an instance attribute hides\na class attribute with the same name when accessed in this way. Class\nattributes can be used as defaults for instance attributes, but using mutable\nvalues there can lead to unexpected results. Descriptors\ncan be used to create instance variables with different implementation details.\nSee also\n- PEP 3115 - Metaclasses in Python 3000\nThe proposal that changed the declaration of metaclasses to the current syntax, and the semantics for how classes with metaclasses are constructed.\n- PEP 3129 - Class Decorators\nThe proposal that added class decorators. Function and method decorators were introduced in PEP 318.\n8.9. Coroutines\u00b6\nAdded in version 3.5.\n8.9.1. Coroutine function definition\u00b6\nasync_funcdef: [decorators\n] \"async\" \"def\"funcname\n\"(\" [parameter_list\n] \")\" [\"->\"expression\n] \":\"suite\nExecution of Python coroutines can be suspended and resumed at many points\n(see coroutine). await\nexpressions, async for\nand\nasync with\ncan only be used in the body of a coroutine function.\nFunctions defined with async def\nsyntax are always coroutine functions,\neven if they do not contain await\nor async\nkeywords.\nIt is a SyntaxError\nto use a yield from\nexpression inside the body\nof a coroutine function.\nAn example of a coroutine function:\nasync def func(param1, param2):\ndo_stuff()\nawait some_coroutine()\nChanged in version 3.7: await\nand async\nare now keywords; previously they were only\ntreated as such inside the body of a coroutine function.\n8.9.2. 
The async for\nstatement\u00b6\nasync_for_stmt: \"async\" for_stmt\nAn asynchronous iterable provides an __aiter__\nmethod that directly\nreturns an asynchronous iterator, which can call asynchronous code in\nits __anext__\nmethod.\nThe async for\nstatement allows convenient iteration over asynchronous\niterables.\nThe following code:\nasync for TARGET in ITER:\nSUITE\nelse:\nSUITE2\nIs semantically equivalent to:\niter = (ITER).__aiter__()\nrunning = True\nwhile running:\ntry:\nTARGET = await iter.__anext__()\nexcept StopAsyncIteration:\nrunning = False\nelse:\nSUITE\nelse:\nSUITE2\nexcept that implicit special method lookup is used\nfor __aiter__()\nand __anext__()\n.\nIt is a SyntaxError\nto use an async for\nstatement outside the\nbody of a coroutine function.\n8.9.3. The async with\nstatement\u00b6\nasync_with_stmt: \"async\" with_stmt\nAn asynchronous context manager is a context manager that is able to suspend execution in its enter and exit methods.\nThe following code:\nasync with EXPRESSION as TARGET:\nSUITE\nis semantically equivalent to:\nmanager = (EXPRESSION)\naenter = manager.__aenter__\naexit = manager.__aexit__\nvalue = await aenter()\nhit_except = False\ntry:\nTARGET = value\nSUITE\nexcept:\nhit_except = True\nif not await aexit(*sys.exc_info()):\nraise\nfinally:\nif not hit_except:\nawait aexit(None, None, None)\nexcept that implicit special method lookup is used\nfor __aenter__()\nand __aexit__()\n.\nIt is a SyntaxError\nto use an async with\nstatement outside the\nbody of a coroutine function.\nSee also\n- PEP 492 - Coroutines with async and await syntax\nThe proposal that made coroutines a proper standalone concept in Python, and added supporting syntax.\n8.10. Type parameter lists\u00b6\nAdded in version 3.12.\nChanged in version 3.13: Support for default values was added (see PEP 696).\ntype_params: \"[\"type_param\n(\",\"type_param\n)* \"]\" type_param:typevar\n|typevartuple\n|paramspec\ntypevar:identifier\n(\":\"expression\n)? 
(\"=\"expression\n)? typevartuple: \"*\"identifier\n(\"=\"expression\n)? paramspec: \"**\"identifier\n(\"=\"expression\n)?\nFunctions (including coroutines), classes and type aliases may contain a type parameter list:\ndef max[T](args: list[T]) -> T:\n...\nasync def amax[T](args: list[T]) -> T:\n...\nclass Bag[T]:\ndef __iter__(self) -> Iterator[T]:\n...\ndef add(self, arg: T) -> None:\n...\ntype ListOrSet[T] = list[T] | set[T]\nSemantically, this indicates that the function, class, or type alias is generic over a type variable. This information is primarily used by static type checkers, and at runtime, generic objects behave much like their non-generic counterparts.\nType parameters are declared in square brackets ([]\n) immediately\nafter the name of the function, class, or type alias. The type parameters\nare accessible within the scope of the generic object, but not elsewhere.\nThus, after a declaration def func[T](): pass\n, the name T\nis not available in\nthe module scope. Below, the semantics of generic objects are described\nwith more precision. The scope of type parameters is modeled with a special\nfunction (technically, an annotation scope) that\nwraps the creation of the generic object.\nGeneric functions, classes, and type aliases have a\n__type_params__\nattribute listing their type parameters.\nType parameters come in three kinds:\ntyping.TypeVar\n, introduced by a plain name (e.g.,T\n). Semantically, this represents a single type to a type checker.typing.TypeVarTuple\n, introduced by a name prefixed with a single asterisk (e.g.,*Ts\n). Semantically, this stands for a tuple of any number of types.typing.ParamSpec\n, introduced by a name prefixed with two asterisks (e.g.,**P\n). Semantically, this stands for the parameters of a callable.\ntyping.TypeVar\ndeclarations can define bounds and constraints with\na colon (:\n) followed by an expression. A single expression after the colon\nindicates a bound (e.g. T: int\n). 
Semantically, this means\nthat the typing.TypeVar\ncan only represent types that are a subtype of\nthis bound. A parenthesized tuple of expressions after the colon indicates a\nset of constraints (e.g. T: (str, bytes)\n). Each member of the tuple should be a\ntype (again, this is not enforced at runtime). Constrained type variables can only\ntake on one of the types in the list of constraints.\nFor typing.TypeVar\ns declared using the type parameter list syntax,\nthe bound and constraints are not evaluated when the generic object is created,\nbut only when the value is explicitly accessed through the attributes __bound__\nand __constraints__\n. To accomplish this, the bounds or constraints are\nevaluated in a separate annotation scope.\ntyping.TypeVarTuple\ns and typing.ParamSpec\ns cannot have bounds\nor constraints.\nAll three flavors of type parameters can also have a default value, which is used\nwhen the type parameter is not explicitly provided. This is added by appending\na single equals sign (=\n) followed by an expression. Like the bounds and\nconstraints of type variables, the default value is not evaluated when the\nobject is created, but only when the type parameter\u2019s __default__\nattribute\nis accessed. To this end, the default value is evaluated in a separate\nannotation scope. If no default value is specified\nfor a type parameter, the __default__\nattribute is set to the special\nsentinel object typing.NoDefault\n.\nThe following example indicates the full set of allowed type parameter declarations:\ndef overly_generic[\nSimpleTypeVar,\nTypeVarWithDefault = int,\nTypeVarWithBound: int,\nTypeVarWithConstraints: (str, bytes),\n*SimpleTypeVarTuple = (int, float),\n**SimpleParamSpec = (str, bytearray),\n](\na: SimpleTypeVar,\nb: TypeVarWithDefault,\nc: TypeVarWithBound,\nd: Callable[SimpleParamSpec, TypeVarWithConstraints],\n*e: SimpleTypeVarTuple,\n): ...\n8.10.1. 
Generic functions\u00b6\nGeneric functions are declared as follows:\ndef func[T](arg: T): ...\nThis syntax is equivalent to:\nannotation-def TYPE_PARAMS_OF_func():\nT = typing.TypeVar(\"T\")\ndef func(arg: T): ...\nfunc.__type_params__ = (T,)\nreturn func\nfunc = TYPE_PARAMS_OF_func()\nHere annotation-def\nindicates an annotation scope,\nwhich is not actually bound to any name at runtime. (One\nother liberty is taken in the translation: the syntax does not go through\nattribute access on the typing\nmodule, but creates an instance of\ntyping.TypeVar\ndirectly.)\nThe annotations of generic functions are evaluated within the annotation scope used for declaring the type parameters, but the function\u2019s defaults and decorators are not.\nThe following example illustrates the scoping rules for these cases, as well as for additional flavors of type parameters:\n@decorator\ndef func[T: int, *Ts, **P](*args: *Ts, arg: Callable[P, T] = some_default):\n...\nExcept for the lazy evaluation of the\nTypeVar\nbound, this is equivalent to:\nDEFAULT_OF_arg = some_default\nannotation-def TYPE_PARAMS_OF_func():\nannotation-def BOUND_OF_T():\nreturn int\n# In reality, BOUND_OF_T() is evaluated only on demand.\nT = typing.TypeVar(\"T\", bound=BOUND_OF_T())\nTs = typing.TypeVarTuple(\"Ts\")\nP = typing.ParamSpec(\"P\")\ndef func(*args: *Ts, arg: Callable[P, T] = DEFAULT_OF_arg):\n...\nfunc.__type_params__ = (T, Ts, P)\nreturn func\nfunc = decorator(TYPE_PARAMS_OF_func())\nThe capitalized names like DEFAULT_OF_arg\nare not actually\nbound at runtime.\n8.10.2. 
Generic classes\u00b6\nGeneric classes are declared as follows:\nclass Bag[T]: ...\nThis syntax is equivalent to:\nannotation-def TYPE_PARAMS_OF_Bag():\nT = typing.TypeVar(\"T\")\nclass Bag(typing.Generic[T]):\n__type_params__ = (T,)\n...\nreturn Bag\nBag = TYPE_PARAMS_OF_Bag()\nHere again annotation-def\n(not a real keyword) indicates an\nannotation scope, and the name\nTYPE_PARAMS_OF_Bag\nis not actually bound at runtime.\nGeneric classes implicitly inherit from typing.Generic\n.\nThe base classes and keyword arguments of generic classes are\nevaluated within the type scope for the type parameters,\nand decorators are evaluated outside that scope. This is illustrated\nby this example:\n@decorator\nclass Bag(Base[T], arg=T): ...\nThis is equivalent to:\nannotation-def TYPE_PARAMS_OF_Bag():\nT = typing.TypeVar(\"T\")\nclass Bag(Base[T], typing.Generic[T], arg=T):\n__type_params__ = (T,)\n...\nreturn Bag\nBag = decorator(TYPE_PARAMS_OF_Bag())\n8.10.3. Generic type aliases\u00b6\nThe type\nstatement can also be used to create a generic type alias:\ntype ListOrSet[T] = list[T] | set[T]\nExcept for the lazy evaluation of the value, this is equivalent to:\nannotation-def TYPE_PARAMS_OF_ListOrSet():\nT = typing.TypeVar(\"T\")\nannotation-def VALUE_OF_ListOrSet():\nreturn list[T] | set[T]\n# In reality, the value is lazily evaluated\nreturn typing.TypeAliasType(\"ListOrSet\", VALUE_OF_ListOrSet(), type_params=(T,))\nListOrSet = TYPE_PARAMS_OF_ListOrSet()\nHere, annotation-def\n(not a real keyword) indicates an\nannotation scope. The capitalized names\nlike TYPE_PARAMS_OF_ListOrSet\nare not actually bound at runtime.\n8.11. 
Annotations\u00b6\nChanged in version 3.14: Annotations are now lazily evaluated by default.\nVariables and function parameters may carry annotations, created by adding a colon after the name, followed by an expression:\nx: annotation = 1\ndef f(param: annotation): ...\nFunctions may also carry a return annotation following an arrow:\ndef f() -> annotation: ...\nAnnotations are conventionally used for type hints, but this\nis not enforced by the language, and in general annotations may contain arbitrary\nexpressions. The presence of annotations does not change the runtime semantics of\nthe code, except if some mechanism is used that introspects and uses the annotations\n(such as dataclasses\nor functools.singledispatch()\n).\nBy default, annotations are lazily evaluated in an annotation scope.\nThis means that they are not evaluated when the code containing the annotation is evaluated.\nInstead, the interpreter saves information that can be used to evaluate the annotation later\nif requested. 
The annotationlib\nmodule provides tools for evaluating annotations.\nIf the future statement from __future__ import annotations\nis present,\nall annotations are instead stored as strings:\n>>> from __future__ import annotations\n>>> def f(param: annotation): ...\n>>> f.__annotations__\n{'param': 'annotation'}\nThis future statement will be deprecated and removed in a future version of Python,\nbut not before Python 3.13 reaches its end of life (see PEP 749).\nWhen it is used, introspection tools like\nannotationlib.get_annotations()\nand typing.get_type_hints()\nare\nless likely to be able to resolve annotations at runtime.\nFootnotes", "code_snippets": [" ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", "\n ", "\n ", " ", " ", " ", "\n ", "\n ", "\n", " ", " ", " ", "\n ", "\n", " ", " ", " ", "\n ", "\n ", "\n ", "\n ", " ", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", " ", "\n", " ", "\n", " ", " ", "\n", " ", "\n", " ", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", "\n", " ", " ", " ", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", " ", " ", " ", "\n", " ", "\n", "\n", "\n", "\n ", "\n ", "\n ", "\n ", " ", "\n", "\n ", "\n ", " ", "\n ", "\n ", " ", "\n", " ", " ", " ", "\n ", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n\n", "\n ", " ", " ", "\n ", "\n", "\n ", " ", " ", "\n ", " ", " ", "\n ", "\n", "\n ", " ", " ", "\n ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", "\n ", "\n", " ", " ", " ", "\n ", " ", " ", " ", "\n ", "\n", " ", "\n ", " ", " ", "\n ", " ", " ", "\n", "\n ", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", "\n", " ", " ", " ", " ", " ", " ", "\n", " ", "\n", " ", " ", " ", " ", "\n", " ", "\n", " ", " ", "\n", " ", "\n", "\n", "\n", "\n", "\n", " ", "\n", " ", "\n", " ", " ", "\n", "\n ", " ", " ", " ", 
"\n ", " ", " ", "\n ", "\n ", " ", "\n", "\n ", "\n", "\n ", "\n", "\n", "\n", " ", "\n", " ", "\n", " ", " ", "\n", " ", " ", "\n ", "\n ", " ", "\n", " ", " ", " ", " ", "\n ", "\n", "\n ", "\n", " ", " ", "\n", " ", " ", "\n\n", " ", "\n ", "\n ", " ", " ", " ", "\n ", " ", "\n ", " ", " ", "\n ", "\n ", "\n", "\n ", "\n", " ", " ", " ", " ", "\n ", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", "\n\n", "\n ", " ", " ", "\n ", "\n", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", "\n", "\n ", " ", " ", "\n ", " ", " ", " ", "\n", " ", " ", " ", "\n ", "\n\n", " ", " ", " ", " ", "\n ", "\n\n", "\n ", " ", " ", "\n ", "\n\n ", " ", " ", " ", " ", "\n ", "\n\n", " ", " ", " ", " ", " ", "\n", "\n ", "\n ", " ", " ", "\n ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", " ", "\n ", " ", "\n", " ", "\n", " ", " ", "\n", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", "\n", " ", " ", "\n", "\n", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n ", "\n", " ", " ", "\n\n", "\n\n ", "\n ", " ", "\n ", "\n ", " ", " ", " ", "\n\n ", " ", " ", "\n ", " ", " ", "\n\n ", " ", " ", " ", " ", " ", " ", "\n ", "\n\n ", " ", " ", " ", " ", "\n ", " ", "\n", " ", " ", "\n", " ", "\n", "\n ", " ", " ", "\n ", "\n ", " ", " ", "\n ", "\n ", " ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", "\n ", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", "\n ", " ", " ", "\n\n ", "\n ", " ", " ", " ", "\n ", "\n ", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", "\n", " ", "\n", " ", " ", "\n", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 13426} +{"url": "https://docs.python.org/3/c-api/datetime.html", "title": "DateTime Objects", "content": "DateTime Objects\u00b6\nVarious date and time objects are supplied by the 
datetime\nmodule.\nBefore using any of these functions, the header file datetime.h\nmust be\nincluded in your source (note that this is not included by Python.h\n),\nand the macro PyDateTime_IMPORT\nmust be invoked, usually as part of\nthe module initialisation function. The macro puts a pointer to a C structure\ninto a static variable, PyDateTimeAPI\n, that is used by the following\nmacros.\n-\nPyDateTime_IMPORT()\u00b6\nImport the datetime C API.\nOn success, populate the\nPyDateTimeAPI\npointer. On failure, setPyDateTimeAPI\ntoNULL\nand set an exception. The caller must check if an error occurred viaPyErr_Occurred()\n:PyDateTime_IMPORT; if (PyErr_Occurred()) { /* cleanup */ }\nWarning\nThis is not compatible with subinterpreters.\n-\ntype PyDateTime_CAPI\u00b6\nStructure containing the fields for the datetime C API.\nThe fields of this structure are private and subject to change.\nDo not use this directly; prefer\nPyDateTime_*\nAPIs instead.\n-\nPyDateTime_CAPI *PyDateTimeAPI\u00b6\nDynamically allocated object containing the datetime C API.\nThis variable is only available once\nPyDateTime_IMPORT\nsucceeds.\n-\ntype PyDateTime_Delta\u00b6\nThis subtype of\nPyObject\nrepresents the difference between two datetime values.\n-\nPyTypeObject PyDateTime_DateType\u00b6\nThis instance of\nPyTypeObject\nrepresents the Python date type; it is the same object asdatetime.date\nin the Python layer.\n-\nPyTypeObject PyDateTime_DateTimeType\u00b6\nThis instance of\nPyTypeObject\nrepresents the Python datetime type; it is the same object asdatetime.datetime\nin the Python layer.\n-\nPyTypeObject PyDateTime_TimeType\u00b6\nThis instance of\nPyTypeObject\nrepresents the Python time type; it is the same object asdatetime.time\nin the Python layer.\n-\nPyTypeObject PyDateTime_DeltaType\u00b6\nThis instance of\nPyTypeObject\nrepresents the Python type for the difference between two datetime values; it is the same object asdatetime.timedelta\nin the Python layer.\n-\nPyTypeObject 
PyDateTime_TZInfoType\u00b6\nThis instance of\nPyTypeObject\nrepresents the Python time zone info type; it is the same object asdatetime.tzinfo\nin the Python layer.\nMacro for access to the UTC singleton:\n-\nPyObject *PyDateTime_TimeZone_UTC\u00b6\nReturns the time zone singleton representing UTC, the same object as\ndatetime.timezone.utc\n.Added in version 3.7.\nType-check macros:\n-\nint PyDate_Check(PyObject *ob)\u00b6\nReturn true if ob is of type\nPyDateTime_DateType\nor a subtype ofPyDateTime_DateType\n. ob must not beNULL\n. This function always succeeds.\n-\nint PyDate_CheckExact(PyObject *ob)\u00b6\nReturn true if ob is of type\nPyDateTime_DateType\n. ob must not beNULL\n. This function always succeeds.\n-\nint PyDateTime_Check(PyObject *ob)\u00b6\nReturn true if ob is of type\nPyDateTime_DateTimeType\nor a subtype ofPyDateTime_DateTimeType\n. ob must not beNULL\n. This function always succeeds.\n-\nint PyDateTime_CheckExact(PyObject *ob)\u00b6\nReturn true if ob is of type\nPyDateTime_DateTimeType\n. ob must not beNULL\n. This function always succeeds.\n-\nint PyTime_Check(PyObject *ob)\u00b6\nReturn true if ob is of type\nPyDateTime_TimeType\nor a subtype ofPyDateTime_TimeType\n. ob must not beNULL\n. This function always succeeds.\n-\nint PyTime_CheckExact(PyObject *ob)\u00b6\nReturn true if ob is of type\nPyDateTime_TimeType\n. ob must not beNULL\n. This function always succeeds.\n-\nint PyDelta_Check(PyObject *ob)\u00b6\nReturn true if ob is of type\nPyDateTime_DeltaType\nor a subtype ofPyDateTime_DeltaType\n. ob must not beNULL\n. This function always succeeds.\n-\nint PyDelta_CheckExact(PyObject *ob)\u00b6\nReturn true if ob is of type\nPyDateTime_DeltaType\n. ob must not beNULL\n. This function always succeeds.\n-\nint PyTZInfo_Check(PyObject *ob)\u00b6\nReturn true if ob is of type\nPyDateTime_TZInfoType\nor a subtype ofPyDateTime_TZInfoType\n. ob must not beNULL\n. 
This function always succeeds.\n-\nint PyTZInfo_CheckExact(PyObject *ob)\u00b6\nReturn true if ob is of type\nPyDateTime_TZInfoType\n. ob must not beNULL\n. This function always succeeds.\nMacros to create objects:\n-\nPyObject *PyDate_FromDate(int year, int month, int day)\u00b6\n- Return value: New reference.\nReturn a\ndatetime.date\nobject with the specified year, month and day.\n-\nPyObject *PyDateTime_FromDateAndTime(int year, int month, int day, int hour, int minute, int second, int usecond)\u00b6\n- Return value: New reference.\nReturn a\ndatetime.datetime\nobject with the specified year, month, day, hour, minute, second and microsecond.\n-\nPyObject *PyDateTime_FromDateAndTimeAndFold(int year, int month, int day, int hour, int minute, int second, int usecond, int fold)\u00b6\n- Return value: New reference.\nReturn a\ndatetime.datetime\nobject with the specified year, month, day, hour, minute, second, microsecond and fold.Added in version 3.6.\n-\nPyObject *PyTime_FromTime(int hour, int minute, int second, int usecond)\u00b6\n- Return value: New reference.\nReturn a\ndatetime.time\nobject with the specified hour, minute, second and microsecond.\n-\nPyObject *PyTime_FromTimeAndFold(int hour, int minute, int second, int usecond, int fold)\u00b6\n- Return value: New reference.\nReturn a\ndatetime.time\nobject with the specified hour, minute, second, microsecond and fold.Added in version 3.6.\n-\nPyObject *PyDelta_FromDSU(int days, int seconds, int useconds)\u00b6\n- Return value: New reference.\nReturn a\ndatetime.timedelta\nobject representing the given number of days, seconds and microseconds. 
Normalization is performed so that the resulting number of microseconds and seconds lie in the ranges documented fordatetime.timedelta\nobjects.\n-\nPyObject *PyTimeZone_FromOffset(PyObject *offset)\u00b6\n- Return value: New reference.\nReturn a\ndatetime.timezone\nobject with an unnamed fixed offset represented by the offset argument.Added in version 3.7.\n-\nPyObject *PyTimeZone_FromOffsetAndName(PyObject *offset, PyObject *name)\u00b6\n- Return value: New reference.\nReturn a\ndatetime.timezone\nobject with a fixed offset represented by the offset argument and with tzname name.Added in version 3.7.\nMacros to extract fields from date objects. The argument must be an instance of\nPyDateTime_Date\n, including subclasses (such as\nPyDateTime_DateTime\n). The argument must not be NULL\n, and the type is\nnot checked:\n-\nint PyDateTime_GET_YEAR(PyDateTime_Date *o)\u00b6\nReturn the year, as a positive int.\n-\nint PyDateTime_GET_MONTH(PyDateTime_Date *o)\u00b6\nReturn the month, as an int from 1 through 12.\n-\nint PyDateTime_GET_DAY(PyDateTime_Date *o)\u00b6\nReturn the day, as an int from 1 through 31.\nMacros to extract fields from datetime objects. The argument must be an\ninstance of PyDateTime_DateTime\n, including subclasses. 
The argument\nmust not be NULL\n, and the type is not checked:\n-\nint PyDateTime_DATE_GET_HOUR(PyDateTime_DateTime *o)\u00b6\nReturn the hour, as an int from 0 through 23.\n-\nint PyDateTime_DATE_GET_MINUTE(PyDateTime_DateTime *o)\u00b6\nReturn the minute, as an int from 0 through 59.\n-\nint PyDateTime_DATE_GET_SECOND(PyDateTime_DateTime *o)\u00b6\nReturn the second, as an int from 0 through 59.\n-\nint PyDateTime_DATE_GET_MICROSECOND(PyDateTime_DateTime *o)\u00b6\nReturn the microsecond, as an int from 0 through 999999.\n-\nint PyDateTime_DATE_GET_FOLD(PyDateTime_DateTime *o)\u00b6\nReturn the fold, as an int from 0 through 1.\nAdded in version 3.6.\n-\nPyObject *PyDateTime_DATE_GET_TZINFO(PyDateTime_DateTime *o)\u00b6\nReturn the tzinfo (which may be\nNone\n).Added in version 3.10.\nMacros to extract fields from time objects. The argument must be an instance of\nPyDateTime_Time\n, including subclasses. The argument must not be NULL\n,\nand the type is not checked:\n-\nint PyDateTime_TIME_GET_HOUR(PyDateTime_Time *o)\u00b6\nReturn the hour, as an int from 0 through 23.\n-\nint PyDateTime_TIME_GET_MINUTE(PyDateTime_Time *o)\u00b6\nReturn the minute, as an int from 0 through 59.\n-\nint PyDateTime_TIME_GET_SECOND(PyDateTime_Time *o)\u00b6\nReturn the second, as an int from 0 through 59.\n-\nint PyDateTime_TIME_GET_MICROSECOND(PyDateTime_Time *o)\u00b6\nReturn the microsecond, as an int from 0 through 999999.\n-\nint PyDateTime_TIME_GET_FOLD(PyDateTime_Time *o)\u00b6\nReturn the fold, as an int from 0 through 1.\nAdded in version 3.6.\n-\nPyObject *PyDateTime_TIME_GET_TZINFO(PyDateTime_Time *o)\u00b6\nReturn the tzinfo (which may be\nNone\n).Added in version 3.10.\nMacros to extract fields from time delta objects. The argument must be an\ninstance of PyDateTime_Delta\n, including subclasses. 
The argument must\nnot be NULL\n, and the type is not checked:\n-\nint PyDateTime_DELTA_GET_DAYS(PyDateTime_Delta *o)\u00b6\nReturn the number of days, as an int from -999999999 to 999999999.\nAdded in version 3.3.\n-\nint PyDateTime_DELTA_GET_SECONDS(PyDateTime_Delta *o)\u00b6\nReturn the number of seconds, as an int from 0 through 86399.\nAdded in version 3.3.\n-\nint PyDateTime_DELTA_GET_MICROSECONDS(PyDateTime_Delta *o)\u00b6\nReturn the number of microseconds, as an int from 0 through 999999.\nAdded in version 3.3.\nMacros for the convenience of modules implementing the DB API:\n-\nPyObject *PyDateTime_FromTimestamp(PyObject *args)\u00b6\n- Return value: New reference.\nCreate and return a new\ndatetime.datetime\nobject given an argument tuple suitable for passing todatetime.datetime.fromtimestamp()\n.\n-\nPyObject *PyDate_FromTimestamp(PyObject *args)\u00b6\n- Return value: New reference.\nCreate and return a new\ndatetime.date\nobject given an argument tuple suitable for passing todatetime.date.fromtimestamp()\n.\nInternal data\u00b6\nThe following symbols are exposed by the C API but should be considered internal-only.\n-\nPyDateTime_CAPSULE_NAME\u00b6\nName of the datetime capsule to pass to\nPyCapsule_Import()\n.Internal usage only. Use\nPyDateTime_IMPORT\ninstead.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2358} +{"url": "https://docs.python.org/3/c-api/dict.html", "title": "Dictionary Objects", "content": "Dictionary Objects\u00b6\n-\nPyTypeObject PyDict_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python dictionary type. This is the same object asdict\nin the Python layer.\n-\nint PyDict_Check(PyObject *p)\u00b6\nReturn true if p is a dict object or an instance of a subtype of the dict type. This function always succeeds.\n-\nint PyDict_CheckExact(PyObject *p)\u00b6\nReturn true if p is a dict object, but not an instance of a subtype of the dict type. 
This function always succeeds.\n-\nPyObject *PyDict_New()\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new empty dictionary, or\nNULL\non failure.\n-\nPyObject *PyDictProxy_New(PyObject *mapping)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a\ntypes.MappingProxyType\nobject for a mapping which enforces read-only behavior. This is normally used to create a view to prevent modification of the dictionary for non-dynamic class types.\n-\nPyTypeObject PyDictProxy_Type\u00b6\n- Part of the Stable ABI.\nThe type object for mapping proxy objects created by\nPyDictProxy_New()\nand for the read-only__dict__\nattribute of many built-in types. APyDictProxy_Type\ninstance provides a dynamic, read-only view of an underlying dictionary: changes to the underlying dictionary are reflected in the proxy, but the proxy itself does not support mutation operations. This corresponds totypes.MappingProxyType\nin Python.\n-\nvoid PyDict_Clear(PyObject *p)\u00b6\n- Part of the Stable ABI.\nEmpty an existing dictionary of all key-value pairs.\n-\nint PyDict_Contains(PyObject *p, PyObject *key)\u00b6\n- Part of the Stable ABI.\nDetermine if dictionary p contains key. If an item in p matches key, return\n1\n, otherwise return0\n. On error, return-1\n. This is equivalent to the Python expressionkey in p\n.\n-\nint PyDict_ContainsString(PyObject *p, const char *key)\u00b6\nThis is the same as\nPyDict_Contains()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.Added in version 3.13.\n-\nPyObject *PyDict_Copy(PyObject *p)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new dictionary that contains the same key-value pairs as p.\n-\nint PyDict_SetItem(PyObject *p, PyObject *key, PyObject *val)\u00b6\n- Part of the Stable ABI.\nInsert val into the dictionary p with a key of key. key must be hashable; if it isn\u2019t,\nTypeError\nwill be raised. 
Return0\non success or-1\non failure. This function does not steal a reference to val.\n-\nint PyDict_SetItemString(PyObject *p, const char *key, PyObject *val)\u00b6\n- Part of the Stable ABI.\nThis is the same as\nPyDict_SetItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\n-\nint PyDict_DelItem(PyObject *p, PyObject *key)\u00b6\n- Part of the Stable ABI.\nRemove the entry in dictionary p with key key. key must be hashable; if it isn\u2019t,\nTypeError\nis raised. If key is not in the dictionary,KeyError\nis raised. Return0\non success or-1\non failure.\n-\nint PyDict_DelItemString(PyObject *p, const char *key)\u00b6\n- Part of the Stable ABI.\nThis is the same as\nPyDict_DelItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\n-\nint PyDict_GetItemRef(PyObject *p, PyObject *key, PyObject **result)\u00b6\n- Part of the Stable ABI since version 3.13.\nReturn a new strong reference to the object from dictionary p which has a key key:\nIf the key is present, set *result to a new strong reference to the value and return\n1\n.If the key is missing, set *result to\nNULL\nand return0\n.On error, raise an exception and return\n-1\n.\nAdded in version 3.13.\nSee also the\nPyObject_GetItem()\nfunction.\n-\nPyObject *PyDict_GetItem(PyObject *p, PyObject *key)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nReturn a borrowed reference to the object from dictionary p which has a key key. Return\nNULL\nif the key key is missing without setting an exception.Note\nExceptions that occur while this calls\n__hash__()\nand__eq__()\nmethods are silently ignored. Prefer thePyDict_GetItemWithError()\nfunction instead.Changed in version 3.10: Calling this API without an attached thread state had been allowed for historical reason. It is no longer allowed.\n-\nPyObject *PyDict_GetItemWithError(PyObject *p, PyObject *key)\u00b6\n- Return value: Borrowed reference. 
Part of the Stable ABI.\nVariant of\nPyDict_GetItem()\nthat does not suppress exceptions. ReturnNULL\nwith an exception set if an exception occurred. ReturnNULL\nwithout an exception set if the key wasn\u2019t present.\n-\nPyObject *PyDict_GetItemString(PyObject *p, const char *key)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nThis is the same as\nPyDict_GetItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.Note\nExceptions that occur while this calls\n__hash__()\nand__eq__()\nmethods or while creating the temporarystr\nobject are silently ignored. Prefer using thePyDict_GetItemWithError()\nfunction with your ownPyUnicode_FromString()\nkey instead.\n-\nint PyDict_GetItemStringRef(PyObject *p, const char *key, PyObject **result)\u00b6\n- Part of the Stable ABI since version 3.13.\nSimilar to\nPyDict_GetItemRef()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.Added in version 3.13.\n-\nPyObject *PyDict_SetDefault(PyObject *p, PyObject *key, PyObject *defaultobj)\u00b6\n- Return value: Borrowed reference.\nThis is the same as the Python-level\ndict.setdefault()\n. If present, it returns the value corresponding to key from the dictionary p. If the key is not in the dict, it is inserted with value defaultobj and defaultobj is returned. This function evaluates the hash function of key only once, instead of evaluating it independently for the lookup and the insertion.Added in version 3.4.\n-\nint PyDict_SetDefaultRef(PyObject *p, PyObject *key, PyObject *default_value, PyObject **result)\u00b6\nInserts default_value into the dictionary p with a key of key if the key is not already present in the dictionary. If result is not\nNULL\n, then *result is set to a strong reference to either default_value, if the key was not present, or the existing value, if key was already present in the dictionary. 
Returns1\nif the key was present and default_value was not inserted, or0\nif the key was not present and default_value was inserted. On failure, returns-1\n, sets an exception, and sets*result\ntoNULL\n.For clarity: if you have a strong reference to default_value before calling this function, then after it returns, you hold a strong reference to both default_value and *result (if it\u2019s not\nNULL\n). These may refer to the same object: in that case you hold two separate references to it.Added in version 3.13.\n-\nint PyDict_Pop(PyObject *p, PyObject *key, PyObject **result)\u00b6\nRemove key from dictionary p and optionally return the removed value. Do not raise\nKeyError\nif the key is missing.If the key is present, set *result to a new reference to the removed value if result is not\nNULL\n, and return1\n.If the key is missing, set *result to\nNULL\nif result is notNULL\n, and return0\n.On error, raise an exception and return\n-1\n.\nSimilar to\ndict.pop()\n, but without the default value and not raisingKeyError\nif the key is missing.Added in version 3.13.\n-\nint PyDict_PopString(PyObject *p, const char *key, PyObject **result)\u00b6\nSimilar to\nPyDict_Pop()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.Added in version 3.13.\n-\nPyObject *PyDict_Items(PyObject *p)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a\nPyListObject\ncontaining all the items from the dictionary.\n-\nPyObject *PyDict_Keys(PyObject *p)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a\nPyListObject\ncontaining all the keys from the dictionary.\n-\nPyObject *PyDict_Values(PyObject *p)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a\nPyListObject\ncontaining all the values from the dictionary p.\n-\nPy_ssize_t PyDict_Size(PyObject *p)\u00b6\n- Part of the Stable ABI.\nReturn the number of items in the dictionary. 
This is equivalent to\nlen(p)\non a dictionary.\n-\nPy_ssize_t PyDict_GET_SIZE(PyObject *p)\u00b6\nSimilar to\nPyDict_Size()\n, but without error checking.\n-\nint PyDict_Next(PyObject *p, Py_ssize_t *ppos, PyObject **pkey, PyObject **pvalue)\u00b6\n- Part of the Stable ABI.\nIterate over all key-value pairs in the dictionary p. The\nPy_ssize_t\nreferred to by ppos must be initialized to0\nprior to the first call to this function to start the iteration; the function returns true for each pair in the dictionary, and false once all pairs have been reported. The parameters pkey and pvalue should either point to PyObject* variables that will be filled in with each key and value, respectively, or may beNULL\n. Any references returned through them are borrowed. ppos should not be altered during iteration. Its value represents offsets within the internal dictionary structure, and since the structure is sparse, the offsets are not consecutive.For example:\nPyObject *key, *value; Py_ssize_t pos = 0; while (PyDict_Next(self->dict, &pos, &key, &value)) { /* do something interesting with the values... */ ... }\nThe dictionary p should not be mutated during iteration. It is safe to modify the values of the keys as you iterate over the dictionary, but only so long as the set of keys does not change. For example:\nPyObject *key, *value; Py_ssize_t pos = 0; while (PyDict_Next(self->dict, &pos, &key, &value)) { long i = PyLong_AsLong(value); if (i == -1 && PyErr_Occurred()) { return -1; } PyObject *o = PyLong_FromLong(i + 1); if (o == NULL) return -1; if (PyDict_SetItem(self->dict, key, o) < 0) { Py_DECREF(o); return -1; } Py_DECREF(o); }\nThe function is not thread-safe in the free-threaded build without external synchronization. You can use\nPy_BEGIN_CRITICAL_SECTION\nto lock the dictionary while iterating over it:Py_BEGIN_CRITICAL_SECTION(self->dict); while (PyDict_Next(self->dict, &pos, &key, &value)) { ... 
} Py_END_CRITICAL_SECTION();\nNote\nOn the free-threaded build, this function can be used safely inside a critical section. However, the references returned for pkey and pvalue are borrowed and are only valid while the critical section is held. If you need to use these objects outside the critical section or when the critical section can be suspended, create a strong reference (for example, using\nPy_NewRef()\n).\n-\nint PyDict_Merge(PyObject *a, PyObject *b, int override)\u00b6\n- Part of the Stable ABI.\nIterate over mapping object b adding key-value pairs to dictionary a. b may be a dictionary, or any object supporting\nPyMapping_Keys()\nandPyObject_GetItem()\n. If override is true, existing pairs in a will be replaced if a matching key is found in b, otherwise pairs will only be added if there is not a matching key in a. Return0\non success or-1\nif an exception was raised.\n-\nint PyDict_Update(PyObject *a, PyObject *b)\u00b6\n- Part of the Stable ABI.\nThis is the same as\nPyDict_Merge(a, b, 1)\nin C, and is similar toa.update(b)\nin Python except thatPyDict_Update()\ndoesn\u2019t fall back to the iterating over a sequence of key value pairs if the second argument has no \u201ckeys\u201d attribute. Return0\non success or-1\nif an exception was raised.\n-\nint PyDict_MergeFromSeq2(PyObject *a, PyObject *seq2, int override)\u00b6\n- Part of the Stable ABI.\nUpdate or merge into dictionary a, from the key-value pairs in seq2. seq2 must be an iterable object producing iterable objects of length 2, viewed as key-value pairs. In case of duplicate keys, the last wins if override is true, else the first wins. Return\n0\non success or-1\nif an exception was raised. Equivalent Python (except for the return value):def PyDict_MergeFromSeq2(a, seq2, override): for key, value in seq2: if override or key not in a: a[key] = value\n-\nint PyDict_AddWatcher(PyDict_WatchCallback callback)\u00b6\nRegister callback as a dictionary watcher. 
Return a non-negative integer id which must be passed to future calls to\nPyDict_Watch()\n. In case of error (e.g. no more watcher IDs available), return-1\nand set an exception.Added in version 3.12.\n-\nint PyDict_ClearWatcher(int watcher_id)\u00b6\nClear watcher identified by watcher_id previously returned from\nPyDict_AddWatcher()\n. Return0\non success,-1\non error (e.g. if the given watcher_id was never registered.)Added in version 3.12.\n-\nint PyDict_Watch(int watcher_id, PyObject *dict)\u00b6\nMark dictionary dict as watched. The callback granted watcher_id by\nPyDict_AddWatcher()\nwill be called when dict is modified or deallocated. Return0\non success or-1\non error.Added in version 3.12.\n-\nint PyDict_Unwatch(int watcher_id, PyObject *dict)\u00b6\nMark dictionary dict as no longer watched. The callback granted watcher_id by\nPyDict_AddWatcher()\nwill no longer be called when dict is modified or deallocated. The dict must previously have been watched by this watcher. Return0\non success or-1\non error.Added in version 3.12.\n-\ntype PyDict_WatchEvent\u00b6\nEnumeration of possible dictionary watcher events:\nPyDict_EVENT_ADDED\n,PyDict_EVENT_MODIFIED\n,PyDict_EVENT_DELETED\n,PyDict_EVENT_CLONED\n,PyDict_EVENT_CLEARED\n, orPyDict_EVENT_DEALLOCATED\n.Added in version 3.12.\n-\ntypedef int (*PyDict_WatchCallback)(PyDict_WatchEvent event, PyObject *dict, PyObject *key, PyObject *new_value)\u00b6\nType of a dict watcher callback function.\nIf event is\nPyDict_EVENT_CLEARED\norPyDict_EVENT_DEALLOCATED\n, both key and new_value will beNULL\n. If event isPyDict_EVENT_ADDED\norPyDict_EVENT_MODIFIED\n, new_value will be the new value for key. If event isPyDict_EVENT_DELETED\n, key is being deleted from the dictionary and new_value will beNULL\n.PyDict_EVENT_CLONED\noccurs when dict was previously empty and another dict is merged into it. 
To maintain efficiency of this operation, per-keyPyDict_EVENT_ADDED\nevents are not issued in this case; instead a singlePyDict_EVENT_CLONED\nis issued, and key will be the source dictionary.The callback may inspect but must not modify dict; doing so could have unpredictable effects, including infinite recursion. Do not trigger Python code execution in the callback, as it could modify the dict as a side effect.\nIf event is\nPyDict_EVENT_DEALLOCATED\n, taking a new reference in the callback to the about-to-be-destroyed dictionary will resurrect it and prevent it from being freed at this time. When the resurrected object is destroyed later, any watcher callbacks active at that time will be called again.Callbacks occur before the notified modification to dict takes place, so the prior state of dict can be inspected.\nIf the callback sets an exception, it must return\n-1\n; this exception will be printed as an unraisable exception usingPyErr_WriteUnraisable()\n. Otherwise it should return0\n.There may already be a pending exception set on entry to the callback. In this case, the callback should return\n0\nwith the same exception still set. This means the callback may not call any other API that can set an exception unless it saves and clears the exception state first, and restores it before returning.Added in version 3.12.\nDictionary View Objects\u00b6\n-\nint PyDictViewSet_Check(PyObject *op)\u00b6\nReturn true if op is a view of a set inside a dictionary. This is currently equivalent to PyDictKeys_Check(op) || PyDictItems_Check(op). This function always succeeds.\n-\nPyTypeObject PyDictKeys_Type\u00b6\n- Part of the Stable ABI.\nType object for a view of dictionary keys. In Python, this is the type of the object returned by\ndict.keys()\n.\n-\nint PyDictKeys_Check(PyObject *op)\u00b6\nReturn true if op is an instance of a dictionary keys view. 
This function always succeeds.\n-\nPyTypeObject PyDictValues_Type\u00b6\n- Part of the Stable ABI.\nType object for a view of dictionary values. In Python, this is the type of the object returned by\ndict.values()\n.\n-\nint PyDictValues_Check(PyObject *op)\u00b6\nReturn true if op is an instance of a dictionary values view. This function always succeeds.\n-\nPyTypeObject PyDictItems_Type\u00b6\n- Part of the Stable ABI.\nType object for a view of dictionary items. In Python, this is the type of the object returned by\ndict.items()\n.\nOrdered Dictionaries\u00b6\nPython\u2019s C API provides interface for collections.OrderedDict\nfrom C.\nSince Python 3.7, dictionaries are ordered by default, so there is usually\nlittle need for these functions; prefer PyDict*\nwhere possible.\n-\nPyTypeObject PyODict_Type\u00b6\nType object for ordered dictionaries. This is the same object as\ncollections.OrderedDict\nin the Python layer.\n-\nint PyODict_Check(PyObject *od)\u00b6\nReturn true if od is an ordered dictionary object or an instance of a subtype of the\nOrderedDict\ntype. This function always succeeds.\n-\nint PyODict_CheckExact(PyObject *od)\u00b6\nReturn true if od is an ordered dictionary object, but not an instance of a subtype of the\nOrderedDict\ntype. This function always succeeds.\n-\nPyTypeObject PyODictKeys_Type\u00b6\nAnalogous to\nPyDictKeys_Type\nfor ordered dictionaries.\n-\nPyTypeObject PyODictValues_Type\u00b6\nAnalogous to\nPyDictValues_Type\nfor ordered dictionaries.\n-\nPyTypeObject PyODictItems_Type\u00b6\nAnalogous to\nPyDictItems_Type\nfor ordered dictionaries.\n-\nPyObject *PyODict_New(void)\u00b6\nReturn a new empty ordered dictionary, or\nNULL\non failure.This is analogous to\nPyDict_New()\n.\n-\nint PyODict_SetItem(PyObject *od, PyObject *key, PyObject *value)\u00b6\nInsert value into the ordered dictionary od with a key of key. 
Return 0 on success or -1 with an exception set on failure. This is analogous to PyDict_SetItem().\n-\nint PyODict_DelItem(PyObject *od, PyObject *key)\u00b6\nRemove the entry in the ordered dictionary od with key key. Return 0 on success or -1 with an exception set on failure. This is analogous to PyDict_DelItem().\nThe remaining PyODict* functions are soft deprecated aliases to the corresponding PyDict APIs.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 4384}
{"url": "https://docs.python.org/3/reference/expressions.html", "title": "Expressions", "content": "6. Expressions\u00b6\nThis chapter explains the meaning of the elements of expressions in Python.\nSyntax Notes: In this and the following chapters, grammar notation will be used to describe syntax, not lexical analysis.\nWhen (one alternative of) a syntax rule has the form:\nname: othername\nand no semantics are given, the semantics of this form of name are the same as for othername.\n6.1. Arithmetic conversions\u00b6\nWhen a description of an arithmetic operator below uses the phrase \u201cthe numeric arguments are converted to a common real type\u201d, this means that the operator implementation for built-in numeric types works as described in the Numeric Types section of the standard library documentation.\nSome additional rules apply for certain operators and non-numeric operands (for example, a string as a left argument to the % operator).\nExtensions must define their own conversion behavior.\n6.2. Atoms\u00b6\nAtoms are the most basic elements of expressions. The simplest atoms are names or literals. Forms enclosed in parentheses, brackets or braces are also categorized syntactically as atoms.\nFormally, the syntax for atoms is:\natom: 'True' | 'False' | 'None' | '...' | identifier | literal | enclosure\nenclosure: parenth_form | list_display | dict_display | set_display | generator_expression | yield_atom\n6.2.1. 
Built-in constants\u00b6\nThe keywords True, False, and None name built-in constants.\nThe token ... names the Ellipsis constant.\nEvaluation of these atoms yields the corresponding value.\nNote\nSeveral more built-in constants are available as global variables, but only the ones mentioned here are keywords. In particular, these names cannot be reassigned or used as attributes:\n>>> False = 123\nFile \"<stdin>\", line 1\nFalse = 123\n^^^^^\nSyntaxError: cannot assign to False\n6.2.2. Identifiers (Names)\u00b6\nAn identifier occurring as an atom is a name. See section Names (identifiers and keywords) for the lexical definition and section Naming and binding for documentation of naming and binding.\nWhen the name is bound to an object, evaluation of the atom yields that object.\nWhen a name is not bound, an attempt to evaluate it raises a NameError exception.\n6.2.2.1. Private name mangling\u00b6\nWhen an identifier that textually occurs in a class definition begins with two or more underscore characters and does not end in two or more underscores, it is considered a private name of that class.\nSee also\nThe class specifications.\nMore precisely, private names are transformed to a longer form before code is generated for them. If the transformed name is longer than 255 characters, implementation-defined truncation may happen.\nThe transformation is independent of the syntactical context in which the identifier is used, but only the following private identifiers are mangled:\nAny name used as the name of a variable that is assigned or read, or any name of an attribute being accessed.\nThe __name__ attribute of nested functions, classes, and type aliases is however not mangled.\nThe name of imported modules, e.g., __spam in import __spam.
If the module is part of a package (i.e., its name contains a dot), the name is not mangled, e.g., the __foo in import __foo.bar is not mangled.\nThe name of an imported member, e.g., __f in from spam import __f.\nThe transformation rule is defined as follows:\nThe class name, with leading underscores removed and a single leading underscore inserted, is inserted in front of the identifier, e.g., the identifier __spam occurring in a class named Foo, _Foo or __Foo is transformed to _Foo__spam.\nIf the class name consists only of underscores, the transformation is the identity, e.g., the identifier __spam occurring in a class named _ or __ is left as is.\n6.2.3. Literals\u00b6\nA literal is a textual representation of a value. Python supports numeric, string and bytes literals. Format strings and template strings are treated as string literals.\nNumeric literals consist of a single NUMBER token, which names an integer, floating-point number, or an imaginary number. See the Numeric literals section in the Lexical analysis documentation for details.\nString and bytes literals may consist of several tokens. See section String literal concatenation for details.\nNote that negative and complex numbers, like -3 or 3+4.2j, are syntactically not literals, but unary or binary arithmetic operations involving the - or + operator.\nEvaluation of a literal yields an object of the given type (int, float, complex, str, bytes, or Template) with the given value. The value may be approximated in the case of floating-point and imaginary literals.\nThe formal grammar for literals is:\nliteral: strings | NUMBER\n6.2.3.1. Literals and object identity\u00b6\nAll literals correspond to immutable data types, and hence the object\u2019s identity is less important than its value.
Multiple evaluations of literals with the same value (either the same occurrence in the program text or a different occurrence) may obtain the same object or a different object with the same value.\nCPython implementation detail\nFor example, in CPython, small integers with the same value evaluate to the same object:\n>>> x = 7\n>>> y = 7\n>>> x is y\nTrue\nHowever, large integers evaluate to different objects:\n>>> x = 123456789\n>>> y = 123456789\n>>> x is y\nFalse\nThis behavior may change in future versions of CPython. In particular, the boundary between \u201csmall\u201d and \u201clarge\u201d integers has already changed in the past.\nCPython will emit a SyntaxWarning\nwhen you compare literals\nusing is\n:\n>>> x = 7\n>>> x is 7\n:1: SyntaxWarning: \"is\" with 'int' literal. Did you mean \"==\"?\nTrue\nSee When can I rely on identity tests with the is operator? for more information.\nTemplate strings are immutable but may reference mutable\nobjects as Interpolation\nvalues.\nFor the purposes of this section, two t-strings have the \u201csame value\u201d if\nboth their structure and the identity of the values match.\nCPython implementation detail: Currently, each evaluation of a template string results in a different object.\n6.2.3.2. String literal concatenation\u00b6\nMultiple adjacent string or bytes literals, possibly using different quoting conventions, are allowed, and their meaning is the same as their concatenation:\n>>> \"hello\" 'world'\n\"helloworld\"\nThis feature is defined at the syntactical level, so it only works with literals. To concatenate string expressions at run time, the \u2018+\u2019 operator may be used:\n>>> greeting = \"Hello\"\n>>> space = \" \"\n>>> name = \"Blaise\"\n>>> print(greeting + space + name) # not: print(greeting space name)\nHello Blaise\nLiteral concatenation can freely mix raw strings, triple-quoted strings, and formatted string literals. 
For example:\n>>> \"Hello\" r', ' f\"{name}!\"\n\"Hello, Blaise!\"\nThis feature can be used to reduce the number of backslashes needed, to split long strings conveniently across long lines, or even to add comments to parts of strings. For example:\nre.compile(\"[A-Za-z_]\" # letter or underscore\n\"[A-Za-z0-9_]*\" # letter, digit or underscore\n)\nHowever, bytes literals may only be combined with other byte literals; not with string literals of any kind. Also, template string literals may only be combined with other template string literals:\n>>> t\"Hello\" t\"{name}!\"\nTemplate(strings=('Hello', '!'), interpolations=(...))\nFormally:\nstrings: (STRING\n|fstring\n)+ |tstring\n+\n6.2.4. Parenthesized forms\u00b6\nA parenthesized form is an optional expression list enclosed in parentheses:\nparenth_form: \"(\" [starred_expression\n] \")\"\nA parenthesized expression list yields whatever that expression list yields: if the list contains at least one comma, it yields a tuple; otherwise, it yields the single expression that makes up the expression list.\nAn empty pair of parentheses yields an empty tuple object. Since tuples are immutable, the same rules as for literals apply (i.e., two occurrences of the empty tuple may or may not yield the same object).\nNote that tuples are not formed by the parentheses, but rather by use of the comma. The exception is the empty tuple, for which parentheses are required \u2014 allowing unparenthesized \u201cnothing\u201d in expressions would cause ambiguities and allow common typos to pass uncaught.\n6.2.5. 
Displays for lists, sets and dictionaries\u00b6\nFor constructing a list, a set or a dictionary Python provides special syntax called \u201cdisplays\u201d, each of them in two flavors:\neither the container contents are listed explicitly, or\nthey are computed via a set of looping and filtering instructions, called a comprehension.\nCommon syntax elements for comprehensions are:\ncomprehension:assignment_expression\ncomp_for\ncomp_for: [\"async\"] \"for\"target_list\n\"in\"or_test\n[comp_iter\n] comp_iter:comp_for\n|comp_if\ncomp_if: \"if\"or_test\n[comp_iter\n]\nThe comprehension consists of a single expression followed by at least one\nfor\nclause and zero or more for\nor if\nclauses.\nIn this case, the elements of the new container are those that would be produced\nby considering each of the for\nor if\nclauses a block,\nnesting from left to right, and evaluating the expression to produce an element\neach time the innermost block is reached.\nHowever, aside from the iterable expression in the leftmost for\nclause,\nthe comprehension is executed in a separate implicitly nested scope. This ensures\nthat names assigned to in the target list don\u2019t \u201cleak\u201d into the enclosing scope.\nThe iterable expression in the leftmost for\nclause is evaluated\ndirectly in the enclosing scope and then passed as an argument to the implicitly\nnested scope. Subsequent for\nclauses and any filter condition in the\nleftmost for\nclause cannot be evaluated in the enclosing scope as\nthey may depend on the values obtained from the leftmost iterable. 
For example: [x*y for x in range(10) for y in range(x, x+10)].\nTo ensure the comprehension always results in a container of the appropriate type, yield and yield from expressions are prohibited in the implicitly nested scope.\nSince Python 3.6, in an async def function, an async for clause may be used to iterate over an asynchronous iterator.\nA comprehension in an async def function may consist of either a for or async for clause following the leading expression, may contain additional for or async for clauses, and may also use await expressions.\nIf a comprehension contains async for clauses, or if it contains await expressions or other asynchronous comprehensions anywhere except the iterable expression in the leftmost for clause, it is called an asynchronous comprehension. An asynchronous comprehension may suspend the execution of the coroutine function in which it appears.\nSee also PEP 530.\nAdded in version 3.6: Asynchronous comprehensions were introduced.\nChanged in version 3.8: yield and yield from prohibited in the implicitly nested scope.\nChanged in version 3.11: Asynchronous comprehensions are now allowed inside comprehensions in asynchronous functions. Outer comprehensions implicitly become asynchronous.\n6.2.6. List displays\u00b6\nA list display is a possibly empty series of expressions enclosed in square brackets:\nlist_display: \"[\" [flexible_expression_list | comprehension] \"]\"\nA list display yields a new list object, the contents being specified by either a list of expressions or a comprehension. When a comma-separated list of expressions is supplied, its elements are evaluated from left to right and placed into the list object in that order. When a comprehension is supplied, the list is constructed from the elements resulting from the comprehension.\n6.2.7. 
Set displays\u00b6\nA set display is denoted by curly braces and distinguishable from dictionary displays by the lack of colons separating keys and values:\nset_display: \"{\" (flexible_expression_list\n|comprehension\n) \"}\"\nA set display yields a new mutable set object, the contents being specified by either a sequence of expressions or a comprehension. When a comma-separated list of expressions is supplied, its elements are evaluated from left to right and added to the set object. When a comprehension is supplied, the set is constructed from the elements resulting from the comprehension.\nAn empty set cannot be constructed with {}\n; this literal constructs an empty\ndictionary.\n6.2.8. Dictionary displays\u00b6\nA dictionary display is a possibly empty series of dict items (key/value pairs) enclosed in curly braces:\ndict_display: \"{\" [dict_item_list\n|dict_comprehension\n] \"}\" dict_item_list:dict_item\n(\",\"dict_item\n)* [\",\"] dict_item:expression\n\":\"expression\n| \"**\"or_expr\ndict_comprehension:expression\n\":\"expression\ncomp_for\nA dictionary display yields a new dictionary object.\nIf a comma-separated sequence of dict items is given, they are evaluated from left to right to define the entries of the dictionary: each key object is used as a key into the dictionary to store the corresponding value. This means that you can specify the same key multiple times in the dict item list, and the final dictionary\u2019s value for that key will be the last one given.\nA double asterisk **\ndenotes dictionary unpacking.\nIts operand must be a mapping. Each mapping item is added\nto the new dictionary. 
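A minimal sketch of these dictionary-display semantics, showing that duplicate keys and later ** unpackings override earlier entries:

```python
a = {'x': 1, 'y': 2}
b = {'y': 20, 'z': 30}

# Items are evaluated left to right; later dict items and unpackings win.
merged = {'x': 0, **a, **b, 'z': 300}
assert merged == {'x': 1, 'y': 20, 'z': 300}

# Duplicate keys in a display are not an error; the last value prevails.
assert {'k': 1, 'k': 2} == {'k': 2}
```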
Later values replace values already set by\nearlier dict items and earlier dictionary unpackings.\nAdded in version 3.5: Unpacking into dictionary displays, originally proposed by PEP 448.\nA dict comprehension, in contrast to list and set comprehensions, needs two expressions separated with a colon followed by the usual \u201cfor\u201d and \u201cif\u201d clauses. When the comprehension is run, the resulting key and value elements are inserted in the new dictionary in the order they are produced.\nRestrictions on the types of the key values are listed earlier in section The standard type hierarchy. (To summarize, the key type should be hashable, which excludes all mutable objects.) Clashes between duplicate keys are not detected; the last value (textually rightmost in the display) stored for a given key value prevails.\nChanged in version 3.8: Prior to Python 3.8, in dict comprehensions, the evaluation order of key and value was not well-defined. In CPython, the value was evaluated before the key. Starting with 3.8, the key is evaluated before the value, as proposed by PEP 572.\n6.2.9. Generator expressions\u00b6\nA generator expression is a compact generator notation in parentheses:\ngenerator_expression: \"(\"expression\ncomp_for\n\")\"\nA generator expression yields a new generator object. Its syntax is the same as for comprehensions, except that it is enclosed in parentheses instead of brackets or curly braces.\nVariables used in the generator expression are evaluated lazily when the\n__next__()\nmethod is called for the generator object (in the same\nfashion as normal generators). 
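A short sketch of the generator-expression evaluation model: the leftmost iterable is evaluated immediately when the expression is defined, while other names are looked up only when the generator is advanced (make() is a hypothetical helper for illustration):

```python
def make():
    gen = (x * scale for x in range(3))   # range(3) is evaluated now; 'scale' is not
    scale = 10                            # defined before iteration, so lookup succeeds
    return list(gen)

assert make() == [0, 10, 20]

# An error in the leftmost iterable surfaces at definition time,
# not when the first value is requested:
try:
    g = (x for x in 1)                    # 1 is not iterable
except TypeError:
    pass
else:
    raise AssertionError('expected TypeError at definition time')
```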
However, the iterable expression in the\nleftmost for\nclause is immediately evaluated, and the\niterator is immediately created for that iterable, so that an error\nproduced while creating the iterator will be emitted at the point where the generator expression\nis defined, rather than at the point where the first value is retrieved.\nSubsequent for\nclauses and any filter condition in the leftmost\nfor\nclause cannot be evaluated in the enclosing scope as they may\ndepend on the values obtained from the leftmost iterable. For example:\n(x*y for x in range(10) for y in range(x, x+10))\n.\nThe parentheses can be omitted on calls with only one argument. See section Calls for details.\nTo avoid interfering with the expected operation of the generator expression\nitself, yield\nand yield from\nexpressions are prohibited in the\nimplicitly defined generator.\nIf a generator expression contains either async for\nclauses or await\nexpressions it is called an\nasynchronous generator expression. An asynchronous generator\nexpression returns a new asynchronous generator object,\nwhich is an asynchronous iterator (see Asynchronous Iterators).\nAdded in version 3.6: Asynchronous generator expressions were introduced.\nChanged in version 3.7: Prior to Python 3.7, asynchronous generator expressions could\nonly appear in async def\ncoroutines. Starting\nwith 3.7, any function can use asynchronous generator expressions.\nChanged in version 3.8: yield\nand yield from\nprohibited in the implicitly nested scope.\n6.2.10. Yield expressions\u00b6\nyield_atom: \"(\"yield_expression\n\")\" yield_from: \"yield\" \"from\"expression\nyield_expression: \"yield\"yield_list\n|yield_from\nThe yield expression is used when defining a generator function\nor an asynchronous generator function and\nthus can only be used in the body of a function definition. 
Using a yield\nexpression in a function\u2019s body causes that function to be a generator function,\nand using it in an async def\nfunction\u2019s body causes that\ncoroutine function to be an asynchronous generator function. For example:\ndef gen(): # defines a generator function\nyield 123\nasync def agen(): # defines an asynchronous generator function\nyield 123\nDue to their side effects on the containing scope, yield\nexpressions\nare not permitted as part of the implicitly defined scopes used to\nimplement comprehensions and generator expressions.\nChanged in version 3.8: Yield expressions prohibited in the implicitly nested scopes used to implement comprehensions and generator expressions.\nGenerator functions are described below, while asynchronous generator functions are described separately in section Asynchronous generator functions.\nWhen a generator function is called, it returns an iterator known as a\ngenerator. That generator then controls the execution of the generator\nfunction. The execution starts when one of the generator\u2019s methods is called.\nAt that time, the execution proceeds to the first yield expression, where it is\nsuspended again, returning the value of yield_list\nto the generator\u2019s caller,\nor None\nif yield_list\nis omitted.\nBy suspended, we mean that all local state is\nretained, including the current bindings of local variables, the instruction\npointer, the internal evaluation stack, and the state of any exception handling.\nWhen the execution is resumed by calling one of the generator\u2019s methods, the\nfunction can proceed exactly as if the yield expression were just another\nexternal call. The value of the yield expression after resuming depends on the\nmethod which resumed the execution. If __next__()\nis used\n(typically via either a for\nor the next()\nbuiltin) then the\nresult is None\n. 
Otherwise, if send()\nis used, then\nthe result will be the value passed in to that method.\nAll of this makes generator functions quite similar to coroutines; they yield multiple times, they have more than one entry point and their execution can be suspended. The only difference is that a generator function cannot control where the execution should continue after it yields; the control is always transferred to the generator\u2019s caller.\nYield expressions are allowed anywhere in a try\nconstruct. If the\ngenerator is not resumed before it is\nfinalized (by reaching a zero reference count or by being garbage collected),\nthe generator-iterator\u2019s close()\nmethod will be called,\nallowing any pending finally\nclauses to execute.\nWhen yield from \nis used, the supplied expression must be an\niterable. The values produced by iterating that iterable are passed directly\nto the caller of the current generator\u2019s methods. Any values passed in with\nsend()\nand any exceptions passed in with\nthrow()\nare passed to the underlying iterator if it has the\nappropriate methods. If this is not the case, then send()\nwill raise AttributeError\nor TypeError\n, while\nthrow()\nwill just raise the passed in exception immediately.\nWhen the underlying iterator is complete, the value\nattribute of the raised StopIteration\ninstance becomes the value of\nthe yield expression. 
It can be either set explicitly when raising\nStopIteration\n, or automatically when the subiterator is a generator\n(by returning a value from the subgenerator).\nChanged in version 3.3: Added yield from \nto delegate control flow to a subiterator.\nThe parentheses may be omitted when the yield expression is the sole expression on the right hand side of an assignment statement.\nSee also\n- PEP 255 - Simple Generators\nThe proposal for adding generators and the\nyield\nstatement to Python.- PEP 342 - Coroutines via Enhanced Generators\nThe proposal to enhance the API and syntax of generators, making them usable as simple coroutines.\n- PEP 380 - Syntax for Delegating to a Subgenerator\nThe proposal to introduce the\nyield_from\nsyntax, making delegation to subgenerators easy.- PEP 525 - Asynchronous Generators\nThe proposal that expanded on PEP 492 by adding generator capabilities to coroutine functions.\n6.2.10.1. Generator-iterator methods\u00b6\nThis subsection describes the methods of a generator iterator. They can be used to control the execution of a generator function.\nNote that calling any of the generator methods below when the generator\nis already executing raises a ValueError\nexception.\n- generator.__next__()\u00b6\nStarts the execution of a generator function or resumes it at the last executed yield expression. When a generator function is resumed with a\n__next__()\nmethod, the current yield expression always evaluates toNone\n. The execution then continues to the next yield expression, where the generator is suspended again, and the value of theyield_list\nis returned to__next__()\n\u2019s caller. If the generator exits without yielding another value, aStopIteration\nexception is raised.This method is normally called implicitly, e.g. by a\nfor\nloop, or by the built-innext()\nfunction.\n- generator.send(value)\u00b6\nResumes the execution and \u201csends\u201d a value into the generator function. 
The value argument becomes the result of the current yield expression. The\nsend()\nmethod returns the next value yielded by the generator, or raisesStopIteration\nif the generator exits without yielding another value. Whensend()\nis called to start the generator, it must be called withNone\nas the argument, because there is no yield expression that could receive the value.\n- generator.throw(value)\u00b6\n- generator.throw(type[, value[, traceback]])\nRaises an exception at the point where the generator was paused, and returns the next value yielded by the generator function. If the generator exits without yielding another value, a\nStopIteration\nexception is raised. If the generator function does not catch the passed-in exception, or raises a different exception, then that exception propagates to the caller.In typical use, this is called with a single exception instance similar to the way the\nraise\nkeyword is used.For backwards compatibility, however, the second signature is supported, following a convention from older versions of Python. The type argument should be an exception class, and value should be an exception instance. If the value is not provided, the type constructor is called to get an instance. If traceback is provided, it is set on the exception, otherwise any existing\n__traceback__\nattribute stored in value may be cleared.Changed in version 3.12: The second signature (type[, value[, traceback]]) is deprecated and may be removed in a future version of Python.\n- generator.close()\u00b6\nRaises a\nGeneratorExit\nexception at the point where the generator function was paused (equivalent to callingthrow(GeneratorExit)\n). The exception is raised by the yield expression where the generator was paused. If the generator function catches the exception and returns a value, this value is returned fromclose()\n. If the generator function is already closed, or raisesGeneratorExit\n(by not catching the exception),close()\nreturnsNone\n. 
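The send() and close() behavior described here can be sketched with a small accumulator generator (accumulate() and log are illustrative names):

```python
def accumulate():
    total = 0
    try:
        while True:
            total += (yield total)   # value passed to send() lands here
    finally:
        log.append('closed')         # pending finally runs on close()

log = []
gen = accumulate()
assert gen.send(None) == 0           # starting the generator requires None
assert gen.send(5) == 5
assert gen.send(3) == 8
gen.close()                          # raises GeneratorExit at the paused yield
assert log == ['closed']
```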
If the generator yields a value, aRuntimeError\nis raised. If the generator raises any other exception, it is propagated to the caller. If the generator has already exited due to an exception or normal exit,close()\nreturnsNone\nand has no other effect.Changed in version 3.13: If a generator returns a value upon being closed, the value is returned by\nclose()\n.\n6.2.10.2. Examples\u00b6\nHere is a simple example that demonstrates the behavior of generators and generator functions:\n>>> def echo(value=None):\n... print(\"Execution starts when 'next()' is called for the first time.\")\n... try:\n... while True:\n... try:\n... value = (yield value)\n... except Exception as e:\n... value = e\n... finally:\n... print(\"Don't forget to clean up when 'close()' is called.\")\n...\n>>> generator = echo(1)\n>>> print(next(generator))\nExecution starts when 'next()' is called for the first time.\n1\n>>> print(next(generator))\nNone\n>>> print(generator.send(2))\n2\n>>> generator.throw(TypeError, \"spam\")\nTypeError('spam',)\n>>> generator.close()\nDon't forget to clean up when 'close()' is called.\nFor examples using yield from\n, see PEP 380: Syntax for Delegating to a Subgenerator in \u201cWhat\u2019s New in\nPython.\u201d\n6.2.10.3. Asynchronous generator functions\u00b6\nThe presence of a yield expression in a function or method defined using\nasync def\nfurther defines the function as an\nasynchronous generator function.\nWhen an asynchronous generator function is called, it returns an\nasynchronous iterator known as an asynchronous generator object.\nThat object then controls the execution of the generator function.\nAn asynchronous generator object is typically used in an\nasync for\nstatement in a coroutine function analogously to\nhow a generator object would be used in a for\nstatement.\nCalling one of the asynchronous generator\u2019s methods returns an awaitable\nobject, and the execution starts when this object is awaited on. 
At that time, the execution proceeds to the first yield expression, where it is suspended again, returning the value of yield_list to the awaiting coroutine. As with a generator, suspension means that all local state is retained, including the current bindings of local variables, the instruction pointer, the internal evaluation stack, and the state of any exception handling.\nWhen the execution is resumed by awaiting on the next object returned by the asynchronous generator\u2019s methods, the function can proceed exactly as if the yield expression were just another external call. The value of the yield expression after resuming depends on the method which resumed the execution. If __anext__() is used then the result is None. Otherwise, if asend() is used, then the result will be the value passed in to that method.\nIf an asynchronous generator happens to exit early by break, the caller task being cancelled, or other exceptions, the generator\u2019s async cleanup code will run and possibly raise exceptions or access context variables in an unexpected context, perhaps after the lifetime of the tasks it depends on, or during the event loop shutdown when the async-generator garbage collection hook is called.\nTo prevent this, the caller must explicitly close the async generator by calling the aclose() method to finalize the generator and ultimately detach it from the event loop.\nIn an asynchronous generator function, yield expressions are allowed anywhere in a try construct. However, if an asynchronous generator is not resumed before it is finalized (by reaching a zero reference count or by being garbage collected), then a yield expression within a try construct could result in a failure to execute pending finally clauses. 
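A minimal sketch of explicit finalization with aclose(), assuming an asyncio event loop (ticker() and events are illustrative names):

```python
import asyncio

events = []

async def ticker():
    try:
        for i in range(100):
            yield i
    finally:
        events.append('finalized')   # pending cleanup runs when aclose() is awaited

async def main():
    agen = ticker()
    async for value in agen:
        if value == 2:
            break                    # early exit leaves the generator suspended
    await agen.aclose()              # explicitly finalize it

asyncio.run(main())
assert events == ['finalized']
```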
In this case, it is the responsibility of the event loop or\nscheduler running the asynchronous generator to call the asynchronous\ngenerator-iterator\u2019s aclose()\nmethod and run the resulting\ncoroutine object, thus allowing any pending finally\nclauses\nto execute.\nTo take care of finalization upon event loop termination, an event loop should\ndefine a finalizer function which takes an asynchronous generator-iterator and\npresumably calls aclose()\nand executes the coroutine.\nThis finalizer may be registered by calling sys.set_asyncgen_hooks()\n.\nWhen first iterated over, an asynchronous generator-iterator will store the\nregistered finalizer to be called upon finalization. For a reference example\nof a finalizer method see the implementation of\nasyncio.Loop.shutdown_asyncgens\nin Lib/asyncio/base_events.py.\nThe expression yield from \nis a syntax error when used in an\nasynchronous generator function.\n6.2.10.4. Asynchronous generator-iterator methods\u00b6\nThis subsection describes the methods of an asynchronous generator iterator, which are used to control the execution of a generator function.\n- async agen.__anext__()\u00b6\nReturns an awaitable which when run starts to execute the asynchronous generator or resumes it at the last executed yield expression. When an asynchronous generator function is resumed with an\n__anext__()\nmethod, the current yield expression always evaluates toNone\nin the returned awaitable, which when run will continue to the next yield expression. The value of theyield_list\nof the yield expression is the value of theStopIteration\nexception raised by the completing coroutine. 
If the asynchronous generator exits without yielding another value, the awaitable instead raises aStopAsyncIteration\nexception, signalling that the asynchronous iteration has completed.This method is normally called implicitly by a\nasync for\nloop.\n- async agen.asend(value)\u00b6\nReturns an awaitable which when run resumes the execution of the asynchronous generator. As with the\nsend()\nmethod for a generator, this \u201csends\u201d a value into the asynchronous generator function, and the value argument becomes the result of the current yield expression. The awaitable returned by theasend()\nmethod will return the next value yielded by the generator as the value of the raisedStopIteration\n, or raisesStopAsyncIteration\nif the asynchronous generator exits without yielding another value. Whenasend()\nis called to start the asynchronous generator, it must be called withNone\nas the argument, because there is no yield expression that could receive the value.\n- async agen.athrow(value)\u00b6\n- async agen.athrow(type[, value[, traceback]])\nReturns an awaitable that raises an exception of type\ntype\nat the point where the asynchronous generator was paused, and returns the next value yielded by the generator function as the value of the raisedStopIteration\nexception. If the asynchronous generator exits without yielding another value, aStopAsyncIteration\nexception is raised by the awaitable. If the generator function does not catch the passed-in exception, or raises a different exception, then when the awaitable is run that exception propagates to the caller of the awaitable.Changed in version 3.12: The second signature (type[, value[, traceback]]) is deprecated and may be removed in a future version of Python.\n- async agen.aclose()\u00b6\nReturns an awaitable that when run will throw a\nGeneratorExit\ninto the asynchronous generator function at the point where it was paused. 
If the asynchronous generator function then exits gracefully, is already closed, or raises GeneratorExit (by not catching the exception), then the returned awaitable will raise a StopIteration exception. Any further awaitables returned by subsequent calls to the asynchronous generator will raise a StopAsyncIteration exception. If the asynchronous generator yields a value, a RuntimeError is raised by the awaitable. If the asynchronous generator raises any other exception, it is propagated to the caller of the awaitable. If the asynchronous generator has already exited due to an exception or normal exit, then further calls to aclose() will return an awaitable that does nothing.
6.3. Primaries¶
Primaries represent the most tightly bound operations of the language. Their syntax is:
primary: atom | attributeref | subscription | call
6.3.1. Attribute references¶
An attribute reference is a primary followed by a period and a name:
attributeref: primary "." identifier
The primary must evaluate to an object of a type that supports attribute references, which most objects do. This object is then asked to produce the attribute whose name is the identifier. The type and value produced is determined by the object. Multiple evaluations of the same attribute reference may yield different objects.
This production can be customized by overriding the __getattribute__() method or the __getattr__() method. The __getattribute__() method is called first and either returns a value or raises AttributeError if the attribute is not available.
If an AttributeError is raised and the object has a __getattr__() method, that method is called as a fallback.
6.3.2. 
Subscriptions and slicings\u00b6\nThe subscription syntax is usually used for selecting an element from a\ncontainer \u2013 for example, to get a value from\na dict\n:\n>>> digits_by_name = {'one': 1, 'two': 2}\n>>> digits_by_name['two'] # Subscripting a dictionary using the key 'two'\n2\nIn the subscription syntax, the object being subscribed \u2013 a primary \u2013 is followed by a subscript in square brackets. In the simplest case, the subscript is a single expression.\nDepending on the type of the object being subscribed, the subscript is sometimes called a key (for mappings), index (for sequences), or type argument (for generic types). Syntactically, these are all equivalent:\n>>> colors = ['red', 'blue', 'green', 'black']\n>>> colors[3] # Subscripting a list using the index 3\n'black'\n>>> list[str] # Parameterizing the list type using the type argument str\nlist[str]\nAt runtime, the interpreter will evaluate the primary and\nthe subscript, and call the primary\u2019s __getitem__()\nor\n__class_getitem__()\nspecial method with the subscript\nas argument.\nFor more details on which of these methods is called, see\n__class_getitem__ versus __getitem__.\nTo show how subscription works, we can define a custom object that\nimplements __getitem__()\nand prints out the value of\nthe subscript:\n>>> class SubscriptionDemo:\n... def __getitem__(self, key):\n... 
print(f'subscripted with: {key!r}')\n...\n>>> demo = SubscriptionDemo()\n>>> demo[1]\nsubscripted with: 1\n>>> demo['a' * 3]\nsubscripted with: 'aaa'\nSee __getitem__()\ndocumentation for how built-in types handle\nsubscription.\nSubscriptions may also be used as targets in assignment or\ndeletion statements.\nIn these cases, the interpreter will call the subscripted object\u2019s\n__setitem__()\nor __delitem__()\nspecial method, respectively, instead of __getitem__()\n.\n>>> colors = ['red', 'blue', 'green', 'black']\n>>> colors[3] = 'white' # Setting item at index\n>>> colors\n['red', 'blue', 'green', 'white']\n>>> del colors[3] # Deleting item at index 3\n>>> colors\n['red', 'blue', 'green']\nAll advanced forms of subscript documented in the following sections are also usable for assignment and deletion.\n6.3.2.1. Slicings\u00b6\nA more advanced form of subscription, slicing, is commonly used to extract a portion of a sequence. In this form, the subscript is a slice: up to three expressions separated by colons. 
Any of the expressions may be omitted, but a slice must contain at least one colon:\n>>> number_names = ['zero', 'one', 'two', 'three', 'four', 'five']\n>>> number_names[1:3]\n['one', 'two']\n>>> number_names[1:]\n['one', 'two', 'three', 'four', 'five']\n>>> number_names[:3]\n['zero', 'one', 'two']\n>>> number_names[:]\n['zero', 'one', 'two', 'three', 'four', 'five']\n>>> number_names[::2]\n['zero', 'two', 'four']\n>>> number_names[:-3]\n['zero', 'one', 'two']\n>>> del number_names[4:]\n>>> number_names\n['zero', 'one', 'two', 'three']\nWhen a slice is evaluated, the interpreter constructs a slice\nobject\nwhose start\n, stop\nand\nstep\nattributes, respectively, are the results of the\nexpressions between the colons.\nAny missing expression evaluates to None\n.\nThis slice\nobject is then passed to the __getitem__()\nor __class_getitem__()\nspecial method, as above.\n# continuing with the SubscriptionDemo instance defined above:\n>>> demo[2:3]\nsubscripted with: slice(2, 3, None)\n>>> demo[::'spam']\nsubscripted with: slice(None, None, 'spam')\n6.3.2.2. Comma-separated subscripts\u00b6\nThe subscript can also be given as two or more comma-separated expressions or slices:\n# continuing with the SubscriptionDemo instance defined above:\n>>> demo[1, 2, 3]\nsubscripted with: (1, 2, 3)\n>>> demo[1:2, 3]\nsubscripted with: (slice(1, 2, None), 3)\nThis form is commonly used with numerical libraries for slicing\nmulti-dimensional data.\nIn this case, the interpreter constructs a tuple\nof the results of the\nexpressions or slices, and passes this tuple to the __getitem__()\nor __class_getitem__()\nspecial method, as above.\nThe subscript may also be given as a single expression or slice followed by a comma, to specify a one-element tuple:\n>>> demo['spam',]\nsubscripted with: ('spam',)\n6.3.2.3. \u201cStarred\u201d subscriptions\u00b6\nAdded in version 3.11: Expressions in tuple_slices may be starred. 
See PEP 646.\nThe subscript can also contain a starred expression.\nIn this case, the interpreter unpacks the result into a tuple, and passes\nthis tuple to __getitem__()\nor __class_getitem__()\n:\n# continuing with the SubscriptionDemo instance defined above:\n>>> demo[*range(10)]\nsubscripted with: (0, 1, 2, 3, 4, 5, 6, 7, 8, 9)\nStarred expressions may be combined with comma-separated expressions and slices:\n>>> demo['a', 'b', *range(3), 'c']\nsubscripted with: ('a', 'b', 0, 1, 2, 'c')\n6.3.2.4. Formal subscription grammar\u00b6\nsubscription:primary\n'['subscript\n']' subscript:single_subscript\n|tuple_subscript\nsingle_subscript:proper_slice\n|assignment_expression\nproper_slice: [expression\n] \":\" [expression\n] [ \":\" [expression\n] ] tuple_subscript: ','.(single_subscript\n|starred_expression\n)+ [',']\nRecall that the |\noperator denotes ordered choice.\nSpecifically, in subscript\n, if both alternatives would match, the\nfirst (single_subscript\n) has priority.\n6.3.3. Calls\u00b6\nA call calls a callable object (e.g., a function) with a possibly empty series of arguments:\ncall:primary\n\"(\" [argument_list\n[\",\"] |comprehension\n] \")\" argument_list:positional_arguments\n[\",\"starred_and_keywords\n] [\",\"keywords_arguments\n] |starred_and_keywords\n[\",\"keywords_arguments\n] |keywords_arguments\npositional_arguments:positional_item\n(\",\"positional_item\n)* positional_item:assignment_expression\n| \"*\"expression\nstarred_and_keywords: (\"*\"expression\n|keyword_item\n) (\",\" \"*\"expression\n| \",\"keyword_item\n)* keywords_arguments: (keyword_item\n| \"**\"expression\n) (\",\"keyword_item\n| \",\" \"**\"expression\n)* keyword_item:identifier\n\"=\"expression\nAn optional trailing comma may be present after the positional and keyword arguments but does not affect the semantics.\nThe primary must evaluate to a callable object (user-defined functions, built-in\nfunctions, methods of built-in objects, class objects, methods of 
class\ninstances, and all objects having a __call__()\nmethod are callable). All\nargument expressions are evaluated before the call is attempted. Please refer\nto section Function definitions for the syntax of formal parameter lists.\nIf keyword arguments are present, they are first converted to positional\narguments, as follows. First, a list of unfilled slots is created for the\nformal parameters. If there are N positional arguments, they are placed in the\nfirst N slots. Next, for each keyword argument, the identifier is used to\ndetermine the corresponding slot (if the identifier is the same as the first\nformal parameter name, the first slot is used, and so on). If the slot is\nalready filled, a TypeError\nexception is raised. Otherwise, the\nargument is placed in the slot, filling it (even if the expression is\nNone\n, it fills the slot). When all arguments have been processed, the slots\nthat are still unfilled are filled with the corresponding default value from the\nfunction definition. (Default values are calculated, once, when the function is\ndefined; thus, a mutable object such as a list or dictionary used as default\nvalue will be shared by all calls that don\u2019t specify an argument value for the\ncorresponding slot; this should usually be avoided.) If there are any unfilled\nslots for which no default value is specified, a TypeError\nexception is\nraised. Otherwise, the list of filled slots is used as the argument list for\nthe call.\nCPython implementation detail: An implementation may provide built-in functions whose positional parameters\ndo not have names, even if they are \u2018named\u2019 for the purpose of documentation,\nand which therefore cannot be supplied by keyword. 
In CPython, this is the\ncase for functions implemented in C that use PyArg_ParseTuple()\nto\nparse their arguments.\nIf there are more positional arguments than there are formal parameter slots, a\nTypeError\nexception is raised, unless a formal parameter using the syntax\n*identifier\nis present; in this case, that formal parameter receives a tuple\ncontaining the excess positional arguments (or an empty tuple if there were no\nexcess positional arguments).\nIf any keyword argument does not correspond to a formal parameter name, a\nTypeError\nexception is raised, unless a formal parameter using the syntax\n**identifier\nis present; in this case, that formal parameter receives a\ndictionary containing the excess keyword arguments (using the keywords as keys\nand the argument values as corresponding values), or a (new) empty dictionary if\nthere were no excess keyword arguments.\nIf the syntax *expression\nappears in the function call, expression\nmust\nevaluate to an iterable. Elements from these iterables are\ntreated as if they were additional positional arguments. For the call\nf(x1, x2, *y, x3, x4)\n, if y evaluates to a sequence y1, \u2026, yM,\nthis is equivalent to a call with M+4 positional arguments x1, x2,\ny1, \u2026, yM, x3, x4.\nA consequence of this is that although the *expression\nsyntax may appear\nafter explicit keyword arguments, it is processed before the\nkeyword arguments (and any **expression\narguments \u2013 see below). So:\n>>> def f(a, b):\n... 
print(a, b)
...
>>> f(b=1, *(2,))
2 1
>>> f(a=1, *(2,))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: f() got multiple values for keyword argument 'a'
>>> f(1, *(2,))
1 2
It is unusual for both keyword arguments and the *expression syntax to be used in the same call, so in practice this confusion does not often arise.
If the syntax **expression appears in the function call, expression must evaluate to a mapping, the contents of which are treated as additional keyword arguments. If a parameter matching a key has already been given a value (by an explicit keyword argument, or from another unpacking), a TypeError exception is raised.
When **expression is used, each key in this mapping must be a string. Each value from the mapping is assigned to the first formal parameter eligible for keyword assignment whose name is equal to the key. A key need not be a Python identifier (e.g. "max-temp °F" is acceptable, although it will not match any formal parameter that could be declared). If there is no match to a formal parameter the key-value pair is collected by the ** parameter, if there is one, or if there is not, a TypeError exception is raised.
Formal parameters using the syntax *identifier or **identifier cannot be used as positional argument slots or as keyword argument names.
Changed in version 3.5: Function calls accept any number of * and ** unpackings, positional arguments may follow iterable unpackings (*), and keyword arguments may follow dictionary unpackings (**). Originally proposed by PEP 448.
A call always returns some value, possibly None, unless it raises an exception. How this value is computed depends on the type of the callable object.
If it is—
- a user-defined function:
The code block for the function is executed, passing it the argument list. 
The first thing the code block will do is bind the formal parameters to the arguments; this is described in section Function definitions. When the code block executes a\nreturn\nstatement, this specifies the return value of the function call. If execution reaches the end of the code block without executing areturn\nstatement, the return value isNone\n.- a built-in function or method:\nThe result is up to the interpreter; see Built-in Functions for the descriptions of built-in functions and methods.\n- a class object:\nA new instance of that class is returned.\n- a class instance method:\nThe corresponding user-defined function is called, with an argument list that is one longer than the argument list of the call: the instance becomes the first argument.\n- a class instance:\nThe class must define a\n__call__()\nmethod; the effect is then the same as if that method was called.\n6.4. Await expression\u00b6\nSuspend the execution of coroutine on an awaitable object. Can only be used inside a coroutine function.\nawait_expr: \"await\" primary\nAdded in version 3.5.\n6.5. The power operator\u00b6\nThe power operator binds more tightly than unary operators on its left; it binds less tightly than unary operators on its right. The syntax is:\npower: (await_expr\n|primary\n) [\"**\"u_expr\n]\nThus, in an unparenthesized sequence of power and unary operators, the operators\nare evaluated from right to left (this does not constrain the evaluation order\nfor the operands): -1**2\nresults in -1\n.\nThe power operator has the same semantics as the built-in pow()\nfunction,\nwhen called with two arguments: it yields its left argument raised to the power\nof its right argument.\nNumeric arguments are first converted to a common type,\nand the result is of that type.\nFor int operands, the result has the same type as the operands unless the second\nargument is negative; in that case, all arguments are converted to float and a\nfloat result is delivered. 
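The associativity and int/float rules just described can be checked interactively; a minimal sketch in plain Python:

```python
# ** groups right-to-left: 2 ** 3 ** 2 is evaluated as 2 ** (3 ** 2).
assert 2 ** 3 ** 2 == 512

# Unary minus on the left binds less tightly than **: -1 ** 2 is -(1 ** 2).
assert -1 ** 2 == -1
assert (-1) ** 2 == 1

# int ** non-negative int stays an int; a negative exponent yields a float.
assert isinstance(2 ** 10, int)
assert 2 ** -1 == 0.5 and isinstance(2 ** -1, float)

# With two arguments, the built-in pow() has the same semantics as **.
assert pow(2, 10) == 2 ** 10 == 1024
```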
For example, 10**2\nreturns 100\n, but\n10**-2\nreturns 0.01\n.\nRaising 0.0\nto a negative power results in a ZeroDivisionError\n.\nRaising a negative number to a fractional power results in a complex\nnumber. (In earlier versions it raised a ValueError\n.)\nThis operation can be customized using the special __pow__()\nand\n__rpow__()\nmethods.\n6.6. Unary arithmetic and bitwise operations\u00b6\nAll unary arithmetic and bitwise operations have the same priority:\nu_expr:power\n| \"-\"u_expr\n| \"+\"u_expr\n| \"~\"u_expr\nThe unary -\n(minus) operator yields the negation of its numeric argument; the\noperation can be overridden with the __neg__()\nspecial method.\nThe unary +\n(plus) operator yields its numeric argument unchanged; the\noperation can be overridden with the __pos__()\nspecial method.\nThe unary ~\n(invert) operator yields the bitwise inversion of its integer\nargument. The bitwise inversion of x\nis defined as -(x+1)\n. It only\napplies to integral numbers or to custom objects that override the\n__invert__()\nspecial method.\nIn all three cases, if the argument does not have the proper type, a\nTypeError\nexception is raised.\n6.7. Binary arithmetic operations\u00b6\nThe binary arithmetic operations have the conventional priority levels. Note that some of these operations also apply to certain non-numeric types. Apart from the power operator, there are only two levels, one for multiplicative operators and one for additive operators:\nm_expr:u_expr\n|m_expr\n\"*\"u_expr\n|m_expr\n\"@\"m_expr\n|m_expr\n\"//\"u_expr\n|m_expr\n\"/\"u_expr\n|m_expr\n\"%\"u_expr\na_expr:m_expr\n|a_expr\n\"+\"m_expr\n|a_expr\n\"-\"m_expr\nThe *\n(multiplication) operator yields the product of its arguments. The\narguments must either both be numbers, or one argument must be an integer and\nthe other must be a sequence. In the former case, the numbers are\nconverted to a common real type and then\nmultiplied together. 
In the latter case, sequence repetition is performed;\na negative repetition factor yields an empty sequence.\nThis operation can be customized using the special __mul__()\nand\n__rmul__()\nmethods.\nChanged in version 3.14: If only one operand is a complex number, the other operand is converted to a floating-point number.\nThe @\n(at) operator is intended to be used for matrix multiplication. No\nbuiltin Python types implement this operator.\nThis operation can be customized using the special __matmul__()\nand\n__rmatmul__()\nmethods.\nAdded in version 3.5.\nThe /\n(division) and //\n(floor division) operators yield the quotient of\ntheir arguments. The numeric arguments are first\nconverted to a common type.\nDivision of integers yields a float, while floor division of integers results in an\ninteger; the result is that of mathematical division with the \u2018floor\u2019 function\napplied to the result. Division by zero raises the ZeroDivisionError\nexception.\nThe division operation can be customized using the special __truediv__()\nand __rtruediv__()\nmethods.\nThe floor division operation can be customized using the special\n__floordiv__()\nand __rfloordiv__()\nmethods.\nThe %\n(modulo) operator yields the remainder from the division of the first\nargument by the second. The numeric arguments are first\nconverted to a common type.\nA zero right argument raises the ZeroDivisionError\nexception. The\narguments may be floating-point numbers, e.g., 3.14%0.7\nequals 0.34\n(since 3.14\nequals 4*0.7 + 0.34\n.) The modulo operator always yields a\nresult with the same sign as its second operand (or zero); the absolute value of\nthe result is strictly smaller than the absolute value of the second operand\n[1].\nThe floor division and modulo operators are connected by the following\nidentity: x == (x//y)*y + (x%y)\n. Floor division and modulo are also\nconnected with the built-in function divmod()\n: divmod(x, y) == (x//y,\nx%y)\n. 
[2].\nIn addition to performing the modulo operation on numbers, the %\noperator is\nalso overloaded by string objects to perform old-style string formatting (also\nknown as interpolation). The syntax for string formatting is described in the\nPython Library Reference, section printf-style String Formatting.\nThe modulo operation can be customized using the special __mod__()\nand __rmod__()\nmethods.\nThe floor division operator, the modulo operator, and the divmod()\nfunction are not defined for complex numbers. Instead, convert to a\nfloating-point number using the abs()\nfunction if appropriate.\nThe +\n(addition) operator yields the sum of its arguments. The arguments\nmust either both be numbers or both be sequences of the same type. In the\nformer case, the numbers are\nconverted to a common real type and then\nadded together.\nIn the latter case, the sequences are concatenated.\nThis operation can be customized using the special __add__()\nand\n__radd__()\nmethods.\nChanged in version 3.14: If only one operand is a complex number, the other operand is converted to a floating-point number.\nThe -\n(subtraction) operator yields the difference of its arguments.\nThe numeric arguments are first\nconverted to a common real type.\nThis operation can be customized using the special __sub__()\nand\n__rsub__()\nmethods.\nChanged in version 3.14: If only one operand is a complex number, the other operand is converted to a floating-point number.\n6.8. Shifting operations\u00b6\nThe shifting operations have lower priority than the arithmetic operations:\nshift_expr:a_expr\n|shift_expr\n(\"<<\" | \">>\")a_expr\nThese operators accept integers as arguments. 
They shift the first argument to the left or right by the number of bits given by the second argument.\nThe left shift operation can be customized using the special __lshift__()\nand __rlshift__()\nmethods.\nThe right shift operation can be customized using the special __rshift__()\nand __rrshift__()\nmethods.\nA right shift by n bits is defined as floor division by pow(2,n)\n. A left\nshift by n bits is defined as multiplication with pow(2,n)\n.\n6.9. Binary bitwise operations\u00b6\nEach of the three bitwise operations has a different priority level:\nand_expr:shift_expr\n|and_expr\n\"&\"shift_expr\nxor_expr:and_expr\n|xor_expr\n\"^\"and_expr\nor_expr:xor_expr\n|or_expr\n\"|\"xor_expr\nThe &\noperator yields the bitwise AND of its arguments, which must be\nintegers or one of them must be a custom object overriding __and__()\nor\n__rand__()\nspecial methods.\nThe ^\noperator yields the bitwise XOR (exclusive OR) of its arguments, which\nmust be integers or one of them must be a custom object overriding __xor__()\nor\n__rxor__()\nspecial methods.\nThe |\noperator yields the bitwise (inclusive) OR of its arguments, which\nmust be integers or one of them must be a custom object overriding __or__()\nor\n__ror__()\nspecial methods.\n6.10. Comparisons\u00b6\nUnlike C, all comparison operations in Python have the same priority, which is\nlower than that of any arithmetic, shifting or bitwise operation. Also unlike\nC, expressions like a < b < c\nhave the interpretation that is conventional\nin mathematics:\ncomparison:or_expr\n(comp_operator\nor_expr\n)* comp_operator: \"<\" | \">\" | \"==\" | \">=\" | \"<=\" | \"!=\" | \"is\" [\"not\"] | [\"not\"] \"in\"\nComparisons yield boolean values: True\nor False\n. Custom\nrich comparison methods may return non-boolean values. 
In this case\nPython will call bool()\non such value in boolean contexts.\nComparisons can be chained arbitrarily, e.g., x < y <= z\nis equivalent to\nx < y and y <= z\n, except that y\nis evaluated only once (but in both\ncases z\nis not evaluated at all when x < y\nis found to be false).\nFormally, if a, b, c, \u2026, y, z are expressions and op1, op2, \u2026,\nopN are comparison operators, then a op1 b op2 c ... y opN z\nis equivalent\nto a op1 b and b op2 c and ... y opN z\n, except that each expression is\nevaluated at most once.\nNote that a op1 b op2 c\ndoesn\u2019t imply any kind of comparison between a and\nc, so that, e.g., x < y > z\nis perfectly legal (though perhaps not\npretty).\n6.10.1. Value comparisons\u00b6\nThe operators <\n, >\n, ==\n, >=\n, <=\n, and !=\ncompare the\nvalues of two objects. The objects do not need to have the same type.\nChapter Objects, values and types states that objects have a value (in addition to type and identity). The value of an object is a rather abstract notion in Python: For example, there is no canonical access method for an object\u2019s value. Also, there is no requirement that the value of an object should be constructed in a particular way, e.g. comprised of all its data attributes. Comparison operators implement a particular notion of what the value of an object is. One can think of them as defining the value of an object indirectly, by means of their comparison implementation.\nBecause all types are (direct or indirect) subtypes of object\n, they\ninherit the default comparison behavior from object\n. Types can\ncustomize their comparison behavior by implementing\nrich comparison methods like __lt__()\n, described in\nBasic customization.\nThe default behavior for equality comparison (==\nand !=\n) is based on\nthe identity of the objects. Hence, equality comparison of instances with the\nsame identity results in equality, and equality comparison of instances with\ndifferent identities results in inequality. 
A motivation for this default behavior is the desire that all objects should be reflexive (i.e. x is y implies x == y).
A default order comparison (<, >, <=, and >=) is not provided; an attempt raises TypeError. A motivation for this default behavior is the lack of a similar invariant as for equality.
The behavior of the default equality comparison, that instances with different identities are always unequal, may be in contrast to what types will need that have a sensible definition of object value and value-based equality. Such types will need to customize their comparison behavior, and in fact, a number of built-in types have done that.
The following list describes the comparison behavior of the most important built-in types.
Numbers of built-in numeric types (Numeric Types — int, float, complex) and of the standard library types fractions.Fraction and decimal.Decimal can be compared within and across their types, with the restriction that complex numbers do not support order comparison. Within the limits of the types involved, they compare mathematically (algorithmically) correct without loss of precision.
The not-a-number values float('NaN') and decimal.Decimal('NaN') are special. Any ordered comparison of a number to a not-a-number value is false. A counter-intuitive implication is that not-a-number values are not equal to themselves. For example, if x = float('NaN'), 3 < x, x < 3 and x == x are all false, while x != x is true. This behavior is compliant with IEEE 754.
None and NotImplemented are singletons. PEP 8 advises that comparisons for singletons should always be done with is or is not, never the equality operators.
Binary sequences (instances of bytes or bytearray) can be compared within and across their types. 
They compare lexicographically using the numeric values of their elements.
Strings (instances of str) compare lexicographically using the numerical Unicode code points (the result of the built-in function ord()) of their characters. [3]
Strings and binary sequences cannot be directly compared.
Sequences (instances of tuple, list, or range) can be compared only within each of their types, with the restriction that ranges do not support order comparison. Equality comparison across these types results in inequality, and ordering comparison across these types raises TypeError.
Sequences compare lexicographically using comparison of corresponding elements. The built-in containers typically assume identical objects are equal to themselves. That lets them bypass equality tests for identical objects to improve performance and to maintain their internal invariants.
Lexicographical comparison between built-in collections works as follows:
For two collections to compare equal, they must be of the same type, have the same length, and each pair of corresponding elements must compare equal (for example, [1,2] == (1,2) is false because the type is not the same).
Collections that support order comparison are ordered the same as their first unequal elements (for example, [1,2,x] <= [1,2,y] has the same value as x <= y). If a corresponding element does not exist, the shorter collection is ordered first (for example, [1,2] < [1,2,3] is true).
Mappings (instances of dict) compare equal if and only if they have equal (key, value) pairs. Equality comparison of the keys and values enforces reflexivity.
Order comparisons (<, >, <=, and >=) raise TypeError.
Sets (instances of set or frozenset) can be compared within and across their types.
They define order comparison operators to mean subset and superset tests. 
Those relations do not define total orderings (for example, the two sets {1,2} and {2,3} are not equal, nor subsets of one another, nor supersets of one another). Accordingly, sets are not appropriate arguments for functions which depend on total ordering (for example, min(), max(), and sorted() produce undefined results given a list of sets as inputs).
Comparison of sets enforces reflexivity of its elements.
Most other built-in types have no comparison methods implemented, so they inherit the default comparison behavior.
User-defined classes that customize their comparison behavior should follow some consistency rules, if possible:
Equality comparison should be reflexive. In other words, identical objects should compare equal:
x is y implies x == y
Comparison should be symmetric. In other words, the following expressions should have the same result:
x == y and y == x
x != y and y != x
x < y and y > x
x <= y and y >= x
Comparison should be transitive. The following (non-exhaustive) examples illustrate that:
x > y and y > z implies x > z
x < y and y <= z implies x < z
Inverse comparison should result in the boolean negation. In other words, the following expressions should have the same result:
x == y and not x != y
x < y and not x >= y (for total ordering)
x > y and not x <= y (for total ordering)
The last two expressions apply to totally ordered collections (e.g. to sequences, but not to sets or mappings). See also the total_ordering() decorator.
The hash() result should be consistent with equality. Objects that are equal should either have the same hash value, or be marked as unhashable.
Python does not enforce these consistency rules. In fact, the not-a-number values are an example for not following these rules.
6.10.2. Membership test operations¶
The operators in and not in test for membership. x in s evaluates to True if x is a member of s, and False otherwise. x not in s returns the negation of x in s. 
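A short sketch with the built-in containers illustrates the basic behavior of in and not in (note that for a dict the test applies to the keys, as described below):

```python
primes = {2, 3, 5, 7}
assert 5 in primes
assert 9 not in primes

# For a dict, membership tests the keys, not the values.
inventory = {"apples": 3, "pears": 0}
assert "apples" in inventory
assert 3 not in inventory
assert 3 in inventory.values()   # test values explicitly via .values()

# For strings, membership is a substring test.
assert "bc" in "abc"
```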
All built-in sequences and\nset types support this as well as dictionary, for which in\ntests\nwhether the dictionary has a given key. For container types such as list, tuple,\nset, frozenset, dict, or collections.deque, the expression x in y\nis equivalent\nto any(x is e or x == e for e in y)\n.\nFor the string and bytes types, x in y\nis True\nif and only if x is a\nsubstring of y. An equivalent test is y.find(x) != -1\n. Empty strings are\nalways considered to be a substring of any other string, so \"\" in \"abc\"\nwill\nreturn True\n.\nFor user-defined classes which define the __contains__()\nmethod, x in\ny\nreturns True\nif y.__contains__(x)\nreturns a true value, and\nFalse\notherwise.\nFor user-defined classes which do not define __contains__()\nbut do define\n__iter__()\n, x in y\nis True\nif some value z\n, for which the\nexpression x is z or x == z\nis true, is produced while iterating over y\n.\nIf an exception is raised during the iteration, it is as if in\nraised\nthat exception.\nLastly, the old-style iteration protocol is tried: if a class defines\n__getitem__()\n, x in y\nis True\nif and only if there is a non-negative\ninteger index i such that x is y[i] or x == y[i]\n, and no lower integer index\nraises the IndexError\nexception. (If any other exception is raised, it is as\nif in\nraised that exception).\nThe operator not in\nis defined to have the inverse truth value of\nin\n.\n6.10.3. Identity comparisons\u00b6\nThe operators is\nand is not\ntest for an object\u2019s identity: x\nis y\nis true if and only if x and y are the same object. An Object\u2019s identity\nis determined using the id()\nfunction. x is not y\nyields the inverse\ntruth value. [4]\n6.11. 
Boolean operations\u00b6\nor_test:and_test\n|or_test\n\"or\"and_test\nand_test:not_test\n|and_test\n\"and\"not_test\nnot_test:comparison\n| \"not\"not_test\nIn the context of Boolean operations, and also when expressions are used by\ncontrol flow statements, the following values are interpreted as false:\nFalse\n, None\n, numeric zero of all types, and empty strings and containers\n(including strings, tuples, lists, dictionaries, sets and frozensets). All\nother values are interpreted as true. User-defined objects can customize their\ntruth value by providing a __bool__()\nmethod.\nThe operator not\nyields True\nif its argument is false, False\notherwise.\nThe expression x and y\nfirst evaluates x; if x is false, its value is\nreturned; otherwise, y is evaluated and the resulting value is returned.\nThe expression x or y\nfirst evaluates x; if x is true, its value is\nreturned; otherwise, y is evaluated and the resulting value is returned.\nNote that neither and\nnor or\nrestrict the value and type\nthey return to False\nand True\n, but rather return the last evaluated\nargument. This is sometimes useful, e.g., if s\nis a string that should be\nreplaced by a default value if it is empty, the expression s or 'foo'\nyields\nthe desired value. Because not\nhas to create a new value, it\nreturns a boolean value regardless of the type of its argument\n(for example, not 'foo'\nproduces False\nrather than ''\n.)\n6.12. 
Assignment expressions\u00b6\nassignment_expression: [identifier\n\":=\"]expression\nAn assignment expression (sometimes also called a \u201cnamed expression\u201d or\n\u201cwalrus\u201d) assigns an expression\nto an\nidentifier\n, while also returning the value of the\nexpression\n.\nOne common use case is when handling matched regular expressions:\nif matching := pattern.search(data):\ndo_something(matching)\nOr, when processing a file stream in chunks:\nwhile chunk := file.read(9000):\nprocess(chunk)\nAssignment expressions must be surrounded by parentheses when\nused as expression statements and when used as sub-expressions in\nslicing, conditional, lambda,\nkeyword-argument, and comprehension-if expressions and\nin assert\n, with\n, and assignment\nstatements.\nIn all other places where they can be used, parentheses are not required,\nincluding in if\nand while\nstatements.\nAdded in version 3.8: See PEP 572 for more details about assignment expressions.\n6.13. Conditional expressions\u00b6\nconditional_expression:or_test\n[\"if\"or_test\n\"else\"expression\n] expression:conditional_expression\n|lambda_expr\nA conditional expression (sometimes called a \u201cternary operator\u201d) is an alternative to the if-else statement. As it is an expression, it returns a value and can appear as a sub-expression.\nThe expression x if C else y\nfirst evaluates the condition, C rather than x.\nIf C is true, x is evaluated and its value is returned; otherwise, y is\nevaluated and its value is returned.\nSee PEP 308 for more details about conditional expressions.\n6.14. Lambdas\u00b6\nlambda_expr: \"lambda\" [parameter_list\n] \":\"expression\nLambda expressions (sometimes called lambda forms) are used to create anonymous\nfunctions. The expression lambda parameters: expression\nyields a function\nobject. The unnamed object behaves like a function object defined with:\ndef (parameters):\nreturn expression\nSee section Function definitions for the syntax of parameter lists. 
Note that functions created with lambda expressions cannot contain statements or annotations.\n6.15. Expression lists\u00b6\nstarred_expression: \"*\"or_expr\n|expression\nflexible_expression:assignment_expression\n|starred_expression\nflexible_expression_list:flexible_expression\n(\",\"flexible_expression\n)* [\",\"] starred_expression_list:starred_expression\n(\",\"starred_expression\n)* [\",\"] expression_list:expression\n(\",\"expression\n)* [\",\"] yield_list:expression_list\n|starred_expression\n\",\" [starred_expression_list\n]\nExcept when part of a list or set display, an expression list containing at least one comma yields a tuple. The length of the tuple is the number of expressions in the list. The expressions are evaluated from left to right.\nAn asterisk *\ndenotes iterable unpacking. Its operand must be\nan iterable. The iterable is expanded into a sequence of items,\nwhich are included in the new tuple, list, or set, at the site of\nthe unpacking.\nAdded in version 3.5: Iterable unpacking in expression lists, originally proposed by PEP 448.\nAdded in version 3.11: Any item in an expression list may be starred. See PEP 646.\nA trailing comma is required only to create a one-item tuple,\nsuch as 1,\n; it is optional in all other cases.\nA single expression without a\ntrailing comma doesn\u2019t create a tuple, but rather yields the value of that\nexpression. (To create an empty tuple, use an empty pair of parentheses:\n()\n.)\n6.16. Evaluation order\u00b6\nPython evaluates expressions from left to right. Notice that while evaluating an assignment, the right-hand side is evaluated before the left-hand side.\nIn the following lines, expressions will be evaluated in the arithmetic order of their suffixes:\nexpr1, expr2, expr3, expr4\n(expr1, expr2, expr3, expr4)\n{expr1: expr2, expr3: expr4}\nexpr1 + expr2 * (expr3 - expr4)\nexpr1(expr2, expr3, *expr4, **expr5)\nexpr3, expr4 = expr1, expr2\n6.17. 
Operator precedence\u00b6\nThe following table summarizes the operator precedence in Python, from highest precedence (most binding) to lowest precedence (least binding). Operators in the same box have the same precedence. Unless the syntax is explicitly given, operators are binary. Operators in the same box group left to right (except for exponentiation and conditional expressions, which group from right to left).\nNote that comparisons, membership tests, and identity tests, all have the same precedence and have a left-to-right chaining feature as described in the Comparisons section.\nOperator |\nDescription |\n|---|---|\n|\nBinding or parenthesized expression, list display, dictionary display, set display |\n|\nSubscription (including slicing), call, attribute reference |\nAwait expression |\n|\n|\nExponentiation [5] |\n|\nPositive, negative, bitwise NOT |\n|\nMultiplication, matrix multiplication, division, floor division, remainder [6] |\n|\nAddition and subtraction |\n|\nShifts |\n|\nBitwise AND |\n|\nBitwise XOR |\n|\nBitwise OR |\nComparisons, including membership tests and identity tests |\n|\nBoolean NOT |\n|\nBoolean AND |\n|\nBoolean OR |\n|\n|\nConditional expression |\nLambda expression |\n|\n|\nAssignment expression |\nFootnotes", "code_snippets": [" ", " ", "\n", " ", " ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", " ", " ", "\n", "\n", "\n", " ", "\n", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", "\n", " ", " ", "\n", "\n", " ", "\n ", " ", "\n ", "\n", " ", "\n", "\n", " ", "\n ", " ", "\n\n", " ", " ", "\n ", " ", "\n", "\n", " ", "\n", " ", "\n", " ", " ", "\n", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", "\n", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", "\n", "\n", "\n", "\n", " ", " ", " ", " ", " ", "\n", " ", "\n", "\n", " ", " ", " ", " ", " ", "\n", " ", "\n", "\n\n", 
" ", "\n", "\n", "\n", " ", " ", "\n", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", " ", " ", " ", " ", " ", "\n", " ", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", " ", " ", " ", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", "\n", "\n", "\n", "\n", " ", "\n", " ", " ", " ", " ", "\n", " ", "\n", " ", " ", " ", " ", "\n", "\n", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", "\n", "\n", "\n", " ", "\n", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", "\n", "\n", " ", "\n", " ", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n", " ", "\n", "\n", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 16880} +{"url": "https://docs.python.org/3/library/winreg.html", "title": " \u2014 Windows registry access", "content": "winreg\n\u2014 Windows registry access\u00b6\nThese functions expose the Windows registry API to Python. Instead of using an integer as the registry handle, a handle object is used to ensure that the handles are closed correctly, even if the programmer neglects to explicitly close them.\nAvailability: Windows.\nChanged in version 3.3: Several functions in this module used to raise a\nWindowsError\n, which is now an alias of OSError\n.\nFunctions\u00b6\nThis module offers the following functions:\n- winreg.CloseKey(hkey)\u00b6\nCloses a previously opened registry key. 
The hkey argument specifies a previously opened key.\nNote\nIf hkey is not closed using this method (or via\nhkey.Close()\n), it is closed when the hkey object is destroyed by Python.\n- winreg.ConnectRegistry(computer_name, key)\u00b6\nEstablishes a connection to a predefined registry handle on another computer, and returns a handle object.\ncomputer_name is the name of the remote computer, of the form\nr\"\\\\computername\"\n. IfNone\n, the local computer is used.key is the predefined handle to connect to.\nThe return value is the handle of the opened key. If the function fails, an\nOSError\nexception is raised.Raises an auditing event\nwinreg.ConnectRegistry\nwith argumentscomputer_name\n,key\n.Changed in version 3.3: See above.\n- winreg.CreateKey(key, sub_key)\u00b6\nCreates or opens the specified key, returning a handle object.\nkey is an already open key, or one of the predefined HKEY_* constants.\nsub_key is a string that names the key this method opens or creates.\nIf key is one of the predefined keys, sub_key may be\nNone\n. In that case, the handle returned is the same key handle passed in to the function.If the key already exists, this function opens the existing key.\nThe return value is the handle of the opened key. If the function fails, an\nOSError\nexception is raised.Raises an auditing event\nwinreg.CreateKey\nwith argumentskey\n,sub_key\n,access\n.Raises an auditing event\nwinreg.OpenKey/result\nwith argumentkey\n.Changed in version 3.3: See above.\n- winreg.CreateKeyEx(key, sub_key, reserved=0, access=KEY_WRITE)\u00b6\nCreates or opens the specified key, returning a handle object.\nkey is an already open key, or one of the predefined HKEY_* constants.\nsub_key is a string that names the key this method opens or creates.\nreserved is a reserved integer, and must be zero. The default is zero.\naccess is an integer that specifies an access mask that describes the desired security access for the key. Default is\nKEY_WRITE\n. 
See Access Rights for other allowed values.If key is one of the predefined keys, sub_key may be\nNone\n. In that case, the handle returned is the same key handle passed in to the function.If the key already exists, this function opens the existing key.\nThe return value is the handle of the opened key. If the function fails, an\nOSError\nexception is raised.Raises an auditing event\nwinreg.CreateKey\nwith argumentskey\n,sub_key\n,access\n.Raises an auditing event\nwinreg.OpenKey/result\nwith argumentkey\n.Added in version 3.2.\nChanged in version 3.3: See above.\n- winreg.DeleteKey(key, sub_key)\u00b6\nDeletes the specified key.\nkey is an already open key, or one of the predefined HKEY_* constants.\nsub_key is a string that must be a subkey of the key identified by the key parameter. This value must not be\nNone\n, and the key may not have subkeys.This method cannot delete keys with subkeys.\nIf the method succeeds, the entire key, including all of its values, is removed. If the method fails, an\nOSError\nexception is raised.Raises an auditing event\nwinreg.DeleteKey\nwith argumentskey\n,sub_key\n,access\n.Changed in version 3.3: See above.\n- winreg.DeleteKeyEx(key, sub_key, access=KEY_WOW64_64KEY, reserved=0)\u00b6\nDeletes the specified key.\nkey is an already open key, or one of the predefined HKEY_* constants.\nsub_key is a string that must be a subkey of the key identified by the key parameter. This value must not be\nNone\n, and the key may not have subkeys.reserved is a reserved integer, and must be zero. The default is zero.\naccess is an integer that specifies an access mask that describes the desired security access for the key. Default is\nKEY_WOW64_64KEY\n. On 32-bit Windows, the WOW64 constants are ignored. See Access Rights for other allowed values.This method cannot delete keys with subkeys.\nIf the method succeeds, the entire key, including all of its values, is removed. 
If the method fails, an\nOSError\nexception is raised.On unsupported Windows versions,\nNotImplementedError\nis raised.Raises an auditing event\nwinreg.DeleteKey\nwith argumentskey\n,sub_key\n,access\n.Added in version 3.2.\nChanged in version 3.3: See above.\n- winreg.DeleteValue(key, value)\u00b6\nRemoves a named value from a registry key.\nkey is an already open key, or one of the predefined HKEY_* constants.\nvalue is a string that identifies the value to remove.\nRaises an auditing event\nwinreg.DeleteValue\nwith argumentskey\n,value\n.\n- winreg.EnumKey(key, index)\u00b6\nEnumerates subkeys of an open registry key, returning a string.\nkey is an already open key, or one of the predefined HKEY_* constants.\nindex is an integer that identifies the index of the key to retrieve.\nThe function retrieves the name of one subkey each time it is called. It is typically called repeatedly until an\nOSError\nexception is raised, indicating no more values are available.Raises an auditing event\nwinreg.EnumKey\nwith argumentskey\n,index\n.Changed in version 3.3: See above.\n- winreg.EnumValue(key, index)\u00b6\nEnumerates values of an open registry key, returning a tuple.\nkey is an already open key, or one of the predefined HKEY_* constants.\nindex is an integer that identifies the index of the value to retrieve.\nThe function retrieves the name of one subkey each time it is called. 
It is typically called repeatedly, until an\nOSError\nexception is raised, indicating no more values.The result is a tuple of 3 items:\nIndex\nMeaning\n0\nA string that identifies the value name\n1\nAn object that holds the value data, and whose type depends on the underlying registry type\n2\nAn integer that identifies the type of the value data (see table in docs for\nSetValueEx()\n)Raises an auditing event\nwinreg.EnumValue\nwith argumentskey\n,index\n.Changed in version 3.3: See above.\n- winreg.ExpandEnvironmentStrings(str)\u00b6\nExpands environment variable placeholders\n%NAME%\nin strings likeREG_EXPAND_SZ\n:>>> ExpandEnvironmentStrings('%windir%') 'C:\\\\Windows'\nRaises an auditing event\nwinreg.ExpandEnvironmentStrings\nwith argumentstr\n.\n- winreg.FlushKey(key)\u00b6\nWrites all the attributes of a key to the registry.\nkey is an already open key, or one of the predefined HKEY_* constants.\nIt is not necessary to call\nFlushKey()\nto change a key. Registry changes are flushed to disk by the registry using its lazy flusher. Registry changes are also flushed to disk at system shutdown. UnlikeCloseKey()\n, theFlushKey()\nmethod returns only when all the data has been written to the registry. An application should only callFlushKey()\nif it requires absolute certainty that registry changes are on disk.Note\nIf you don\u2019t know whether a\nFlushKey()\ncall is required, it probably isn\u2019t.\n- winreg.LoadKey(key, sub_key, file_name)\u00b6\nCreates a subkey under the specified key and stores registration information from a specified file into that subkey.\nkey is a handle returned by\nConnectRegistry()\nor one of the constantsHKEY_USERS\norHKEY_LOCAL_MACHINE\n.sub_key is a string that identifies the subkey to load.\nfile_name is the name of the file to load registry data from. This file must have been created with the\nSaveKey()\nfunction. 
Under the file allocation table (FAT) file system, the filename may not have an extension.A call to\nLoadKey()\nfails if the calling process does not have theSE_RESTORE_PRIVILEGE\nprivilege. Note that privileges are different from permissions \u2013 see the RegLoadKey documentation for more details.If key is a handle returned by\nConnectRegistry()\n, then the path specified in file_name is relative to the remote computer.Raises an auditing event\nwinreg.LoadKey\nwith argumentskey\n,sub_key\n,file_name\n.\n- winreg.OpenKey(key, sub_key, reserved=0, access=KEY_READ)\u00b6\n- winreg.OpenKeyEx(key, sub_key, reserved=0, access=KEY_READ)\u00b6\nOpens the specified key, returning a handle object.\nkey is an already open key, or one of the predefined HKEY_* constants.\nsub_key is a string that identifies the sub_key to open.\nreserved is a reserved integer, and must be zero. The default is zero.\naccess is an integer that specifies an access mask that describes the desired security access for the key. Default is\nKEY_READ\n. 
See Access Rights for other allowed values.The result is a new handle to the specified key.\nIf the function fails,\nOSError\nis raised.Raises an auditing event\nwinreg.OpenKey\nwith argumentskey\n,sub_key\n,access\n.Raises an auditing event\nwinreg.OpenKey/result\nwith argumentkey\n.Changed in version 3.2: Allow the use of named arguments.\nChanged in version 3.3: See above.\n- winreg.QueryInfoKey(key)\u00b6\nReturns information about a key, as a tuple.\nkey is an already open key, or one of the predefined HKEY_* constants.\nThe result is a tuple of 3 items:\nIndex\nMeaning\n0\nAn integer giving the number of sub keys this key has.\n1\nAn integer giving the number of values this key has.\n2\nAn integer giving when the key was last modified (if available) as 100\u2019s of nanoseconds since Jan 1, 1601.\nRaises an auditing event\nwinreg.QueryInfoKey\nwith argumentkey\n.\n- winreg.QueryValue(key, sub_key)\u00b6\nRetrieves the unnamed value for a key, as a string.\nkey is an already open key, or one of the predefined HKEY_* constants.\nsub_key is a string that holds the name of the subkey with which the value is associated. If this parameter is\nNone\nor empty, the function retrieves the value set by theSetValue()\nmethod for the key identified by key.Values in the registry have name, type, and data components. This method retrieves the data for a key\u2019s first value that has a\nNULL\nname. 
But the underlying API call doesn\u2019t return the type, so always useQueryValueEx()\nif possible.Raises an auditing event\nwinreg.QueryValue\nwith argumentskey\n,sub_key\n,value_name\n.\n- winreg.QueryValueEx(key, value_name)\u00b6\nRetrieves the type and data for a specified value name associated with an open registry key.\nkey is an already open key, or one of the predefined HKEY_* constants.\nvalue_name is a string indicating the value to query.\nThe result is a tuple of 2 items:\nIndex\nMeaning\n0\nThe value of the registry item.\n1\nAn integer giving the registry type for this value (see table in docs for\nSetValueEx()\n)Raises an auditing event\nwinreg.QueryValue\nwith argumentskey\n,sub_key\n,value_name\n.\n- winreg.SaveKey(key, file_name)\u00b6\nSaves the specified key, and all its subkeys to the specified file.\nkey is an already open key, or one of the predefined HKEY_* constants.\nfile_name is the name of the file to save registry data to. This file cannot already exist. If this filename includes an extension, it cannot be used on file allocation table (FAT) file systems by the\nLoadKey()\nmethod.If key represents a key on a remote computer, the path described by file_name is relative to the remote computer. The caller of this method must possess the SeBackupPrivilege security privilege. Note that privileges are different than permissions \u2013 see the Conflicts Between User Rights and Permissions documentation for more details.\nThis function passes\nNULL\nfor security_attributes to the API.Raises an auditing event\nwinreg.SaveKey\nwith argumentskey\n,file_name\n.\n- winreg.SetValue(key, sub_key, type, value)\u00b6\nAssociates a value with a specified key.\nkey is an already open key, or one of the predefined HKEY_* constants.\nsub_key is a string that names the subkey with which the value is associated.\ntype is an integer that specifies the type of the data. Currently this must be\nREG_SZ\n, meaning only strings are supported. 
Use theSetValueEx()\nfunction for support for other data types.value is a string that specifies the new value.\nIf the key specified by the sub_key parameter does not exist, the SetValue function creates it.\nValue lengths are limited by available memory. Long values (more than 2048 bytes) should be stored as files with the filenames stored in the configuration registry. This helps the registry perform efficiently.\nThe key identified by the key parameter must have been opened with\nKEY_SET_VALUE\naccess.Raises an auditing event\nwinreg.SetValue\nwith argumentskey\n,sub_key\n,type\n,value\n.\n- winreg.SetValueEx(key, value_name, reserved, type, value)\u00b6\nStores data in the value field of an open registry key.\nkey is an already open key, or one of the predefined HKEY_* constants.\nvalue_name is a string that names the subkey with which the value is associated.\nreserved can be anything \u2013 zero is always passed to the API.\ntype is an integer that specifies the type of the data. See Value Types for the available types.\nvalue is a string that specifies the new value.\nThis method can also set additional value and type information for the specified key. The key identified by the key parameter must have been opened with\nKEY_SET_VALUE\naccess.To open the key, use the\nCreateKey()\norOpenKey()\nmethods.Value lengths are limited by available memory. Long values (more than 2048 bytes) should be stored as files with the filenames stored in the configuration registry. This helps the registry perform efficiently.\nRaises an auditing event\nwinreg.SetValue\nwith argumentskey\n,sub_key\n,type\n,value\n.\n- winreg.DisableReflectionKey(key)\u00b6\nDisables registry reflection for 32-bit processes running on a 64-bit operating system.\nkey is an already open key, or one of the predefined HKEY_* constants.\nWill generally raise\nNotImplementedError\nif executed on a 32-bit operating system.If the key is not on the reflection list, the function succeeds but has no effect. 
Disabling reflection for a key does not affect reflection of any subkeys.\nRaises an auditing event\nwinreg.DisableReflectionKey\nwith argumentkey\n.\n- winreg.EnableReflectionKey(key)\u00b6\nRestores registry reflection for the specified disabled key.\nkey is an already open key, or one of the predefined HKEY_* constants.\nWill generally raise\nNotImplementedError\nif executed on a 32-bit operating system.Restoring reflection for a key does not affect reflection of any subkeys.\nRaises an auditing event\nwinreg.EnableReflectionKey\nwith argumentkey\n.\n- winreg.QueryReflectionKey(key)\u00b6\nDetermines the reflection state for the specified key.\nkey is an already open key, or one of the predefined HKEY_* constants.\nReturns\nTrue\nif reflection is disabled.Will generally raise\nNotImplementedError\nif executed on a 32-bit operating system.Raises an auditing event\nwinreg.QueryReflectionKey\nwith argumentkey\n.\nConstants\u00b6\nThe following constants are defined for use in many winreg\nfunctions.\nHKEY_* Constants\u00b6\n- winreg.HKEY_CLASSES_ROOT\u00b6\nRegistry entries subordinate to this key define types (or classes) of documents and the properties associated with those types. Shell and COM applications use the information stored under this key.\n- winreg.HKEY_CURRENT_USER\u00b6\nRegistry entries subordinate to this key define the preferences of the current user. 
These preferences include the settings of environment variables, data about program groups, colors, printers, network connections, and application preferences.\n- winreg.HKEY_LOCAL_MACHINE\u00b6\nRegistry entries subordinate to this key define the physical state of the computer, including data about the bus type, system memory, and installed hardware and software.\n- winreg.HKEY_USERS\u00b6\nRegistry entries subordinate to this key define the default user configuration for new users on the local computer and the user configuration for the current user.\n- winreg.HKEY_PERFORMANCE_DATA\u00b6\nRegistry entries subordinate to this key allow you to access performance data. The data is not actually stored in the registry; the registry functions cause the system to collect the data from its source.\n- winreg.HKEY_CURRENT_CONFIG\u00b6\nContains information about the current hardware profile of the local computer system.\n- winreg.HKEY_DYN_DATA\u00b6\nThis key is not used in versions of Windows after 98.\nAccess Rights\u00b6\nFor more information, see Registry Key Security and Access.\n- winreg.KEY_ALL_ACCESS\u00b6\nCombines the STANDARD_RIGHTS_REQUIRED,\nKEY_QUERY_VALUE\n,KEY_SET_VALUE\n,KEY_CREATE_SUB_KEY\n,KEY_ENUMERATE_SUB_KEYS\n,KEY_NOTIFY\n, andKEY_CREATE_LINK\naccess rights.\n- winreg.KEY_WRITE\u00b6\nCombines the STANDARD_RIGHTS_WRITE,\nKEY_SET_VALUE\n, andKEY_CREATE_SUB_KEY\naccess rights.\n- winreg.KEY_READ\u00b6\nCombines the STANDARD_RIGHTS_READ,\nKEY_QUERY_VALUE\n,KEY_ENUMERATE_SUB_KEYS\n, andKEY_NOTIFY\nvalues.\n- winreg.KEY_QUERY_VALUE\u00b6\nRequired to query the values of a registry key.\n- winreg.KEY_SET_VALUE\u00b6\nRequired to create, delete, or set a registry value.\n- winreg.KEY_CREATE_SUB_KEY\u00b6\nRequired to create a subkey of a registry key.\n- winreg.KEY_ENUMERATE_SUB_KEYS\u00b6\nRequired to enumerate the subkeys of a registry key.\n- winreg.KEY_NOTIFY\u00b6\nRequired to request change notifications for a registry key or for subkeys of a registry 
key.\n- winreg.KEY_CREATE_LINK\u00b6\nReserved for system use.\n64-bit Specific\u00b6\nFor more information, see Accessing an Alternate Registry View.\n- winreg.KEY_WOW64_64KEY\u00b6\nIndicates that an application on 64-bit Windows should operate on the 64-bit registry view. On 32-bit Windows, this constant is ignored.\n- winreg.KEY_WOW64_32KEY\u00b6\nIndicates that an application on 64-bit Windows should operate on the 32-bit registry view. On 32-bit Windows, this constant is ignored.\nValue Types\u00b6\nFor more information, see Registry Value Types.\n- winreg.REG_BINARY\u00b6\nBinary data in any form.\n- winreg.REG_DWORD\u00b6\n32-bit number.\n- winreg.REG_DWORD_BIG_ENDIAN\u00b6\nA 32-bit number in big-endian format.\n- winreg.REG_EXPAND_SZ\u00b6\nNull-terminated string containing references to environment variables (\n%PATH%\n).\n- winreg.REG_LINK\u00b6\nA Unicode symbolic link.\n- winreg.REG_MULTI_SZ\u00b6\nA sequence of null-terminated strings, terminated by two null characters. (Python handles this termination automatically.)\n- winreg.REG_NONE\u00b6\nNo defined value type.\n- winreg.REG_QWORD\u00b6\nA 64-bit number.\nAdded in version 3.6.\n- winreg.REG_QWORD_LITTLE_ENDIAN\u00b6\nA 64-bit number in little-endian format. Equivalent to\nREG_QWORD\n.Added in version 3.6.\n- winreg.REG_RESOURCE_LIST\u00b6\nA device-driver resource list.\n- winreg.REG_FULL_RESOURCE_DESCRIPTOR\u00b6\nA hardware setting.\n- winreg.REG_RESOURCE_REQUIREMENTS_LIST\u00b6\nA hardware resource list.\n- winreg.REG_SZ\u00b6\nA null-terminated string.\nRegistry Handle Objects\u00b6\nThis object wraps a Windows HKEY object, automatically closing it when the\nobject is destroyed. 
To guarantee cleanup, you can call either the\nClose()\nmethod on the object, or the CloseKey()\nfunction.\nAll registry functions in this module return one of these objects.\nAll registry functions in this module which accept a handle object also accept an integer, however, use of the handle object is encouraged.\nHandle objects provide semantics for __bool__()\n\u2013 thus\nif handle:\nprint(\"Yes\")\nwill print Yes\nif the handle is currently valid (has not been closed or\ndetached).\nHandle objects can be converted to an integer (e.g., using the built-in\nint()\nfunction), in which case the underlying Windows handle value is\nreturned. You can also use the Detach()\nmethod to return the\ninteger handle, and also disconnect the Windows handle from the handle object.\n- PyHKEY.Close()\u00b6\nCloses the underlying Windows handle.\nIf the handle is already closed, no error is raised.\n- PyHKEY.Detach()\u00b6\nDetaches the Windows handle from the handle object.\nThe result is an integer that holds the value of the handle before it is detached. If the handle is already detached or closed, this will return zero.\nAfter calling this function, the handle is effectively invalidated, but the handle is not closed. You would call this function when you need the underlying Win32 handle to exist beyond the lifetime of the handle object.\nRaises an auditing event\nwinreg.PyHKEY.Detach\nwith argumentkey\n.\n- PyHKEY.__enter__()\u00b6\n- PyHKEY.__exit__(*exc_info)\u00b6\nThe HKEY object implements\n__enter__()\nand__exit__()\nand thus supports the context protocol for thewith\nstatement:with OpenKey(HKEY_LOCAL_MACHINE, \"foo\") as key: ... 
# work with key\nwill automatically close key when control leaves the\nwith\nblock.", "code_snippets": ["\n", "\n", " ", "\n ", "\n", " ", " ", " ", " ", "\n ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 5032} +{"url": "https://docs.python.org/3/library/telnetlib.html", "title": " \u2014 Telnet client", "content": "telnetlib\n\u2014 Telnet client\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nPossible replacements are third-party libraries from PyPI: telnetlib3 or Exscript. These are not supported or maintained by the Python core team.\nThe last version of Python that provided the telnetlib\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 118} +{"url": "https://docs.python.org/3/c-api/bytes.html", "title": "Bytes Objects", "content": "Bytes Objects\u00b6\nThese functions raise TypeError\nwhen expecting a bytes parameter and\ncalled with a non-bytes parameter.\n-\nPyTypeObject PyBytes_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python bytes type; it is the same object asbytes\nin the Python layer.\n-\nint PyBytes_Check(PyObject *o)\u00b6\nReturn true if the object o is a bytes object or an instance of a subtype of the bytes type. This function always succeeds.\n-\nint PyBytes_CheckExact(PyObject *o)\u00b6\nReturn true if the object o is a bytes object, but not an instance of a subtype of the bytes type. This function always succeeds.\n-\nPyObject *PyBytes_FromString(const char *v)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new bytes object with a copy of the string v as value on success, and\nNULL\non failure. 
The parameter v must not beNULL\n; it will not be checked.\n-\nPyObject *PyBytes_FromStringAndSize(const char *v, Py_ssize_t len)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new bytes object with a copy of the string v as value and length len on success, and\nNULL\non failure. If v isNULL\n, the contents of the bytes object are uninitialized.\n-\nPyObject *PyBytes_FromFormat(const char *format, ...)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nTake a C\nprintf()\n-style format string and a variable number of arguments, calculate the size of the resulting Python bytes object and return a bytes object with the values formatted into it. The variable arguments must be C types and must correspond exactly to the format characters in the format string. The following format characters are allowed:Format Characters\nType\nComment\n%%\nn/a\nThe literal % character.\n%c\nint\nA single byte, represented as a C int.\n%d\nint\nEquivalent to\nprintf(\"%d\")\n. [1]%u\nunsigned int\nEquivalent to\nprintf(\"%u\")\n. [1]%ld\nlong\nEquivalent to\nprintf(\"%ld\")\n. [1]%lu\nunsigned long\nEquivalent to\nprintf(\"%lu\")\n. [1]%zd\nPy_ssize_t\nEquivalent to\nprintf(\"%zd\")\n. [1]%zu\nsize_t\nEquivalent to\nprintf(\"%zu\")\n. [1]%i\nint\nEquivalent to\nprintf(\"%i\")\n. [1]%x\nint\nEquivalent to\nprintf(\"%x\")\n. [1]%s\nconst char*\nA null-terminated C character array.\n%p\nconst void*\nThe hex representation of a C pointer. Mostly equivalent to\nprintf(\"%p\")\nexcept that it is guaranteed to start with the literal0x\nregardless of what the platform\u2019sprintf\nyields.An unrecognized format character causes all the rest of the format string to be copied as-is to the result object, and any extra arguments discarded.\n-\nPyObject *PyBytes_FromFormatV(const char *format, va_list vargs)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nIdentical to\nPyBytes_FromFormat()\nexcept that it takes exactly two arguments.\n-\nPyObject *PyBytes_FromObject(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the bytes representation of object o that implements the buffer protocol.\n-\nPy_ssize_t PyBytes_Size(PyObject *o)\u00b6\n- Part of the Stable ABI.\nReturn the length of the bytes in bytes object o.\n-\nPy_ssize_t PyBytes_GET_SIZE(PyObject *o)\u00b6\nSimilar to\nPyBytes_Size()\n, but without error checking.\n-\nchar *PyBytes_AsString(PyObject *o)\u00b6\n- Part of the Stable ABI.\nReturn a pointer to the contents of o. The pointer refers to the internal buffer of o, which consists of\nlen(o) + 1\nbytes. The last byte in the buffer is always null, regardless of whether there are any other null bytes. The data must not be modified in any way, unless the object was just created usingPyBytes_FromStringAndSize(NULL, size)\n. It must not be deallocated. If o is not a bytes object at all,PyBytes_AsString()\nreturnsNULL\nand raisesTypeError\n.\n-\nchar *PyBytes_AS_STRING(PyObject *string)\u00b6\nSimilar to\nPyBytes_AsString()\n, but without error checking.\n-\nint PyBytes_AsStringAndSize(PyObject *obj, char **buffer, Py_ssize_t *length)\u00b6\n- Part of the Stable ABI.\nReturn the null-terminated contents of the object obj through the output variables buffer and length. Returns\n0\non success.If length is\nNULL\n, the bytes object may not contain embedded null bytes; if it does, the function returns-1\nand aValueError\nis raised.The buffer refers to an internal buffer of obj, which includes an additional null byte at the end (not counted in length). The data must not be modified in any way, unless the object was just created using\nPyBytes_FromStringAndSize(NULL, size)\n. It must not be deallocated. 
If obj is not a bytes object at all, PyBytes_AsStringAndSize()\nreturns -1\nand raises TypeError\n. Changed in version 3.5: Previously,\nTypeError\nwas raised when embedded null bytes were encountered in the bytes object.\n-\nvoid PyBytes_Concat(PyObject **bytes, PyObject *newpart)\u00b6\n- Part of the Stable ABI.\nCreate a new bytes object in *bytes containing the contents of newpart appended to bytes; the caller will own the new reference. The reference to the old value of bytes will be stolen. If the new object cannot be created, the old reference to bytes will still be discarded and the value of *bytes will be set to\nNULL\n; the appropriate exception will be set.\n-\nvoid PyBytes_ConcatAndDel(PyObject **bytes, PyObject *newpart)\u00b6\n- Part of the Stable ABI.\nCreate a new bytes object in *bytes containing the contents of newpart appended to bytes. This version releases the strong reference to newpart (i.e. decrements its reference count).\n-\nPyObject *PyBytes_Join(PyObject *sep, PyObject *iterable)\u00b6\nSimilar to\nsep.join(iterable)\nin Python. sep must be a Python\nbytes\nobject. (Note that PyUnicode_Join()\naccepts a NULL\nseparator and treats it as a space, whereas PyBytes_Join()\ndoesn\u2019t accept a NULL\nseparator.) iterable must be an iterable object yielding objects that implement the buffer protocol.\nOn success, return a new\nbytes\nobject. On error, set an exception and return\nNULL\n. Added in version 3.14.\n-\nint _PyBytes_Resize(PyObject **bytes, Py_ssize_t newsize)\u00b6\nResize a bytes object. newsize will be the new length of the bytes object. You can think of it as creating a new bytes object and destroying the old one, only more efficiently. Pass the address of an existing bytes object as an lvalue (it may be written into), and the new size desired. On success, *bytes holds the resized bytes object and\n0\nis returned; the address in *bytes may differ from its input value. 
If the reallocation fails, the original bytes object at *bytes is deallocated, *bytes is set to NULL\n, MemoryError\nis set, and -1\nis returned.\n-\nPyObject *PyBytes_Repr(PyObject *bytes, int smartquotes)\u00b6\n- Part of the Stable ABI.\nGet the string representation of bytes. This function is currently used to implement\nbytes.__repr__()\nin Python. This function does not do type checking; it is undefined behavior to pass bytes as a non-bytes object or\nNULL\n. If smartquotes is true, the representation will use a double-quoted string instead of a single-quoted string when single-quotes are present in bytes. For example, the byte string\n'Python'\nwould be represented as b\"'Python'\"\nwhen smartquotes is true, or b'\\'Python\\''\nwhen it is false. On success, this function returns a strong reference to a\nstr\nobject containing the representation. On failure, this returns NULL\nwith an exception set.\n-\nPyObject *PyBytes_DecodeEscape(const char *s, Py_ssize_t len, const char *errors, Py_ssize_t unicode, const char *recode_encoding)\u00b6\n- Part of the Stable ABI.\nUnescape a backslash-escaped string s. s must not be\nNULL\n. len must be the size of s. errors must be one of\n\"strict\"\n, \"replace\"\n, or \"ignore\"\n. If errors is NULL\n, then \"strict\"\nis used by default.\nOn success, this function returns a strong reference to a Python\nbytes\nobject containing the unescaped string. 
On failure, this function returns NULL\nwith an exception set. Changed in version 3.9: unicode and recode_encoding are now unused.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1919} +{"url": "https://docs.python.org/3/c-api/capsule.html", "title": "Capsules", "content": "Capsules\u00b6\nRefer to Providing a C API for an Extension Module for more information on using these objects.\nAdded in version 3.1.\n-\ntype PyCapsule\u00b6\nThis subtype of\nPyObject\nrepresents an opaque value, useful for C extension modules which need to pass an opaque value (as a void* pointer) through Python code to other C code. It is often used to make a C function pointer defined in one module available to other modules, so the regular import mechanism can be used to access C APIs defined in dynamically loaded modules.\n-\nPyTypeObject PyCapsule_Type\u00b6\n- Part of the Stable ABI.\nThe type object corresponding to capsule objects. This is the same object as\ntypes.CapsuleType\nin the Python layer.\n-\ntype PyCapsule_Destructor\u00b6\n- Part of the Stable ABI.\nThe type of a destructor callback for a capsule. Defined as:\ntypedef void (*PyCapsule_Destructor)(PyObject *);\nSee\nPyCapsule_New()\nfor the semantics of PyCapsule_Destructor callbacks.\n-\nint PyCapsule_CheckExact(PyObject *p)\u00b6\nReturn true if its argument is a\nPyCapsule\n. This function always succeeds.\n-\nPyObject *PyCapsule_New(void *pointer, const char *name, PyCapsule_Destructor destructor)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a\nPyCapsule\nencapsulating the pointer. The pointer argument may not be NULL\n. On failure, set an exception and return\nNULL\n. The name string may either be\nNULL\nor a pointer to a valid C string. If non-NULL\n, this string must outlive the capsule. 
(Though it is permitted to free it inside the destructor.) If the destructor argument is not\nNULL\n, it will be called with the capsule as its argument when it is destroyed. If this capsule will be stored as an attribute of a module, the name should be specified as\nmodulename.attributename\n. This will enable other modules to import the capsule using PyCapsule_Import()\n.\n-\nvoid *PyCapsule_GetPointer(PyObject *capsule, const char *name)\u00b6\n- Part of the Stable ABI.\nRetrieve the pointer stored in the capsule. On failure, set an exception and return\nNULL\n. The name parameter must compare exactly to the name stored in the capsule. If the name stored in the capsule is\nNULL\n, the name passed in must also be NULL\n. Python uses the C function strcmp()\nto compare capsule names.\n-\nPyCapsule_Destructor PyCapsule_GetDestructor(PyObject *capsule)\u00b6\n- Part of the Stable ABI.\nReturn the current destructor stored in the capsule. On failure, set an exception and return\nNULL\n. It is legal for a capsule to have a\nNULL\ndestructor. This makes a NULL\nreturn code somewhat ambiguous; use PyCapsule_IsValid()\nor PyErr_Occurred()\nto disambiguate.\n-\nvoid *PyCapsule_GetContext(PyObject *capsule)\u00b6\n- Part of the Stable ABI.\nReturn the current context stored in the capsule. On failure, set an exception and return\nNULL\n. It is legal for a capsule to have a\nNULL\ncontext. This makes a NULL\nreturn code somewhat ambiguous; use PyCapsule_IsValid()\nor PyErr_Occurred()\nto disambiguate.\n-\nconst char *PyCapsule_GetName(PyObject *capsule)\u00b6\n- Part of the Stable ABI.\nReturn the current name stored in the capsule. On failure, set an exception and return\nNULL\n. It is legal for a capsule to have a\nNULL\nname. 
This makes a NULL\nreturn code somewhat ambiguous; use PyCapsule_IsValid()\nor PyErr_Occurred()\nto disambiguate.\n-\nvoid *PyCapsule_Import(const char *name, int no_block)\u00b6\n- Part of the Stable ABI.\nImport a pointer to a C object from a capsule attribute in a module. The name parameter should specify the full name to the attribute, as in\nmodule.attribute\n. The name stored in the capsule must match this string exactly. This function splits name on the\n.\ncharacter, and imports the first element. It then processes further elements using attribute lookups. Return the capsule\u2019s internal pointer on success. On failure, set an exception and return\nNULL\n.\nNote\nIf name points to an attribute of some submodule or subpackage, this submodule or subpackage must be previously imported using other means (for example, by using\nPyImport_ImportModule()\n) for the attribute lookups to succeed.\nChanged in version 3.3: no_block has no effect anymore.\n-\nint PyCapsule_IsValid(PyObject *capsule, const char *name)\u00b6\n- Part of the Stable ABI.\nDetermines whether or not capsule is a valid capsule. A valid capsule is non-\nNULL\n, passes PyCapsule_CheckExact()\n, has a non-NULL\npointer stored in it, and its internal name matches the name parameter. (See PyCapsule_GetPointer()\nfor information on how capsule names are compared.) In other words, if\nPyCapsule_IsValid()\nreturns a true value, calls to any of the accessors (any function starting with PyCapsule_Get\n) are guaranteed to succeed. Return a nonzero value if the object is valid and matches the name passed in. Return\n0\notherwise. This function will not fail.\n-\nint PyCapsule_SetContext(PyObject *capsule, void *context)\u00b6\n- Part of the Stable ABI.\nSet the context pointer inside capsule to context.\nReturn\n0\non success. 
Return nonzero and set an exception on failure.\n-\nint PyCapsule_SetDestructor(PyObject *capsule, PyCapsule_Destructor destructor)\u00b6\n- Part of the Stable ABI.\nSet the destructor inside capsule to destructor.\nReturn\n0\non success. Return nonzero and set an exception on failure.\n-\nint PyCapsule_SetName(PyObject *capsule, const char *name)\u00b6\n- Part of the Stable ABI.\nSet the name inside capsule to name. If non-\nNULL\n, the name must outlive the capsule. If the previous name stored in the capsule was not NULL\n, no attempt is made to free it. Return\n0\non success. Return nonzero and set an exception on failure.\n-\nint PyCapsule_SetPointer(PyObject *capsule, void *pointer)\u00b6\n- Part of the Stable ABI.\nSet the void pointer inside capsule to pointer. The pointer may not be\nNULL\n. Return\n0\non success. Return nonzero and set an exception on failure.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1426} +{"url": "https://docs.python.org/3/library/asyncio-stream.html", "title": "Streams", "content": "Streams\u00b6\nSource code: Lib/asyncio/streams.py\nStreams are high-level async/await-ready primitives to work with network connections. 
Streams allow sending and receiving data without using callbacks or low-level protocols and transports.\nHere is an example of a TCP echo client written using asyncio streams:\nimport asyncio\nasync def tcp_echo_client(message):\nreader, writer = await asyncio.open_connection(\n'127.0.0.1', 8888)\nprint(f'Send: {message!r}')\nwriter.write(message.encode())\nawait writer.drain()\ndata = await reader.read(100)\nprint(f'Received: {data.decode()!r}')\nprint('Close the connection')\nwriter.close()\nawait writer.wait_closed()\nasyncio.run(tcp_echo_client('Hello World!'))\nSee also the Examples section below.\nStream Functions\nThe following top-level asyncio functions can be used to create and work with streams:\n- async asyncio.open_connection(host=None, port=None, *, limit=None, ssl=None, family=0, proto=0, flags=0, sock=None, local_addr=None, server_hostname=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None, happy_eyeballs_delay=None, interleave=None)\u00b6\nEstablish a network connection and return a pair of\n(reader, writer)\nobjects. The returned reader and writer objects are instances of\nStreamReader\nand StreamWriter\nclasses. limit determines the buffer size limit used by the returned\nStreamReader\ninstance. By default the limit is set to 64 KiB. The rest of the arguments are passed directly to\nloop.create_connection()\n.\nNote\nThe sock argument transfers ownership of the socket to the\nStreamWriter\ncreated. 
To close the socket, call its close()\nmethod.\nChanged in version 3.7: Added the ssl_handshake_timeout parameter.\nChanged in version 3.8: Added the happy_eyeballs_delay and interleave parameters.\nChanged in version 3.10: Removed the loop parameter.\nChanged in version 3.11: Added the ssl_shutdown_timeout parameter.\n- async asyncio.start_server(client_connected_cb, host=None, port=None, *, limit=None, family=socket.AF_UNSPEC, flags=socket.AI_PASSIVE, sock=None, backlog=100, ssl=None, reuse_address=None, reuse_port=None, keep_alive=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None, start_serving=True)\u00b6\nStart a socket server.\nThe client_connected_cb callback is called whenever a new client connection is established. It receives a\n(reader, writer)\npair as two arguments, instances of the StreamReader\nand StreamWriter\nclasses. client_connected_cb can be a plain callable or a coroutine function; if it is a coroutine function, it will be automatically scheduled as a\nTask\n. limit determines the buffer size limit used by the returned\nStreamReader\ninstance. By default the limit is set to 64 KiB. The rest of the arguments are passed directly to\nloop.create_server()\n.\nNote\nThe sock argument transfers ownership of the socket to the server created. 
To close the socket, call the server\u2019s\nclose()\nmethod.\nChanged in version 3.7: Added the ssl_handshake_timeout and start_serving parameters.\nChanged in version 3.10: Removed the loop parameter.\nChanged in version 3.11: Added the ssl_shutdown_timeout parameter.\nChanged in version 3.13: Added the keep_alive parameter.\nUnix Sockets\n- async asyncio.open_unix_connection(path=None, *, limit=None, ssl=None, sock=None, server_hostname=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None)\u00b6\nEstablish a Unix socket connection and return a pair of\n(reader, writer)\n. Similar to\nopen_connection()\nbut operates on Unix sockets. See also the documentation of\nloop.create_unix_connection()\n.\nNote\nThe sock argument transfers ownership of the socket to the\nStreamWriter\ncreated. To close the socket, call its close()\nmethod.\nAvailability: Unix.\nChanged in version 3.7: Added the ssl_handshake_timeout parameter. The path parameter can now be a path-like object.\nChanged in version 3.10: Removed the loop parameter.\nChanged in version 3.11: Added the ssl_shutdown_timeout parameter.\n- async asyncio.start_unix_server(client_connected_cb, path=None, *, limit=None, sock=None, backlog=100, ssl=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None, start_serving=True, cleanup_socket=True)\u00b6\nStart a Unix socket server.\nSimilar to\nstart_server()\nbut works with Unix sockets. If cleanup_socket is true then the Unix socket will automatically be removed from the filesystem when the server is closed, unless the socket has been replaced after the server has been created.\nSee also the documentation of\nloop.create_unix_server()\n.\nNote\nThe sock argument transfers ownership of the socket to the server created. To close the socket, call the server\u2019s\nclose()\nmethod.\nAvailability: Unix.\nChanged in version 3.7: Added the ssl_handshake_timeout and start_serving parameters. 
The path parameter can now be a path-like object.\nChanged in version 3.10: Removed the loop parameter.\nChanged in version 3.11: Added the ssl_shutdown_timeout parameter.\nChanged in version 3.13: Added the cleanup_socket parameter.\nStreamReader\u00b6\n- class asyncio.StreamReader\u00b6\nRepresents a reader object that provides APIs to read data from the IO stream. As an asynchronous iterable, the object supports the\nasync for\nstatement.\nIt is not recommended to instantiate StreamReader objects directly; use\nopen_connection()\nand start_server()\ninstead.\n- feed_eof()\u00b6\nAcknowledge the EOF.\n- async read(n=-1)\u00b6\nRead up to n bytes from the stream.\nIf n is not provided or set to\n-1\n, read until EOF, then return all read\nbytes\n. If EOF was received and the internal buffer is empty, return an empty\nbytes\nobject.\nIf n is\n0\n, return an empty\nbytes\nobject immediately.\nIf n is positive, return at most n available\nbytes\nas soon as at least 1 byte is available in the internal buffer. If EOF is received before any byte is read, return an empty\nbytes\nobject.\n- async readline()\u00b6\nRead one line, where \u201cline\u201d is a sequence of bytes ending with\n\\n\n. If EOF is received and\n\\n\nwas not found, the method returns partially read data.\nIf EOF is received and the internal buffer is empty, return an empty\nbytes\nobject.\n- async readexactly(n)\u00b6\nRead exactly n bytes.\nRaise an\nIncompleteReadError\nif EOF is reached before n can be read. Use the\nIncompleteReadError.partial\nattribute to get the partially read data.\n- async readuntil(separator=b'\\n')\u00b6\nRead data from the stream until separator is found.\nOn success, the data and separator will be removed from the internal buffer (consumed). 
Returned data will include the separator at the end.\nIf the amount of data read exceeds the configured stream limit, a\nLimitOverrunError\nexception is raised, and the data is left in the internal buffer and can be read again.\nIf EOF is reached before the complete separator is found, an\nIncompleteReadError\nexception is raised, and the internal buffer is reset. The\nIncompleteReadError.partial\nattribute may contain a portion of the separator.\nThe separator may also be a tuple of separators. In this case the return value will be the shortest possible that has any separator as the suffix. For the purposes of\nLimitOverrunError\n, the shortest possible separator is considered to be the one that matched.\nAdded in version 3.5.2.\nChanged in version 3.13: The separator parameter may now be a\ntuple\nof separators.\n- at_eof()\u00b6\nReturn\nTrue\nif the buffer is empty and\nfeed_eof()\nwas called.\nStreamWriter\u00b6\n- class asyncio.StreamWriter\u00b6\nRepresents a writer object that provides APIs to write data to the IO stream.\nIt is not recommended to instantiate StreamWriter objects directly; use\nopen_connection()\nand start_server()\ninstead.\n- write(data)\u00b6\nThe method attempts to write the data to the underlying socket immediately. If that fails, the data is queued in an internal write buffer until it can be sent.\nThe data buffer should be a bytes, bytearray, or C-contiguous one-dimensional memoryview object.\nThe method should be used along with the\ndrain()\nmethod:\nstream.write(data)\nawait stream.drain()\n- writelines(data)\u00b6\nThe method writes a list (or any iterable) of bytes to the underlying socket immediately. 
If that fails, the data is queued in an internal write buffer until it can be sent.\nThe method should be used along with the\ndrain()\nmethod:\nstream.writelines(lines)\nawait stream.drain()\n- close()\u00b6\nThe method closes the stream and the underlying socket.\nThe method should be used, though not mandatory, along with the\nwait_closed()\nmethod:\nstream.close()\nawait stream.wait_closed()\n- can_write_eof()\u00b6\nReturn\nTrue\nif the underlying transport supports the\nwrite_eof()\nmethod,\nFalse\notherwise.\n- write_eof()\u00b6\nClose the write end of the stream after the buffered write data is flushed.\n- transport\u00b6\nReturn the underlying asyncio transport.\n- get_extra_info(name, default=None)\u00b6\nAccess optional transport information; see\nBaseTransport.get_extra_info()\nfor details.\n- async drain()\u00b6\nWait until it is appropriate to resume writing to the stream. Example:\nwriter.write(data)\nawait writer.drain()\nThis is a flow control method that interacts with the underlying IO write buffer. When the size of the buffer reaches the high watermark, drain() blocks until the size of the buffer is drained down to the low watermark and writing can be resumed. 
When there is nothing to wait for, the\ndrain()\nreturns immediately.\n- async start_tls(sslcontext, *, server_hostname=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None)\u00b6\nUpgrade an existing stream-based connection to TLS.\nParameters:\nsslcontext: a configured instance of\nSSLContext\n.\nserver_hostname: sets or overrides the host name that the target server\u2019s certificate will be matched against.\nssl_handshake_timeout is the time in seconds to wait for the TLS handshake to complete before aborting the connection.\n60.0\nseconds if None\n(default).\nssl_shutdown_timeout is the time in seconds to wait for the SSL shutdown to complete before aborting the connection.\n30.0\nseconds if None\n(default).\nAdded in version 3.11.\nChanged in version 3.12: Added the ssl_shutdown_timeout parameter.\n- is_closing()\u00b6\nReturn\nTrue\nif the stream is closed or in the process of being closed.\nAdded in version 3.7.\nExamples\u00b6\nTCP echo client using streams\u00b6\nTCP echo client using the asyncio.open_connection()\nfunction:\nimport asyncio\nasync def tcp_echo_client(message):\nreader, writer = await asyncio.open_connection(\n'127.0.0.1', 8888)\nprint(f'Send: {message!r}')\nwriter.write(message.encode())\nawait writer.drain()\ndata = await reader.read(100)\nprint(f'Received: {data.decode()!r}')\nprint('Close the connection')\nwriter.close()\nawait writer.wait_closed()\nasyncio.run(tcp_echo_client('Hello World!'))\nSee also\nThe TCP echo client protocol\nexample uses the low-level loop.create_connection()\nmethod.\nTCP echo server using streams\u00b6\nTCP echo server using the asyncio.start_server()\nfunction:\nimport asyncio\nasync def handle_echo(reader, writer):\ndata = await reader.read(100)\nmessage = data.decode()\naddr = writer.get_extra_info('peername')\nprint(f\"Received {message!r} from {addr!r}\")\nprint(f\"Send: {message!r}\")\nwriter.write(data)\nawait writer.drain()\nprint(\"Close the connection\")\nwriter.close()\nawait 
writer.wait_closed()\nasync def main():\nserver = await asyncio.start_server(\nhandle_echo, '127.0.0.1', 8888)\naddrs = ', '.join(str(sock.getsockname()) for sock in server.sockets)\nprint(f'Serving on {addrs}')\nasync with server:\nawait server.serve_forever()\nasyncio.run(main())\nSee also\nThe TCP echo server protocol\nexample uses the loop.create_server()\nmethod.\nGet HTTP headers\u00b6\nSimple example querying HTTP headers of the URL passed on the command line:\nimport asyncio\nimport urllib.parse\nimport sys\nasync def print_http_headers(url):\nurl = urllib.parse.urlsplit(url)\nif url.scheme == 'https':\nreader, writer = await asyncio.open_connection(\nurl.hostname, 443, ssl=True)\nelse:\nreader, writer = await asyncio.open_connection(\nurl.hostname, 80)\nquery = (\nf\"HEAD {url.path or '/'} HTTP/1.0\\r\\n\"\nf\"Host: {url.hostname}\\r\\n\"\nf\"\\r\\n\"\n)\nwriter.write(query.encode('latin-1'))\nwhile True:\nline = await reader.readline()\nif not line:\nbreak\nline = line.decode('latin1').rstrip()\nif line:\nprint(f'HTTP header> {line}')\n# Ignore the body, close the socket\nwriter.close()\nawait writer.wait_closed()\nurl = sys.argv[1]\nasyncio.run(print_http_headers(url))\nUsage:\npython example.py http://example.com/path/page.html\nor with HTTPS:\npython example.py https://example.com/path/page.html\nRegister an open socket to wait for data using streams\u00b6\nCoroutine waiting until a socket receives data using the\nopen_connection()\nfunction:\nimport asyncio\nimport socket\nasync def wait_for_data():\n# Get a reference to the current event loop because\n# we want to access low-level APIs.\nloop = asyncio.get_running_loop()\n# Create a pair of connected sockets.\nrsock, wsock = socket.socketpair()\n# Register the open socket to wait for data.\nreader, writer = await asyncio.open_connection(sock=rsock)\n# Simulate the reception of data from the network\nloop.call_soon(wsock.send, 'abc'.encode())\n# Wait for data\ndata = await reader.read(100)\n# Got 
data, we are done: close the socket\nprint(\"Received:\", data.decode())\nwriter.close()\nawait writer.wait_closed()\n# Close the second socket\nwsock.close()\nasyncio.run(wait_for_data())\nSee also\nThe register an open socket to wait for data using a protocol example uses a low-level protocol and\nthe loop.create_connection()\nmethod.\nThe watch a file descriptor for read events example uses the low-level\nloop.add_reader()\nmethod to watch a file descriptor.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3318} +{"url": "https://docs.python.org/3/tutorial/controlflow.html", "title": "More Control Flow Tools", "content": "4. 
More Control Flow Tools\u00b6\nAs well as the while\nstatement just introduced, Python uses a few more\nthat we will encounter in this chapter.\n4.1. if\nStatements\u00b6\nPerhaps the most well-known statement type is the if\nstatement. For\nexample:\n>>> x = int(input(\"Please enter an integer: \"))\nPlease enter an integer: 42\n>>> if x < 0:\n... x = 0\n... print('Negative changed to zero')\n... elif x == 0:\n... print('Zero')\n... elif x == 1:\n... print('Single')\n... else:\n... print('More')\n...\nMore\nThere can be zero or more elif\nparts, and the else\npart is\noptional. The keyword \u2018elif\n\u2019 is short for \u2018else if\u2019, and is useful\nto avoid excessive indentation. An if\n\u2026 elif\n\u2026\nelif\n\u2026 sequence is a substitute for the switch\nor\ncase\nstatements found in other languages.\nIf you\u2019re comparing the same value to several constants, or checking for specific types or\nattributes, you may also find the match\nstatement useful. For more\ndetails see match Statements.\n4.2. for\nStatements\u00b6\nThe for\nstatement in Python differs a bit from what you may be used\nto in C or Pascal. Rather than always iterating over an arithmetic progression\nof numbers (like in Pascal), or giving the user the ability to define both the\niteration step and halting condition (as C), Python\u2019s for\nstatement\niterates over the items of any sequence (a list or a string), in the order that\nthey appear in the sequence. For example (no pun intended):\n>>> # Measure some strings:\n>>> words = ['cat', 'window', 'defenestrate']\n>>> for w in words:\n... print(w, len(w))\n...\ncat 3\nwindow 6\ndefenestrate 12\nCode that modifies a collection while iterating over that same collection can be tricky to get right. 
Instead, it is usually more straight-forward to loop over a copy of the collection or to create a new collection:\n# Create a sample collection\nusers = {'Hans': 'active', '\u00c9l\u00e9onore': 'inactive', '\u666f\u592a\u90ce': 'active'}\n# Strategy: Iterate over a copy\nfor user, status in users.copy().items():\nif status == 'inactive':\ndel users[user]\n# Strategy: Create a new collection\nactive_users = {}\nfor user, status in users.items():\nif status == 'active':\nactive_users[user] = status\n4.3. The range()\nFunction\u00b6\nIf you do need to iterate over a sequence of numbers, the built-in function\nrange()\ncomes in handy. It generates arithmetic progressions:\n>>> for i in range(5):\n... print(i)\n...\n0\n1\n2\n3\n4\nThe given end point is never part of the generated sequence; range(10)\ngenerates\n10 values, the legal indices for items of a sequence of length 10. It\nis possible to let the range start at another number, or to specify a different\nincrement (even negative; sometimes this is called the \u2018step\u2019):\n>>> list(range(5, 10))\n[5, 6, 7, 8, 9]\n>>> list(range(0, 10, 3))\n[0, 3, 6, 9]\n>>> list(range(-10, -100, -30))\n[-10, -40, -70]\nTo iterate over the indices of a sequence, you can combine range()\nand\nlen()\nas follows:\n>>> a = ['Mary', 'had', 'a', 'little', 'lamb']\n>>> for i in range(len(a)):\n... print(i, a[i])\n...\n0 Mary\n1 had\n2 a\n3 little\n4 lamb\nIn most such cases, however, it is convenient to use the enumerate()\nfunction, see Looping Techniques.\nA strange thing happens if you just print a range:\n>>> range(10)\nrange(0, 10)\nIn many ways the object returned by range()\nbehaves as if it is a list,\nbut in fact it isn\u2019t. 
It is an object which returns the successive items of\nthe desired sequence when you iterate over it, but it doesn\u2019t really make\nthe list, thus saving space.\nWe say such an object is iterable, that is, suitable as a target for\nfunctions and constructs that expect something from which they can\nobtain successive items until the supply is exhausted. We have seen that\nthe for\nstatement is such a construct, while an example of a function\nthat takes an iterable is sum()\n:\n>>> sum(range(4)) # 0 + 1 + 2 + 3\n6\nLater we will see more functions that return iterables and take iterables as\narguments. In chapter Data Structures, we will discuss list()\nin more\ndetail.\n4.4. break\nand continue\nStatements\u00b6\nThe break\nstatement breaks out of the innermost enclosing\nfor\nor while\nloop:\n>>> for n in range(2, 10):\n... for x in range(2, n):\n... if n % x == 0:\n... print(f\"{n} equals {x} * {n//x}\")\n... break\n...\n4 equals 2 * 2\n6 equals 2 * 3\n8 equals 2 * 4\n9 equals 3 * 3\nThe continue\nstatement continues with the next\niteration of the loop:\n>>> for num in range(2, 10):\n... if num % 2 == 0:\n... print(f\"Found an even number {num}\")\n... continue\n... print(f\"Found an odd number {num}\")\n...\nFound an even number 2\nFound an odd number 3\nFound an even number 4\nFound an odd number 5\nFound an even number 6\nFound an odd number 7\nFound an even number 8\nFound an odd number 9\n4.5. else\nClauses on Loops\u00b6\nIn a for\nor while\nloop the break\nstatement\nmay be paired with an else\nclause. If the loop finishes without\nexecuting the break\n, the else\nclause executes.\nIn a for\nloop, the else\nclause is executed\nafter the loop finishes its final iteration, that is, if no break occurred.\nIn a while\nloop, it\u2019s executed after the loop\u2019s condition becomes false.\nIn either kind of loop, the else\nclause is not executed if the\nloop was terminated by a break\n. 
Of course, other ways of ending the\nloop early, such as a return\nor a raised exception, will also skip\nexecution of the else\nclause.\nThis is exemplified in the following for\nloop,\nwhich searches for prime numbers:\n>>> for n in range(2, 10):\n... for x in range(2, n):\n... if n % x == 0:\n... print(n, 'equals', x, '*', n//x)\n... break\n... else:\n... # loop fell through without finding a factor\n... print(n, 'is a prime number')\n...\n2 is a prime number\n3 is a prime number\n4 equals 2 * 2\n5 is a prime number\n6 equals 2 * 3\n7 is a prime number\n8 equals 2 * 4\n9 equals 3 * 3\n(Yes, this is the correct code. Look closely: the else\nclause belongs to\nthe for\nloop, not the if\nstatement.)\nOne way to think of the else clause is to imagine it paired with the if\ninside the loop. As the loop executes, it will run a sequence like\nif/if/if/else. The if\nis inside the loop, encountered a number of times. If\nthe condition is ever true, a break\nwill happen. If the condition is never\ntrue, the else\nclause outside the loop will execute.\nWhen used with a loop, the else\nclause has more in common with the else\nclause of a try\nstatement than it does with that of if\nstatements: a try\nstatement\u2019s else\nclause runs when no exception\noccurs, and a loop\u2019s else\nclause runs when no break\noccurs. For more on\nthe try\nstatement and exceptions, see Handling Exceptions.\n4.6. pass\nStatements\u00b6\nThe pass\nstatement does nothing. It can be used when a statement is\nrequired syntactically but the program requires no action. For example:\n>>> while True:\n... pass # Busy-wait for keyboard interrupt (Ctrl+C)\n...\nThis is commonly used for creating minimal classes:\n>>> class MyEmptyClass:\n... pass\n...\nAnother place pass\ncan be used is as a place-holder for a function or\nconditional body when you are working on new code, allowing you to keep thinking\nat a more abstract level. The pass\nis silently ignored:\n>>> def initlog(*args):\n... 
pass # Remember to implement this!\n...\nFor this last case, many people use the ellipsis literal ...\ninstead of\npass\n. This use has no special meaning to Python, and is not part of\nthe language definition (you could use any constant expression here), but\n...\nis used conventionally as a placeholder body as well.\nSee The Ellipsis Object.\n4.7. match\nStatements\u00b6\nA match\nstatement takes an expression and compares its value to successive\npatterns given as one or more case blocks. This is superficially\nsimilar to a switch statement in C, Java or JavaScript (and many\nother languages), but it\u2019s more similar to pattern matching in\nlanguages like Rust or Haskell. Only the first pattern that matches\ngets executed and it can also extract components (sequence elements\nor object attributes) from the value into variables. If no case matches,\nnone of the branches is executed.\nThe simplest form compares a subject value against one or more literals:\ndef http_error(status):\nmatch status:\ncase 400:\nreturn \"Bad request\"\ncase 404:\nreturn \"Not found\"\ncase 418:\nreturn \"I'm a teapot\"\ncase _:\nreturn \"Something's wrong with the internet\"\nNote the last block: the \u201cvariable name\u201d _\nacts as a wildcard and\nnever fails to match.\nYou can combine several literals in a single pattern using |\n(\u201cor\u201d):\ncase 401 | 403 | 404:\nreturn \"Not allowed\"\nPatterns can look like unpacking assignments, and can be used to bind variables:\n# point is an (x, y) tuple\nmatch point:\ncase (0, 0):\nprint(\"Origin\")\ncase (0, y):\nprint(f\"Y={y}\")\ncase (x, 0):\nprint(f\"X={x}\")\ncase (x, y):\nprint(f\"X={x}, Y={y}\")\ncase _:\nraise ValueError(\"Not a point\")\nStudy that one carefully! The first pattern has two literals, and can\nbe thought of as an extension of the literal pattern shown above. But\nthe next two patterns combine a literal and a variable, and the\nvariable binds a value from the subject (point\n). 
The fourth\npattern captures two values, which makes it conceptually similar to\nthe unpacking assignment (x, y) = point\n.\nIf you are using classes to structure your data you can use the class name followed by an argument list resembling a constructor, but with the ability to capture attributes into variables:\nclass Point:\ndef __init__(self, x, y):\nself.x = x\nself.y = y\ndef where_is(point):\nmatch point:\ncase Point(x=0, y=0):\nprint(\"Origin\")\ncase Point(x=0, y=y):\nprint(f\"Y={y}\")\ncase Point(x=x, y=0):\nprint(f\"X={x}\")\ncase Point():\nprint(\"Somewhere else\")\ncase _:\nprint(\"Not a point\")\nYou can use positional parameters with some builtin classes that provide an\nordering for their attributes (e.g. dataclasses). You can also define a specific\nposition for attributes in patterns by setting the __match_args__\nspecial\nattribute in your classes. If it\u2019s set to (\u201cx\u201d, \u201cy\u201d), the following patterns are all\nequivalent (and all bind the y\nattribute to the var\nvariable):\nPoint(1, var)\nPoint(1, y=var)\nPoint(x=1, y=var)\nPoint(y=var, x=1)\nA recommended way to read patterns is to look at them as an extended form of what you\nwould put on the left of an assignment, to understand which variables would be set to\nwhat.\nOnly the standalone names (like var\nabove) are assigned to by a match statement.\nDotted names (like foo.bar\n), attribute names (the x=\nand y=\nabove) or class names\n(recognized by the \u201c(\u2026)\u201d next to them like Point\nabove) are never assigned to.\nPatterns can be arbitrarily nested. 
For example, if we have a short\nlist of Points, with __match_args__\nadded, we could match it like this:\nclass Point:\n__match_args__ = ('x', 'y')\ndef __init__(self, x, y):\nself.x = x\nself.y = y\nmatch points:\ncase []:\nprint(\"No points\")\ncase [Point(0, 0)]:\nprint(\"The origin\")\ncase [Point(x, y)]:\nprint(f\"Single point {x}, {y}\")\ncase [Point(0, y1), Point(0, y2)]:\nprint(f\"Two on the Y axis at {y1}, {y2}\")\ncase _:\nprint(\"Something else\")\nWe can add an if\nclause to a pattern, known as a \u201cguard\u201d. If the\nguard is false, match\ngoes on to try the next case block. Note\nthat value capture happens before the guard is evaluated:\nmatch point:\ncase Point(x, y) if x == y:\nprint(f\"Y=X at {x}\")\ncase Point(x, y):\nprint(f\"Not on the diagonal\")\nSeveral other key features of this statement:\nLike unpacking assignments, tuple and list patterns have exactly the same meaning and actually match arbitrary sequences. An important exception is that they don\u2019t match iterators or strings.\nSequence patterns support extended unpacking:\n[x, y, *rest]\nand(x, y, *rest)\nwork similar to unpacking assignments. The name after*\nmay also be_\n, so(x, y, *_)\nmatches a sequence of at least two items without binding the remaining items.Mapping patterns:\n{\"bandwidth\": b, \"latency\": l}\ncaptures the\"bandwidth\"\nand\"latency\"\nvalues from a dictionary. Unlike sequence patterns, extra keys are ignored. An unpacking like**rest\nis also supported. (But**_\nwould be redundant, so it is not allowed.)Subpatterns may be captured using the\nas\nkeyword:case (Point(x1, y1), Point(x2, y2) as p2): ...\nwill capture the second element of the input as\np2\n(as long as the input is a sequence of two points)Most literals are compared by equality, however the singletons\nTrue\n,False\nandNone\nare compared by identity.Patterns may use named constants. 
These must be dotted names to prevent them from being interpreted as capture variables:\nfrom enum import Enum\nclass Color(Enum):\nRED = 'red'\nGREEN = 'green'\nBLUE = 'blue'\ncolor = Color(input(\"Enter your choice of 'red', 'blue' or 'green': \"))\nmatch color:\ncase Color.RED:\nprint(\"I see red!\")\ncase Color.GREEN:\nprint(\"Grass is green\")\ncase Color.BLUE:\nprint(\"I'm feeling the blues :(\")\nFor a more detailed explanation and additional examples, you can look into PEP 636 which is written in a tutorial format.\n4.8. Defining Functions\u00b6\nWe can create a function that writes the Fibonacci series to an arbitrary boundary:\n>>> def fib(n): # write Fibonacci series less than n\n... \"\"\"Print a Fibonacci series less than n.\"\"\"\n... a, b = 0, 1\n... while a < n:\n... print(a, end=' ')\n... a, b = b, a+b\n... print()\n...\n>>> # Now call the function we just defined:\n>>> fib(2000)\n0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597\nThe keyword def\nintroduces a function definition. It must be\nfollowed by the function name and the parenthesized list of formal parameters.\nThe statements that form the body of the function start at the next line, and\nmust be indented.\nThe first statement of the function body can optionally be a string literal; this string literal is the function\u2019s documentation string, or docstring. (More about docstrings can be found in the section Documentation Strings.) There are tools which use docstrings to automatically produce online or printed documentation, or to let the user interactively browse through code; it\u2019s good practice to include docstrings in code that you write, so make a habit of it.\nThe execution of a function introduces a new symbol table used for the local\nvariables of the function. 
More precisely, all variable assignments in a\nfunction store the value in the local symbol table; whereas variable references\nfirst look in the local symbol table, then in the local symbol tables of\nenclosing functions, then in the global symbol table, and finally in the table\nof built-in names. Thus, global variables and variables of enclosing functions\ncannot be directly assigned a value within a function (unless, for global\nvariables, named in a global\nstatement, or, for variables of enclosing\nfunctions, named in a nonlocal\nstatement), although they may be\nreferenced.\nThe actual parameters (arguments) to a function call are introduced in the local symbol table of the called function when it is called; thus, arguments are passed using call by value (where the value is always an object reference, not the value of the object). [1] When a function calls another function, or calls itself recursively, a new local symbol table is created for that call.\nA function definition associates the function name with the function object in the current symbol table. The interpreter recognizes the object pointed to by that name as a user-defined function. Other names can also point to that same function object and can also be used to access the function:\n>>> fib\n<function fib at 10042ed0>\n>>> f = fib\n>>> f(100)\n0 1 1 2 3 5 8 13 21 34 55 89\nComing from other languages, you might object that fib\nis not a function but\na procedure since it doesn\u2019t return a value. In fact, even functions without a\nreturn\nstatement do return a value, albeit a rather boring one. This\nvalue is called None\n(it\u2019s a built-in name). Writing the value None\nis\nnormally suppressed by the interpreter if it would be the only value written.\nYou can see it if you really want to using print()\n:\n>>> fib(0)\n>>> print(fib(0))\nNone\nIt is simple to write a function that returns a list of the numbers of the Fibonacci series, instead of printing it:\n>>> def fib2(n): # return Fibonacci series up to n\n... 
\"\"\"Return a list containing the Fibonacci series up to n.\"\"\"\n... result = []\n... a, b = 0, 1\n... while a < n:\n... result.append(a) # see below\n... a, b = b, a+b\n... return result\n...\n>>> f100 = fib2(100) # call it\n>>> f100 # write the result\n[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]\nThis example, as usual, demonstrates some new Python features:\nThe\nreturn\nstatement returns with a value from a function.return\nwithout an expression argument returnsNone\n. Falling off the end of a function also returnsNone\n.The statement\nresult.append(a)\ncalls a method of the list objectresult\n. A method is a function that \u2018belongs\u2019 to an object and is namedobj.methodname\n, whereobj\nis some object (this may be an expression), andmethodname\nis the name of a method that is defined by the object\u2019s type. Different types define different methods. Methods of different types may have the same name without causing ambiguity. (It is possible to define your own object types and methods, using classes, see Classes) The methodappend()\nshown in the example is defined for list objects; it adds a new element at the end of the list. In this example it is equivalent toresult = result + [a]\n, but more efficient.\n4.9. More on Defining Functions\u00b6\nIt is also possible to define functions with a variable number of arguments. There are three forms, which can be combined.\n4.9.1. Default Argument Values\u00b6\nThe most useful form is to specify a default value for one or more arguments. This creates a function that can be called with fewer arguments than it is defined to allow. 
For example:\ndef ask_ok(prompt, retries=4, reminder='Please try again!'):\nwhile True:\nreply = input(prompt)\nif reply in {'y', 'ye', 'yes'}:\nreturn True\nif reply in {'n', 'no', 'nop', 'nope'}:\nreturn False\nretries = retries - 1\nif retries < 0:\nraise ValueError('invalid user response')\nprint(reminder)\nThis function can be called in several ways:\ngiving only the mandatory argument:\nask_ok('Do you really want to quit?')\ngiving one of the optional arguments:\nask_ok('OK to overwrite the file?', 2)\nor even giving all arguments:\nask_ok('OK to overwrite the file?', 2, 'Come on, only yes or no!')\nThis example also introduces the in\nkeyword. This tests whether or\nnot a sequence contains a certain value.\nThe default values are evaluated at the point of function definition in the defining scope, so that\ni = 5\ndef f(arg=i):\nprint(arg)\ni = 6\nf()\nwill print 5\n.\nImportant warning: The default value is evaluated only once. This makes a difference when the default is a mutable object such as a list, dictionary, or instances of most classes. For example, the following function accumulates the arguments passed to it on subsequent calls:\ndef f(a, L=[]):\nL.append(a)\nreturn L\nprint(f(1))\nprint(f(2))\nprint(f(3))\nThis will print\n[1]\n[1, 2]\n[1, 2, 3]\nIf you don\u2019t want the default to be shared between subsequent calls, you can write the function like this instead:\ndef f(a, L=None):\nif L is None:\nL = []\nL.append(a)\nreturn L\n4.9.2. Keyword Arguments\u00b6\nFunctions can also be called using keyword arguments\nof the form kwarg=value\n. For instance, the following function:\ndef parrot(voltage, state='a stiff', action='voom', type='Norwegian Blue'):\nprint(\"-- This parrot wouldn't\", action, end=' ')\nprint(\"if you put\", voltage, \"volts through it.\")\nprint(\"-- Lovely plumage, the\", type)\nprint(\"-- It's\", state, \"!\")\naccepts one required argument (voltage\n) and three optional arguments\n(state\n, action\n, and type\n). 
This function can be called in any\nof the following ways:\nparrot(1000) # 1 positional argument\nparrot(voltage=1000) # 1 keyword argument\nparrot(voltage=1000000, action='VOOOOOM') # 2 keyword arguments\nparrot(action='VOOOOOM', voltage=1000000) # 2 keyword arguments\nparrot('a million', 'bereft of life', 'jump') # 3 positional arguments\nparrot('a thousand', state='pushing up the daisies') # 1 positional, 1 keyword\nbut all the following calls would be invalid:\nparrot() # required argument missing\nparrot(voltage=5.0, 'dead') # non-keyword argument after a keyword argument\nparrot(110, voltage=220) # duplicate value for the same argument\nparrot(actor='John Cleese') # unknown keyword argument\nIn a function call, keyword arguments must follow positional arguments.\nAll the keyword arguments passed must match one of the arguments\naccepted by the function (e.g. actor\nis not a valid argument for the\nparrot\nfunction), and their order is not important. This also includes\nnon-optional arguments (e.g. parrot(voltage=1000)\nis valid too).\nNo argument may receive a value more than once.\nHere\u2019s an example that fails due to this restriction:\n>>> def function(a):\n... pass\n...\n>>> function(0, a=0)\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: function() got multiple values for argument 'a'\nWhen a final formal parameter of the form **name\nis present, it receives a\ndictionary (see Mapping Types \u2014 dict) containing all keyword arguments except for\nthose corresponding to a formal parameter. This may be combined with a formal\nparameter of the form *name\n(described in the next subsection) which\nreceives a tuple containing the positional\narguments beyond the formal parameter list. (*name\nmust occur\nbefore **name\n.) 
For example, if we define a function like this:\ndef cheeseshop(kind, *arguments, **keywords):\nprint(\"-- Do you have any\", kind, \"?\")\nprint(\"-- I'm sorry, we're all out of\", kind)\nfor arg in arguments:\nprint(arg)\nprint(\"-\" * 40)\nfor kw in keywords:\nprint(kw, \":\", keywords[kw])\nIt could be called like this:\ncheeseshop(\"Limburger\", \"It's very runny, sir.\",\n\"It's really very, VERY runny, sir.\",\nshopkeeper=\"Michael Palin\",\nclient=\"John Cleese\",\nsketch=\"Cheese Shop Sketch\")\nand of course it would print:\n-- Do you have any Limburger ?\n-- I'm sorry, we're all out of Limburger\nIt's very runny, sir.\nIt's really very, VERY runny, sir.\n----------------------------------------\nshopkeeper : Michael Palin\nclient : John Cleese\nsketch : Cheese Shop Sketch\nNote that the order in which the keyword arguments are printed is guaranteed to match the order in which they were provided in the function call.\n4.9.3. Special parameters\u00b6\nBy default, arguments may be passed to a Python function either by position or explicitly by keyword. For readability and performance, it makes sense to restrict the way arguments can be passed so that a developer need only look at the function definition to determine if items are passed by position, by position or keyword, or by keyword.\nA function definition may look like:\ndef f(pos1, pos2, /, pos_or_kwd, *, kwd1, kwd2):\n----------- ---------- ----------\n| | |\n| Positional or keyword |\n| - Keyword only\n-- Positional only\nwhere /\nand *\nare optional. If used, these symbols indicate the kind of\nparameter by how the arguments may be passed to the function:\npositional-only, positional-or-keyword, and keyword-only. Keyword parameters\nare also referred to as named parameters.\n4.9.3.1. Positional-or-Keyword Arguments\u00b6\nIf /\nand *\nare not present in the function definition, arguments may\nbe passed to a function by position or by keyword.\n4.9.3.2. 
Positional-Only Parameters\u00b6\nLooking at this in a bit more detail, it is possible to mark certain parameters\nas positional-only. If positional-only, the parameters\u2019 order matters, and\nthe parameters cannot be passed by keyword. Positional-only parameters are\nplaced before a /\n(forward-slash). The /\nis used to logically\nseparate the positional-only parameters from the rest of the parameters.\nIf there is no /\nin the function definition, there are no positional-only\nparameters.\nParameters following the /\nmay be positional-or-keyword or keyword-only.\n4.9.3.3. Keyword-Only Arguments\u00b6\nTo mark parameters as keyword-only, indicating the parameters must be passed\nby keyword argument, place an *\nin the arguments list just before the first\nkeyword-only parameter.\n4.9.3.4. Function Examples\u00b6\nConsider the following example function definitions paying close attention to the\nmarkers /\nand *\n:\n>>> def standard_arg(arg):\n... print(arg)\n...\n>>> def pos_only_arg(arg, /):\n... print(arg)\n...\n>>> def kwd_only_arg(*, arg):\n... print(arg)\n...\n>>> def combined_example(pos_only, /, standard, *, kwd_only):\n... 
print(pos_only, standard, kwd_only)\nThe first function definition, standard_arg\n, the most familiar form,\nplaces no restrictions on the calling convention and arguments may be\npassed by position or keyword:\n>>> standard_arg(2)\n2\n>>> standard_arg(arg=2)\n2\nThe second function pos_only_arg\nis restricted to only use positional\nparameters as there is a /\nin the function definition:\n>>> pos_only_arg(1)\n1\n>>> pos_only_arg(arg=1)\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: pos_only_arg() got some positional-only arguments passed as keyword arguments: 'arg'\nThe third function kwd_only_arg\nonly allows keyword arguments as indicated\nby a *\nin the function definition:\n>>> kwd_only_arg(3)\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: kwd_only_arg() takes 0 positional arguments but 1 was given\n>>> kwd_only_arg(arg=3)\n3\nAnd the last uses all three calling conventions in the same function definition:\n>>> combined_example(1, 2, 3)\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: combined_example() takes 2 positional arguments but 3 were given\n>>> combined_example(1, 2, kwd_only=3)\n1 2 3\n>>> combined_example(1, standard=2, kwd_only=3)\n1 2 3\n>>> combined_example(pos_only=1, standard=2, kwd_only=3)\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: combined_example() got some positional-only arguments passed as keyword arguments: 'pos_only'\nFinally, consider this function definition which has a potential collision between the positional argument name\nand **kwds\nwhich has name\nas a key:\ndef foo(name, **kwds):\nreturn 'name' in kwds\nThere is no possible call that will make it return True\nas the keyword 'name'\nwill always bind to the first parameter. 
For example:\n>>> foo(1, **{'name': 2})\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: foo() got multiple values for argument 'name'\n>>>\nBut using /\n(positional only arguments), it is possible since it allows name\nas a positional argument and 'name'\nas a key in the keyword arguments:\n>>> def foo(name, /, **kwds):\n... return 'name' in kwds\n...\n>>> foo(1, **{'name': 2})\nTrue\nIn other words, the names of positional-only parameters can be used in\n**kwds\nwithout ambiguity.\n4.9.3.5. Recap\u00b6\nThe use case will determine which parameters to use in the function definition:\ndef f(pos1, pos2, /, pos_or_kwd, *, kwd1, kwd2):\nAs guidance:\nUse positional-only if you want the name of the parameters to not be available to the user. This is useful when parameter names have no real meaning, if you want to enforce the order of the arguments when the function is called or if you need to take some positional parameters and arbitrary keywords.\nUse keyword-only when names have meaning and the function definition is more understandable by being explicit with names or you want to prevent users relying on the position of the argument being passed.\nFor an API, use positional-only to prevent breaking API changes if the parameter\u2019s name is modified in the future.\n4.9.4. Arbitrary Argument Lists\u00b6\nFinally, the least frequently used option is to specify that a function can be called with an arbitrary number of arguments. These arguments will be wrapped up in a tuple (see Tuples and Sequences). Before the variable number of arguments, zero or more normal arguments may occur.\ndef write_multiple_items(file, separator, *args):\nfile.write(separator.join(args))\nNormally, these variadic arguments will be last in the list of formal\nparameters, because they scoop up all remaining input arguments that are\npassed to the function. 
Any formal parameters which occur after the *args\nparameter are \u2018keyword-only\u2019 arguments, meaning that they can only be used as\nkeywords rather than positional arguments.\n>>> def concat(*args, sep=\"/\"):\n... return sep.join(args)\n...\n>>> concat(\"earth\", \"mars\", \"venus\")\n'earth/mars/venus'\n>>> concat(\"earth\", \"mars\", \"venus\", sep=\".\")\n'earth.mars.venus'\n4.9.5. Unpacking Argument Lists\u00b6\nThe reverse situation occurs when the arguments are already in a list or tuple\nbut need to be unpacked for a function call requiring separate positional\narguments. For instance, the built-in range()\nfunction expects separate\nstart and stop arguments. If they are not available separately, write the\nfunction call with the *\n-operator to unpack the arguments out of a list\nor tuple:\n>>> list(range(3, 6)) # normal call with separate arguments\n[3, 4, 5]\n>>> args = [3, 6]\n>>> list(range(*args)) # call with arguments unpacked from a list\n[3, 4, 5]\nIn the same fashion, dictionaries can deliver keyword arguments with the\n**\n-operator:\n>>> def parrot(voltage, state='a stiff', action='voom'):\n... print(\"-- This parrot wouldn't\", action, end=' ')\n... print(\"if you put\", voltage, \"volts through it.\", end=' ')\n... print(\"E's\", state, \"!\")\n...\n>>> d = {\"voltage\": \"four million\", \"state\": \"bleedin' demised\", \"action\": \"VOOM\"}\n>>> parrot(**d)\n-- This parrot wouldn't VOOM if you put four million volts through it. E's bleedin' demised !\n4.9.6. Lambda Expressions\u00b6\nSmall anonymous functions can be created with the lambda\nkeyword.\nThis function returns the sum of its two arguments: lambda a, b: a+b\n.\nLambda functions can be used wherever function objects are required. They are\nsyntactically restricted to a single expression. Semantically, they are just\nsyntactic sugar for a normal function definition. 
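The "syntactic sugar" remark can be checked directly: a lambda and an equivalent def behave identically (the names add and add_anon here are illustrative):

```python
# A lambda is shorthand for a def whose body is a single return expression.
def add(a, b):
    return a + b

add_anon = lambda a, b: a + b

# Both are ordinary function objects and give the same results.
print(add(2, 3), add_anon(2, 3))   # 5 5
```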
Like nested function\ndefinitions, lambda functions can reference variables from the containing\nscope:\n>>> def make_incrementor(n):\n... return lambda x: x + n\n...\n>>> f = make_incrementor(42)\n>>> f(0)\n42\n>>> f(1)\n43\nThe above example uses a lambda expression to return a function. Another use\nis to pass a small function as an argument. For instance, list.sort()\ntakes a sorting key function key which can be a lambda function:\n>>> pairs = [(1, 'one'), (2, 'two'), (3, 'three'), (4, 'four')]\n>>> pairs.sort(key=lambda pair: pair[1])\n>>> pairs\n[(4, 'four'), (1, 'one'), (3, 'three'), (2, 'two')]\n4.9.7. Documentation Strings\u00b6\nHere are some conventions about the content and formatting of documentation strings.\nThe first line should always be a short, concise summary of the object\u2019s purpose. For brevity, it should not explicitly state the object\u2019s name or type, since these are available by other means (except if the name happens to be a verb describing a function\u2019s operation). This line should begin with a capital letter and end with a period.\nIf there are more lines in the documentation string, the second line should be blank, visually separating the summary from the rest of the description. The following lines should be one or more paragraphs describing the object\u2019s calling conventions, its side effects, etc.\nThe Python parser does not strip indentation from multi-line string literals, so tools that process documentation have to strip the indentation themselves if desired.\nHere is an example of a multi-line docstring:\n>>> def my_function():\n... \"\"\"Do nothing, but document it.\n...\n... No, really, it doesn't do anything:\n...\n... >>> my_function()\n... >>>\n... \"\"\"\n... pass\n...\n>>> print(my_function.__doc__)\nDo nothing, but document it.\nNo, really, it doesn't do anything:\n>>> my_function()\n>>>\n4.9.8. 
Function Annotations\u00b6\nFunction annotations are completely optional metadata information about the types used by user-defined functions (see PEP 3107 and PEP 484 for more information).\nAnnotations are stored in the __annotations__\nattribute of the function as a dictionary and have no effect on any other part of the\nfunction. Parameter annotations are defined by a colon after the parameter name, followed\nby an expression evaluating to the value of the annotation. Return annotations are\ndefined by a literal ->\n, followed by an expression, between the parameter\nlist and the colon denoting the end of the def\nstatement. The\nfollowing example has a required argument, an optional argument, and the return\nvalue annotated:\n>>> def f(ham: str, eggs: str = 'eggs') -> str:\n... print(\"Annotations:\", f.__annotations__)\n... print(\"Arguments:\", ham, eggs)\n... return ham + ' and ' + eggs\n...\n>>> f('spam')\nAnnotations: {'ham': <class 'str'>, 'return': <class 'str'>, 'eggs': <class 'str'>}\nArguments: spam eggs\n'spam and eggs'\n4.10. Intermezzo: Coding Style\u00b6\nNow that you are about to write longer, more complex pieces of Python, it is a good time to talk about coding style. Most languages can be written (or more concisely, formatted) in different styles; some are more readable than others. Making it easy for others to read your code is always a good idea, and adopting a nice coding style helps tremendously for that.\nFor Python, PEP 8 has emerged as the style guide that most projects adhere to; it promotes a very readable and eye-pleasing coding style. Every Python developer should read it at some point; here are the most important points extracted for you:\nUse 4-space indentation, and no tabs.\n4 spaces are a good compromise between small indentation (allows greater nesting depth) and large indentation (easier to read). 
Tabs introduce confusion, and are best left out.\nWrap lines so that they don\u2019t exceed 79 characters.\nThis helps users with small displays and makes it possible to have several code files side-by-side on larger displays.\nUse blank lines to separate functions and classes, and larger blocks of code inside functions.\nWhen possible, put comments on a line of their own.\nUse docstrings.\nUse spaces around operators and after commas, but not directly inside bracketing constructs:\na = f(1, 2) + g(3, 4)\n.Name your classes and functions consistently; the convention is to use\nUpperCamelCase\nfor classes andlowercase_with_underscores\nfor functions and methods. Always useself\nas the name for the first method argument (see A First Look at Classes for more on classes and methods).Don\u2019t use fancy encodings if your code is meant to be used in international environments. Python\u2019s default, UTF-8, or even plain ASCII work best in any case.\nLikewise, don\u2019t use non-ASCII characters in identifiers if there is only the slightest chance people speaking a different language will read or maintain the code.\nFootnotes", "code_snippets": ["
", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", "\n", " ", "\n", " ", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", " ", "\n", " ", "\n", " ", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", "\n", " ", " ", "\n", "\n", "\n", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", "\n ", " ", "\n", " ", " ", " ", " ", " ", "\n ", " ", "\n", "\n", " ", "\n ", " ", " ", "\n ", "\n ", " ", " ", "\n ", "\n ", " ", " ", "\n ", "\n ", " ", " ", "\n ", "\n ", "\n ", " ", "\n", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n\n", "\n ", " ", "\n ", " ", " ", "\n ", "\n ", " ", " ", "\n ", "\n ", " ", " ", "\n ", "\n ", " ", "\n ", "\n ", "\n ", "\n", " ", "\n", " ", "\n", " ", "\n", " ", "\n", "\n ", " ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n\n", " ", "\n ", " ", "\n ", "\n ", " ", " ", "\n ", "\n ", " ", " ", "\n ", "\n ", " ", " ", " ", " ", "\n ", "\n ", "\n ", "\n", " ", "\n ", " ", " ", " ", " ", " ", " ", "\n ", "\n ", " ", " ", "\n ", "\n", " ", " ", " ", " ", " ", " ", " ", "\n", " ", "\n", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n\n", " ", " ", "\n\n", " ", "\n ", " ", "\n ", "\n ", " ", "\n ", "\n ", " ", "\n ", "\n", " ", "\n", "\n", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", " ", "\n", "\n", " ", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", " ", " ", "\n", "\n", " ", " ", " ", "\n", " ", "\n", "\n", " ", " ", "\n ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", " ", " ", "\n ", " ", "\n ", " 
", " ", " ", " ", " ", " ", "\n ", " ", "\n ", " ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", "\n ", "\n", " ", " ", "\n\n", "\n ", "\n\n", " ", " ", "\n", "\n", " ", "\n ", "\n ", " ", "\n\n", "\n", "\n", "\n", "\n", " ", "\n", " ", " ", "\n", " ", "\n ", " ", " ", " ", "\n ", " ", " ", "\n ", "\n ", " ", "\n", " ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", "\n ", " ", " ", "\n", " ", "\n", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", "\n", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n", " ", " ", "\n ", " ", " ", "\n ", " ", "\n ", " ", " ", " ", "\n ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", "\n", " ", "\n ", "\n ", "\n ", "\n ", "\n", "\n", " ", "\n", "\n", " ", "\n", " ", "\n", "\n", " ", "\n", " ", "\n", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", "\n", "\n", "\n\n", "\n", "\n", "\n", "\n\n", "\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n", "\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n\n", "\n", "\n", " ", " ", "\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n\n", " ", " ", "\n", "\n\n", " ", " ", "\n", "\n\n", " ", " ", "\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n", " ", "\n ", " ", " ", " ", "\n", " ", " ", "\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n", "\n", " ", " ", "\n", " ", " ", " ", " ", "\n", "\n", " ", " ", "\n", "\n", " ", " ", " ", " ", " ", " ", "\n", " ", " ", "\n ", "\n", " ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", " ", " ", " ", "\n", "\n", " ", " ", "\n", "\n", " ", " ", " ", "\n", " ", "\n", "\n", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", "\n", "\n", " ", " ", " ", " ", " ", " ", " ", "\n", "\n", "\n", "\n", " ", " ", " ", " ", " ", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", 
"\n", "\n", " ", "\n", "\n", "\n", "\n\n", "\n\n", "\n", "\n", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 8619} +{"url": "https://docs.python.org/3/c-api/module.html", "title": "Module Objects", "content": "Module Objects\u00b6\n-\nPyTypeObject PyModule_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python module type. This is exposed to Python programs astypes.ModuleType\n.\n-\nint PyModule_Check(PyObject *p)\u00b6\nReturn true if p is a module object, or a subtype of a module object. This function always succeeds.\n-\nint PyModule_CheckExact(PyObject *p)\u00b6\nReturn true if p is a module object, but not a subtype of\nPyModule_Type\n. This function always succeeds.\n-\nPyObject *PyModule_NewObject(PyObject *name)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nReturn a new module object with\nmodule.__name__\nset to name. The module\u2019s__name__\n,__doc__\n,__package__\nand__loader__\nattributes are filled in (all but__name__\nare set toNone\n). The caller is responsible for setting a__file__\nattribute.Return\nNULL\nwith an exception set on error.Added in version 3.3.\nChanged in version 3.4:\n__package__\nand__loader__\nare now set toNone\n.\n-\nPyObject *PyModule_New(const char *name)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nSimilar to\nPyModule_NewObject()\n, but the name is a UTF-8 encoded string instead of a Unicode object.\n-\nPyObject *PyModule_GetDict(PyObject *module)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nReturn the dictionary object that implements module\u2019s namespace; this object is the same as the\n__dict__\nattribute of the module object. 
If module is not a module object (or a subtype of a module object),SystemError\nis raised andNULL\nis returned.It is recommended extensions use other\nPyModule_*\nandPyObject_*\nfunctions rather than directly manipulate a module\u2019s__dict__\n.The returned reference is borrowed from the module; it is valid until the module is destroyed.\n-\nPyObject *PyModule_GetNameObject(PyObject *module)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nReturn module\u2019s\n__name__\nvalue. If the module does not provide one, or if it is not a string,SystemError\nis raised andNULL\nis returned.Added in version 3.3.\n-\nconst char *PyModule_GetName(PyObject *module)\u00b6\n- Part of the Stable ABI.\nSimilar to\nPyModule_GetNameObject()\nbut return the name encoded to'utf-8'\n.The returned buffer is only valid until the module is renamed or destroyed. Note that Python code may rename a module by setting its\n__name__\nattribute.\n-\nvoid *PyModule_GetState(PyObject *module)\u00b6\n- Part of the Stable ABI.\nReturn the \u201cstate\u201d of the module, that is, a pointer to the block of memory allocated at module creation time, or\nNULL\n. SeePyModuleDef.m_size\n.\n-\nPyModuleDef *PyModule_GetDef(PyObject *module)\u00b6\n- Part of the Stable ABI.\nReturn a pointer to the\nPyModuleDef\nstruct from which the module was created, orNULL\nif the module wasn\u2019t created from a definition.On error, return\nNULL\nwith an exception set. UsePyErr_Occurred()\nto tell this case apart from a missingPyModuleDef\n.\n-\nPyObject *PyModule_GetFilenameObject(PyObject *module)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the name of the file from which module was loaded using module\u2019s\n__file__\nattribute. 
If this is not defined, or if it is not a string, raiseSystemError\nand returnNULL\n; otherwise return a reference to a Unicode object.Added in version 3.2.\n-\nconst char *PyModule_GetFilename(PyObject *module)\u00b6\n- Part of the Stable ABI.\nSimilar to\nPyModule_GetFilenameObject()\nbut return the filename encoded to \u2018utf-8\u2019.The returned buffer is only valid until the module\u2019s\n__file__\nattribute is reassigned or the module is destroyed.Deprecated since version 3.2:\nPyModule_GetFilename()\nraisesUnicodeEncodeError\non unencodable filenames, usePyModule_GetFilenameObject()\ninstead.\nModule definitions\u00b6\nThe functions in the previous section work on any module object, including modules imported from Python code.\nModules defined using the C API typically use a module definition,\nPyModuleDef\n\u2013 a statically allocated, constant \u201cdescription\u201d of\nhow a module should be created.\nThe definition is usually used to define an extension\u2019s \u201cmain\u201d module object (see Defining extension modules for details). It is also used to create extension modules dynamically.\nUnlike PyModule_New()\n, the definition allows management of\nmodule state \u2013 a piece of memory that is allocated and cleared together\nwith the module object.\nUnlike the module\u2019s Python attributes, Python code cannot replace or delete\ndata stored in module state.\n-\ntype PyModuleDef\u00b6\n- Part of the Stable ABI (including all members).\nThe module definition struct, which holds all information needed to create a module object. This structure must be statically allocated (or be otherwise guaranteed to be valid while any modules created from it exist). 
Usually, there is only one variable of this type for each extension module.\n-\nPyModuleDef_Base m_base\u00b6\nAlways initialize this member to\nPyModuleDef_HEAD_INIT\n.\n-\nconst char *m_name\u00b6\nName for the new module.\n-\nconst char *m_doc\u00b6\nDocstring for the module; usually a docstring variable created with\nPyDoc_STRVAR\nis used.\n-\nPy_ssize_t m_size\u00b6\nModule state may be kept in a per-module memory area that can be retrieved with\nPyModule_GetState()\n, rather than in static globals. This makes modules safe for use in multiple sub-interpreters.This memory area is allocated based on m_size on module creation, and freed when the module object is deallocated, after the\nm_free\nfunction has been called, if present.Setting it to a non-negative value means that the module can be re-initialized and specifies the additional amount of memory it requires for its state.\nSetting\nm_size\nto-1\nmeans that the module does not support sub-interpreters, because it has global state. Negativem_size\nis only allowed when using legacy single-phase initialization or when creating modules dynamically.See PEP 3121 for more details.\n-\nPyMethodDef *m_methods\u00b6\nA pointer to a table of module-level functions, described by\nPyMethodDef\nvalues. Can beNULL\nif no functions are present.\n-\nPyModuleDef_Slot *m_slots\u00b6\nAn array of slot definitions for multi-phase initialization, terminated by a\n{0, NULL}\nentry. When using legacy single-phase initialization, m_slots must beNULL\n.\n-\ntraverseproc m_traverse\u00b6\nA traversal function to call during GC traversal of the module object, or\nNULL\nif not needed.This function is not called if the module state was requested but is not allocated yet. This is the case immediately after the module is created and before the module is executed (\nPy_mod_exec\nfunction). 
More precisely, this function is not called ifm_size\nis greater than 0 and the module state (as returned byPyModule_GetState()\n) isNULL\n.Changed in version 3.9: No longer called before the module state is allocated.\n-\ninquiry m_clear\u00b6\nA clear function to call during GC clearing of the module object, or\nNULL\nif not needed.This function is not called if the module state was requested but is not allocated yet. This is the case immediately after the module is created and before the module is executed (\nPy_mod_exec\nfunction). More precisely, this function is not called ifm_size\nis greater than 0 and the module state (as returned byPyModule_GetState()\n) isNULL\n.Like\nPyTypeObject.tp_clear\n, this function is not always called before a module is deallocated. For example, when reference counting is enough to determine that an object is no longer used, the cyclic garbage collector is not involved andm_free\nis called directly.Changed in version 3.9: No longer called before the module state is allocated.\n-\nfreefunc m_free\u00b6\nA function to call during deallocation of the module object, or\nNULL\nif not needed.This function is not called if the module state was requested but is not allocated yet. This is the case immediately after the module is created and before the module is executed (\nPy_mod_exec\nfunction). 
More precisely, this function is not called ifm_size\nis greater than 0 and the module state (as returned byPyModule_GetState()\n) isNULL\n.Changed in version 3.9: No longer called before the module state is allocated.\n-\nPyModuleDef_Base m_base\u00b6\nModule slots\u00b6\n-\ntype PyModuleDef_Slot\u00b6\n- Part of the Stable ABI (including all members) since version 3.5.\n-\nint slot\u00b6\nA slot ID, chosen from the available values explained below.\n-\nvoid *value\u00b6\nValue of the slot, whose meaning depends on the slot ID.\nAdded in version 3.5.\n-\nint slot\u00b6\nThe available slot types are:\n-\nPy_mod_create\u00b6\n- Part of the Stable ABI since version 3.5.\nSpecifies a function that is called to create the module object itself. The value pointer of this slot must point to a function of the signature:\n-\nPyObject *create_module(PyObject *spec, PyModuleDef *def)\u00b6\nThe function receives a\nModuleSpec\ninstance, as defined in PEP 451, and the module definition. It should return a new module object, or set an error and returnNULL\n.This function should be kept minimal. In particular, it should not call arbitrary Python code, as trying to import the same module again may result in an infinite loop.\nMultiple\nPy_mod_create\nslots may not be specified in one module definition.If\nPy_mod_create\nis not specified, the import machinery will create a normal module object usingPyModule_New()\n. The name is taken from spec, not the definition, to allow extension modules to dynamically adjust to their place in the module hierarchy and be imported under different names through symlinks, all while sharing a single module definition.There is no requirement for the returned object to be an instance of\nPyModule_Type\n. Any type can be used, as long as it supports setting and getting import-related attributes. 
However, onlyPyModule_Type\ninstances may be returned if thePyModuleDef\nhas non-NULL\nm_traverse\n,m_clear\n,m_free\n; non-zerom_size\n; or slots other thanPy_mod_create\n.Added in version 3.5.\n-\nPyObject *create_module(PyObject *spec, PyModuleDef *def)\u00b6\n-\nPy_mod_exec\u00b6\n- Part of the Stable ABI since version 3.5.\nSpecifies a function that is called to execute the module. This is equivalent to executing the code of a Python module: typically, this function adds classes and constants to the module. The signature of the function is:\nIf multiple\nPy_mod_exec\nslots are specified, they are processed in the order they appear in the m_slots array.Added in version 3.5.\n-\nPy_mod_multiple_interpreters\u00b6\n- Part of the Stable ABI since version 3.12.\nSpecifies one of the following values:\n-\nPy_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED\u00b6\nThe module does not support being imported in subinterpreters.\n-\nPy_MOD_MULTIPLE_INTERPRETERS_SUPPORTED\u00b6\nThe module supports being imported in subinterpreters, but only when they share the main interpreter\u2019s GIL. (See Isolating Extension Modules.)\n-\nPy_MOD_PER_INTERPRETER_GIL_SUPPORTED\u00b6\nThe module supports being imported in subinterpreters, even when they have their own GIL. 
(See Isolating Extension Modules.)\nThis slot determines whether or not importing this module in a subinterpreter will fail.\nMultiple\nPy_mod_multiple_interpreters\nslots may not be specified in one module definition.If\nPy_mod_multiple_interpreters\nis not specified, the import machinery defaults toPy_MOD_MULTIPLE_INTERPRETERS_SUPPORTED\n.Added in version 3.12.\n-\nPy_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED\u00b6\n-\nPy_mod_gil\u00b6\n- Part of the Stable ABI since version 3.13.\nSpecifies one of the following values:\n-\nPy_MOD_GIL_USED\u00b6\nThe module depends on the presence of the global interpreter lock (GIL), and may access global state without synchronization.\n-\nPy_MOD_GIL_NOT_USED\u00b6\nThe module is safe to run without an active GIL.\nThis slot is ignored by Python builds not configured with\n--disable-gil\n. Otherwise, it determines whether or not importing this module will cause the GIL to be automatically enabled. See Free-threaded CPython for more detail.Multiple\nPy_mod_gil\nslots may not be specified in one module definition.If\nPy_mod_gil\nis not specified, the import machinery defaults toPy_MOD_GIL_USED\n.Added in version 3.13.\n-\nPy_MOD_GIL_USED\u00b6\nCreating extension modules dynamically\u00b6\nThe following functions may be used to create a module outside of an extension\u2019s initialization function. They are also used in single-phase initialization.\n-\nPyObject *PyModule_Create(PyModuleDef *def)\u00b6\n- Return value: New reference.\nCreate a new module object, given the definition in def. This is a macro that calls\nPyModule_Create2()\nwith module_api_version set toPYTHON_API_VERSION\n, or toPYTHON_ABI_VERSION\nif using the limited API.\n-\nPyObject *PyModule_Create2(PyModuleDef *def, int module_api_version)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a new module object, given the definition in def, assuming the API version module_api_version. 
If that version does not match the version of the running interpreter, a\nRuntimeWarning\nis emitted.Return\nNULL\nwith an exception set on error.This function does not support slots. The\nm_slots\nmember of def must beNULL\n.Note\nMost uses of this function should be using\nPyModule_Create()\ninstead; only use this if you are sure you need it.\n-\nPyObject *PyModule_FromDefAndSpec(PyModuleDef *def, PyObject *spec)\u00b6\n- Return value: New reference.\nThis macro calls\nPyModule_FromDefAndSpec2()\nwith module_api_version set toPYTHON_API_VERSION\n, or toPYTHON_ABI_VERSION\nif using the limited API.Added in version 3.5.\n-\nPyObject *PyModule_FromDefAndSpec2(PyModuleDef *def, PyObject *spec, int module_api_version)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nCreate a new module object, given the definition in def and the ModuleSpec spec, assuming the API version module_api_version. If that version does not match the version of the running interpreter, a\nRuntimeWarning\nis emitted.Return\nNULL\nwith an exception set on error.Note that this does not process execution slots (\nPy_mod_exec\n). BothPyModule_FromDefAndSpec\nandPyModule_ExecDef\nmust be called to fully initialize a module.Note\nMost uses of this function should be using\nPyModule_FromDefAndSpec()\ninstead; only use this if you are sure you need it.Added in version 3.5.\n-\nint PyModule_ExecDef(PyObject *module, PyModuleDef *def)\u00b6\n- Part of the Stable ABI since version 3.7.\nProcess any execution slots (\nPy_mod_exec\n) given in def.Added in version 3.5.\n-\nPYTHON_API_VERSION\u00b6\nThe C API version. Defined for backwards compatibility.\nCurrently, this constant is not updated in new Python versions, and is not useful for versioning. This may change in the future.\n-\nPYTHON_ABI_VERSION\u00b6\nDefined as\n3\nfor backwards compatibility.Currently, this constant is not updated in new Python versions, and is not useful for versioning. 
This may change in the future.\nSupport functions\u00b6\nThe following functions are provided to help initialize a module\nstate.\nThey are intended for a module\u2019s execution slots (Py_mod_exec\n),\nthe initialization function for legacy single-phase initialization,\nor code that creates modules dynamically.\n-\nint PyModule_AddObjectRef(PyObject *module, const char *name, PyObject *value)\u00b6\n- Part of the Stable ABI since version 3.10.\nAdd an object to module as name. This is a convenience function which can be used from the module\u2019s initialization function.\nOn success, return\n0\n. On error, raise an exception and return-1\n.Example usage:\nstatic int add_spam(PyObject *module, int value) { PyObject *obj = PyLong_FromLong(value); if (obj == NULL) { return -1; } int res = PyModule_AddObjectRef(module, \"spam\", obj); Py_DECREF(obj); return res; }\nTo be convenient, the function accepts\nNULL\nvalue with an exception set. In this case, return-1\nand just leave the raised exception unchanged.The example can also be written without checking explicitly if obj is\nNULL\n:static int add_spam(PyObject *module, int value) { PyObject *obj = PyLong_FromLong(value); int res = PyModule_AddObjectRef(module, \"spam\", obj); Py_XDECREF(obj); return res; }\nNote that\nPy_XDECREF()\nshould be used instead ofPy_DECREF()\nin this case, since obj can beNULL\n.The number of different name strings passed to this function should be kept small, usually by only using statically allocated strings as name. For names that aren\u2019t known at compile time, prefer calling\nPyUnicode_FromString()\nandPyObject_SetAttr()\ndirectly. For more details, seePyUnicode_InternFromString()\n, which may be used internally to create a key object.Added in version 3.10.\n-\nint PyModule_Add(PyObject *module, const char *name, PyObject *value)\u00b6\n- Part of the Stable ABI since version 3.13.\nSimilar to\nPyModule_AddObjectRef()\n, but \u201csteals\u201d a reference to value. 
It can be called with the result of a function that returns a new reference without bothering to check its result or even saving it to a variable. Example usage:\nif (PyModule_Add(module, \"spam\", PyBytes_FromString(value)) < 0) { goto error; }\nAdded in version 3.13.\n-\nint PyModule_AddObject(PyObject *module, const char *name, PyObject *value)\u00b6\n- Part of the Stable ABI.\nSimilar to\nPyModule_AddObjectRef()\n, but steals a reference to value on success (if it returns 0\n). The new\nPyModule_Add()\nor PyModule_AddObjectRef()\nfunctions are recommended, since it is easy to introduce reference leaks by misusing the PyModule_AddObject()\nfunction.\nNote\nUnlike other functions that steal references,\nPyModule_AddObject()\nonly releases the reference to value on success. This means that its return value must be checked, and calling code must\nPy_XDECREF()\nvalue manually on error. Example usage:\nPyObject *obj = PyBytes_FromString(value); if (PyModule_AddObject(module, \"spam\", obj) < 0) { // If 'obj' is not NULL and PyModule_AddObject() failed, // 'obj' strong reference must be deleted with Py_XDECREF(). // If 'obj' is NULL, Py_XDECREF() does nothing. Py_XDECREF(obj); goto error; } // PyModule_AddObject() stole a reference to obj: // Py_XDECREF(obj) is not needed here.\nDeprecated since version 3.13:\nPyModule_AddObject()\nis soft deprecated.\n-\nint PyModule_AddIntConstant(PyObject *module, const char *name, long value)\u00b6\n- Part of the Stable ABI.\nAdd an integer constant to module as name. This convenience function can be used from the module\u2019s initialization function. Return\n-1\nwith an exception set on error, 0\non success. This is a convenience function that calls\nPyLong_FromLong()\nand PyModule_AddObjectRef()\n; see their documentation for details.\n-\nint PyModule_AddStringConstant(PyObject *module, const char *name, const char *value)\u00b6\n- Part of the Stable ABI.\nAdd a string constant to module as name. 
This convenience function can be used from the module\u2019s initialization function. The string value must be\nNULL\n-terminated. Return-1\nwith an exception set on error,0\non success.This is a convenience function that calls\nPyUnicode_InternFromString()\nandPyModule_AddObjectRef()\n; see their documentation for details.\n-\nPyModule_AddIntMacro(module, macro)\u00b6\nAdd an int constant to module. The name and the value are taken from macro. For example\nPyModule_AddIntMacro(module, AF_INET)\nadds the int constant AF_INET with the value of AF_INET to module. Return-1\nwith an exception set on error,0\non success.\n-\nPyModule_AddStringMacro(module, macro)\u00b6\nAdd a string constant to module.\n-\nint PyModule_AddType(PyObject *module, PyTypeObject *type)\u00b6\n- Part of the Stable ABI since version 3.10.\nAdd a type object to module. The type object is finalized by calling internally\nPyType_Ready()\n. The name of the type object is taken from the last component oftp_name\nafter dot. Return-1\nwith an exception set on error,0\non success.Added in version 3.9.\n-\nint PyModule_AddFunctions(PyObject *module, PyMethodDef *functions)\u00b6\n- Part of the Stable ABI since version 3.7.\nAdd the functions from the\nNULL\nterminated functions array to module. Refer to thePyMethodDef\ndocumentation for details on individual entries (due to the lack of a shared module namespace, module level \u201cfunctions\u201d implemented in C typically receive the module as their first parameter, making them similar to instance methods on Python classes).This function is called automatically when creating a module from\nPyModuleDef\n(such as when using Multi-phase initialization,PyModule_Create\n, orPyModule_FromDefAndSpec\n). 
Some module authors may prefer defining functions in multiple PyMethodDef\narrays; in that case they should call this function directly. The functions array must be statically allocated (or otherwise guaranteed to outlive the module object).\nAdded in version 3.5.\n-\nint PyModule_SetDocString(PyObject *module, const char *docstring)\u00b6\n- Part of the Stable ABI since version 3.7.\nSet the docstring for module to docstring. This function is called automatically when creating a module from\nPyModuleDef\n(such as when using Multi-phase initialization, PyModule_Create\n, or PyModule_FromDefAndSpec\n). Return\n0\non success. Return -1\nwith an exception set on error. Added in version 3.5.\n-\nint PyUnstable_Module_SetGIL(PyObject *module, void *gil)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nIndicate that module does or does not support running without the global interpreter lock (GIL), using one of the values from\nPy_mod_gil\n. It must be called during module\u2019s initialization function when using Legacy single-phase initialization. If this function is not called during module initialization, the import machinery assumes the module does not support running without the GIL. This function is only available in Python builds configured with --disable-gil\n. Return -1\nwith an exception set on error, 0\non success. Added in version 3.13.\nModule lookup (single-phase initialization)\u00b6\nThe legacy single-phase initialization scheme creates singleton modules that can be looked up in the context of the current interpreter. This allows the module object to be retrieved later with only a reference to the module definition.\nThese functions will not work on modules created using multi-phase initialization, since multiple such modules can be created from a single definition.\n-\nPyObject *PyState_FindModule(PyModuleDef *def)\u00b6\n- Return value: Borrowed reference. 
Part of the Stable ABI.\nReturns the module object that was created from def for the current interpreter. This method requires that the module object has been attached to the interpreter state with\nPyState_AddModule()\nbeforehand. In case the corresponding module object is not found or has not been attached to the interpreter state yet, it returnsNULL\n.\n-\nint PyState_AddModule(PyObject *module, PyModuleDef *def)\u00b6\n- Part of the Stable ABI since version 3.3.\nAttaches the module object passed to the function to the interpreter state. This allows the module object to be accessible via\nPyState_FindModule()\n.Only effective on modules created using single-phase initialization.\nPython calls\nPyState_AddModule\nautomatically after importing a module that uses single-phase initialization, so it is unnecessary (but harmless) to call it from module initialization code. An explicit call is needed only if the module\u2019s own init code subsequently callsPyState_FindModule\n. The function is mainly intended for implementing alternative import mechanisms (either by calling it directly, or by referring to its implementation for details of the required state updates).If a module was attached previously using the same def, it is replaced by the new module.\nThe caller must have an attached thread state.\nReturn\n-1\nwith an exception set on error,0\non success.Added in version 3.3.\n-\nint PyState_RemoveModule(PyModuleDef *def)\u00b6\n- Part of the Stable ABI since version 3.3.\nRemoves the module object created from def from the interpreter state. 
Return\n-1\nwith an exception set on error,0\non success.The caller must have an attached thread state.\nAdded in version 3.3.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 5867} +{"url": "https://docs.python.org/3/whatsnew/3.11.html", "title": "What\u2019s New In Python 3.11", "content": "What\u2019s New In Python 3.11\u00b6\n- Editor:\nPablo Galindo Salgado\nThis article explains the new features in Python 3.11, compared to 3.10. Python 3.11 was released on October 24, 2022. For full details, see the changelog.\nSummary \u2013 Release highlights\u00b6\nPython 3.11 is between 10-60% faster than Python 3.10. On average, we measured a 1.25x speedup on the standard benchmark suite. See Faster CPython for details.\nNew syntax features:\nNew built-in features:\nNew standard library modules:\nInterpreter improvements:\nNew\n-P\ncommand line option andPYTHONSAFEPATH\nenvironment variable to disable automatically prepending potentially unsafe paths tosys.path\nNew typing features:\nImportant deprecations, removals and restrictions:\nPEP 594: Many legacy standard library modules have been deprecated and will be removed in Python 3.13\nNew Features\u00b6\nPEP 657: Fine-grained error locations in tracebacks\u00b6\nWhen printing tracebacks, the interpreter will now point to the exact expression that caused the error, instead of just the line. For example:\nTraceback (most recent call last):\nFile \"distance.py\", line 11, in \nprint(manhattan_distance(p1, p2))\n^^^^^^^^^^^^^^^^^^^^^^^^^^\nFile \"distance.py\", line 6, in manhattan_distance\nreturn abs(point_1.x - point_2.x) + abs(point_1.y - point_2.y)\n^^^^^^^^^\nAttributeError: 'NoneType' object has no attribute 'x'\nPrevious versions of the interpreter would point to just the line, making it\nambiguous which object was None\n. 
These enhanced errors can also be helpful\nwhen dealing with deeply nested dict\nobjects and multiple function calls:\nTraceback (most recent call last):\nFile \"query.py\", line 37, in \nmagic_arithmetic('foo')\nFile \"query.py\", line 18, in magic_arithmetic\nreturn add_counts(x) / 25\n^^^^^^^^^^^^^\nFile \"query.py\", line 24, in add_counts\nreturn 25 + query_user(user1) + query_user(user2)\n^^^^^^^^^^^^^^^^^\nFile \"query.py\", line 32, in query_user\nreturn 1 + query_count(db, response['a']['b']['c']['user'], retry=True)\n~~~~~~~~~~~~~~~~~~^^^^^\nTypeError: 'NoneType' object is not subscriptable\nAs well as complex arithmetic expressions:\nTraceback (most recent call last):\nFile \"calculation.py\", line 54, in \nresult = (x / y / z) * (a / b / c)\n~~~~~~^~~\nZeroDivisionError: division by zero\nAdditionally, the information used by the enhanced traceback feature is made available via a general API, that can be used to correlate bytecode instructions with source code location. This information can be retrieved using:\nThe\ncodeobject.co_positions()\nmethod in Python.The\nPyCode_Addr2Location()\nfunction in the C API.\nSee PEP 657 for more details. 
(Contributed by Pablo Galindo, Batuhan Taskaya and Ammar Askar in bpo-43950.)\nNote\nThis feature requires storing column positions in Code Objects,\nwhich may result in a small increase in interpreter memory usage\nand disk usage for compiled Python files.\nTo avoid storing the extra information\nand deactivate printing the extra traceback information,\nuse the -X no_debug_ranges\ncommand line option\nor the PYTHONNODEBUGRANGES\nenvironment variable.\nPEP 654: Exception Groups and except*\n\u00b6\nPEP 654 introduces language features that enable a program\nto raise and handle multiple unrelated exceptions simultaneously.\nThe builtin types ExceptionGroup\nand BaseExceptionGroup\nmake it possible to group exceptions and raise them together,\nand the new except*\nsyntax generalizes\nexcept\nto match subgroups of exception groups.\nSee PEP 654 for more details.\n(Contributed by Irit Katriel in bpo-45292. PEP written by Irit Katriel, Yury Selivanov and Guido van Rossum.)\nPEP 678: Exceptions can be enriched with notes\u00b6\nThe add_note()\nmethod is added to BaseException\n.\nIt can be used to enrich exceptions with context information\nthat is not available at the time when the exception is raised.\nThe added notes appear in the default traceback.\nSee PEP 678 for more details.\n(Contributed by Irit Katriel in bpo-45607. PEP written by Zac Hatfield-Dodds.)\nWindows py.exe\nlauncher improvements\u00b6\nThe copy of the Python install manager included with Python 3.11 has been significantly\nupdated. It now supports company/tag syntax as defined in PEP 514 using the\n-V:/\nargument instead of the limited -.\n.\nThis allows launching distributions other than PythonCore\n,\nthe one hosted on python.org.\nWhen using -V:\nselectors, either company or tag can be omitted, but all\ninstalls will be searched. 
For example, -V:OtherPython/\nwill select the\n\u201cbest\u201d tag registered for OtherPython\n, while -V:3.11\nor -V:/3.11\nwill select the \u201cbest\u201d distribution with tag 3.11\n.\nWhen using the legacy -\n, -.\n,\n--\nor -.-\narguments,\nall existing behaviour should be preserved from past versions,\nand only releases from PythonCore\nwill be selected.\nHowever, the -64\nsuffix now implies \u201cnot 32-bit\u201d (not necessarily x86-64),\nas there are multiple supported 64-bit platforms.\n32-bit runtimes are detected by checking the runtime\u2019s tag for a -32\nsuffix.\nAll releases of Python since 3.5 have included this in their 32-bit builds.\nOther Language Changes\u00b6\nStarred unpacking expressions can now be used in\nfor\nstatements. (See bpo-46725 for more details.)Asynchronous comprehensions are now allowed inside comprehensions in asynchronous functions. Outer comprehensions implicitly become asynchronous in this case. (Contributed by Serhiy Storchaka in bpo-33346.)\nA\nTypeError\nis now raised instead of anAttributeError\ninwith\nstatements andcontextlib.ExitStack.enter_context()\nfor objects that do not support the context manager protocol, and inasync with\nstatements andcontextlib.AsyncExitStack.enter_async_context()\nfor objects not supporting the asynchronous context manager protocol. (Contributed by Serhiy Storchaka in bpo-12022 and bpo-44471.)Added\nobject.__getstate__()\n, which provides the default implementation of the__getstate__()\nmethod.copy\ning andpickle\ning instances of subclasses of builtin typesbytearray\n,set\n,frozenset\n,collections.OrderedDict\n,collections.deque\n,weakref.WeakSet\n, anddatetime.tzinfo\nnow copies and pickles instance attributes implemented as slots. This change has an unintended side effect: It trips up a small minority of existing Python projects not expectingobject.__getstate__()\nto exist. See the later comments on gh-70766 for discussions of what workarounds such code may need. 
(Contributed by Serhiy Storchaka in bpo-26579.)\nAdded a\n-P\ncommand line option and aPYTHONSAFEPATH\nenvironment variable, which disable the automatic prepending tosys.path\nof the script\u2019s directory when running a script, or the current directory when using-c\nand-m\n. This ensures only stdlib and installed modules are picked up byimport\n, and avoids unintentionally or maliciously shadowing modules with those in a local (and typically user-writable) directory. (Contributed by Victor Stinner in gh-57684.)A\n\"z\"\noption was added to the Format Specification Mini-Language that coerces negative to positive zero after rounding to the format precision. See PEP 682 for more details. (Contributed by John Belmonte in gh-90153.)Bytes are no longer accepted on\nsys.path\n. Support broke sometime between Python 3.2 and 3.6, with no one noticing until after Python 3.10.0 was released. In addition, bringing back support would be problematic due to interactions between-b\nandsys.path_importer_cache\nwhen there is a mixture ofstr\nandbytes\nkeys. (Contributed by Thomas Grainger in gh-91181.)\nOther CPython Implementation Changes\u00b6\nThe special methods\n__complex__()\nforcomplex\nand__bytes__()\nforbytes\nare implemented to support thetyping.SupportsComplex\nandtyping.SupportsBytes\nprotocols. (Contributed by Mark Dickinson and Donghee Na in bpo-24234.)siphash13\nis added as a new internal hashing algorithm. It has similar security properties assiphash24\n, but it is slightly faster for long inputs.str\n,bytes\n, and some other types now use it as the default algorithm forhash()\n. PEP 552 hash-based .pyc files now usesiphash13\ntoo. (Contributed by Inada Naoki in bpo-29410.)When an active exception is re-raised by a\nraise\nstatement with no parameters, the traceback attached to this exception is now alwayssys.exc_info()[1].__traceback__\n. This means that changes made to the traceback in the currentexcept\nclause are reflected in the re-raised exception. 
(Contributed by Irit Katriel in bpo-45711.)\n- The interpreter state\u2019s representation of handled exceptions (aka exc_info or _PyErr_StackItem) now has only the exc_value field; exc_type and exc_traceback have been removed, as they can be derived from exc_value. (Contributed by Irit Katriel in bpo-45711.)\n- A new command line option, AppendPath, has been added for the Windows installer. It behaves similarly to PrependPath, but appends the install and scripts directories instead of prepending them. (Contributed by Bastian Neuburger in bpo-44934.)\n- The PyConfig.module_search_paths_set field must now be set to 1 for initialization to use PyConfig.module_search_paths to initialize sys.path. Otherwise, initialization will recalculate the path and replace any values added to module_search_paths.\n- The output of the --help option now fits in 50 lines/80 columns. Information about Python environment variables and -X options is now available using the respective --help-env and --help-xoptions flags, and with the new --help-all. (Contributed by \u00c9ric Araujo in bpo-46142.)\n- Converting between int and str in bases other than 2 (binary), 4, 8 (octal), 16 (hexadecimal), or 32, such as base 10 (decimal), now raises a ValueError if the number of digits in string form is above a limit, to avoid potential denial-of-service attacks due to the algorithmic complexity. This is a mitigation for CVE-2020-10735. The limit can be configured or disabled by environment variable, command line flag, or sys APIs. See the integer string conversion length limitation documentation. The default limit is 4300 digits in string form.\nNew Modules\u00b6\nImproved Modules\u00b6\nasyncio\u00b6\n- Added the TaskGroup class, an asynchronous context manager holding a group of tasks that will wait for all of them upon exit. For new code this is recommended over using create_task() and gather() directly.
(Contributed by Yury Selivanov and others in gh-90908.)\n- Added timeout(), an asynchronous context manager for setting a timeout on asynchronous operations. For new code this is recommended over using wait_for() directly. (Contributed by Andrew Svetlov in gh-90927.)\n- Added the Runner class, which exposes the machinery used by run(). (Contributed by Andrew Svetlov in gh-91218.)\n- Added the Barrier class to the synchronization primitives in the asyncio library, and the related BrokenBarrierError exception. (Contributed by Yves Duprat and Andrew Svetlov in gh-87518.)\n- Added keyword argument all_errors to asyncio.loop.create_connection() so that multiple connection errors can be raised as an ExceptionGroup.\n- Added the asyncio.StreamWriter.start_tls() method for upgrading existing stream-based connections to TLS. (Contributed by Ian Good in bpo-34975.)\n- Added raw datagram socket functions to the event loop: sock_sendto(), sock_recvfrom() and sock_recvfrom_into(). These have implementations in SelectorEventLoop and ProactorEventLoop. (Contributed by Alex Gr\u00f6nholm in bpo-46805.)\n- Added cancelling() and uncancel() methods to Task. These are primarily intended for internal use, notably by TaskGroup.\ncontextlib\u00b6\ndataclasses\u00b6\ndatetime\u00b6\n- Add datetime.UTC, a convenience alias for datetime.timezone.utc. (Contributed by Kabir Kwatra in gh-91973.)\n- datetime.date.fromisoformat(), datetime.time.fromisoformat() and datetime.datetime.fromisoformat() can now be used to parse most ISO 8601 formats (barring only those that support fractional hours and minutes).
(Contributed by Paul Ganssle in gh-80010.)\nenum\u00b6\n- Renamed EnumMeta to EnumType (EnumMeta kept as an alias).\n- Added StrEnum, with members that can be used as (and must be) strings.\n- Added ReprEnum, which only modifies the __repr__() of members while returning their literal values (rather than names) for __str__() and __format__() (used by str(), format() and f-strings).\n- Changed Enum.__format__() (the default for format(), str.format() and f-strings) to always produce the same result as Enum.__str__(): for enums inheriting from ReprEnum it will be the member\u2019s value; for all other enums it will be the enum and member name (e.g. Color.RED).\n- Added a new boundary class parameter to Flag enums and the FlagBoundary enum with its options, to control how to handle out-of-range flag values.\n- Added the verify() enum decorator and the EnumCheck enum with its options, to check enum classes against several specific constraints.\n- Added the member() and nonmember() decorators, to ensure the decorated object is/is not converted to an enum member.\n- Added the property() decorator, which works like property() except for enums. Use this instead of types.DynamicClassAttribute().\n- Added the global_enum() enum decorator, which adjusts __repr__() and __str__() to show values as members of their module rather than the enum class. For example, 're.ASCII' for the ASCII member of re.RegexFlag rather than 'RegexFlag.ASCII'.\n- Enhanced Flag to support len(), iteration and in/not in on its members. For example, the following now holds: len(AFlag(3)) == 2 and list(AFlag(3)) == [AFlag.ONE, AFlag.TWO].\n- Changed Enum and Flag so that members are now defined before __init_subclass__() is called; dir() now includes methods, etc., from mixed-in data types.\n- Changed Flag to only consider primary values (powers of two) canonical, while composite values (3, 6, 10, etc.)
are considered aliases; inverted flags are coerced to their positive equivalent.\nfcntl\u00b6\n- On FreeBSD, the F_DUP2FD and F_DUP2FD_CLOEXEC flags are now supported; the former is equivalent to dup2 usage, while the latter additionally sets the FD_CLOEXEC flag.\nfractions\u00b6\nfunctools\u00b6\n- functools.singledispatch() now supports types.UnionType and typing.Union as annotations to the dispatch argument:\n>>> from functools import singledispatch\n>>> @singledispatch\n... def fun(arg, verbose=False):\n...     if verbose:\n...         print(\"Let me just say,\", end=\" \")\n...     print(arg)\n...\n>>> @fun.register\n... def _(arg: int | float, verbose=False):\n...     if verbose:\n...         print(\"Strength in numbers, eh?\", end=\" \")\n...     print(arg)\n...\n>>> from typing import Union\n>>> @fun.register\n... def _(arg: Union[list, set], verbose=False):\n...     if verbose:\n...         print(\"Enumerate this:\")\n...     for i, elem in enumerate(arg):\n...         print(i, elem)\n...\n(Contributed by Yurii Karabas in bpo-46014.)\ngzip\u00b6\n- The gzip.compress() function is now faster when used with the mtime=0 argument, as it delegates the compression entirely to a single zlib.compress() operation. There is one side effect of this change: the gzip file header contains an \u201cOS\u201d byte. That byte was traditionally always set to a value of 255, representing \u201cunknown\u201d, by the gzip module. Now, when using compress() with mtime=0, it may be set to a different value by the underlying zlib C library Python was linked against. (See gh-112346 for details on the side effect.)\nhashlib\u00b6\n- hashlib.blake2b() and hashlib.blake2s() now prefer libb2 over Python\u2019s vendored copy. (Contributed by Christian Heimes in bpo-47095.)\n- The internal _sha3 module with SHA3 and SHAKE algorithms now uses tiny_sha3 instead of the Keccak Code Package to reduce code and binary size. The hashlib module prefers optimized SHA3 and SHAKE implementations from OpenSSL. The change affects only installations without OpenSSL support.
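The gzip.compress() mtime=0 behaviour described in the gzip section above has a handy practical consequence worth illustrating: with the timestamp field zeroed, the output is deterministic. A minimal sketch (the payload is arbitrary):

```python
import gzip

# With mtime=0 the 4-byte MTIME header field is zeroed, so repeated calls
# produce byte-identical output (and, in 3.11+, use one zlib.compress() call).
a = gzip.compress(b"hello world", mtime=0)
b = gzip.compress(b"hello world", mtime=0)
assert a == b
assert gzip.decompress(a) == b"hello world"
```
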
(Contributed by Christian Heimes in bpo-47098.)\n- Add hashlib.file_digest(), a helper function for efficient hashing of files or file-like objects. (Contributed by Christian Heimes in gh-89313.)\nIDLE and idlelib\u00b6\ninspect\u00b6\n- Add getmembers_static() to return all members without triggering dynamic lookup via the descriptor protocol. (Contributed by Weipeng Hong in bpo-30533.)\n- Add ismethodwrapper() for checking if the type of an object is a MethodWrapperType. (Contributed by Hakan \u00c7elik in bpo-29418.)\n- Change the frame-related functions in the inspect module to return new FrameInfo and Traceback class instances (backwards compatible with the previous named-tuple-like interfaces) that include the extended PEP 657 position information (end line number, column and end column). (Contributed by Pablo Galindo in gh-88116.)\nlocale\u00b6\n- Add locale.getencoding() to get the current locale encoding. It is similar to locale.getpreferredencoding(False) but ignores the Python UTF-8 Mode.\nlogging\u00b6\n- Added getLevelNamesMapping() to return a mapping from logging level names (e.g. 'CRITICAL') to the values of their corresponding Logging Levels (e.g. 50, by default). (Contributed by Andrei Kulakovin in gh-88024.)\n- Added a createSocket() method to SysLogHandler, to match SocketHandler.createSocket(). It is called automatically during handler initialization and when emitting an event, if there is no active socket. (Contributed by Kirill Pinchuk in gh-88457.)\nmath\u00b6\n- Add math.exp2(): return 2 raised to the power of x. (Contributed by Gideon Mitchell in bpo-45917.)\n- Add math.cbrt(): return the cube root of x. (Contributed by Ajith Ramachandran in bpo-44357.)\n- The behaviour of two math.pow() corner cases was changed, for consistency with the IEEE 754 specification. The operations math.pow(0.0, -math.inf) and math.pow(-0.0, -math.inf) now return inf. Previously they raised ValueError.
(Contributed by Mark Dickinson in bpo-44339.)\n- The math.nan value is now always available. (Contributed by Victor Stinner in bpo-46917.)\noperator\u00b6\n- A new function operator.call has been added, such that operator.call(obj, *args, **kwargs) == obj(*args, **kwargs). (Contributed by Antony Lee in bpo-44019.)\nos\u00b6\n- On Windows, os.urandom() now uses BCryptGenRandom() instead of CryptGenRandom(), which is deprecated. (Contributed by Donghee Na in bpo-44611.)\npathlib\u00b6\nre\u00b6\n- Atomic grouping ((?>...)) and possessive quantifiers (*+, ++, ?+, {m,n}+) are now supported in regular expressions. (Contributed by Jeffrey C. Jacobs and Serhiy Storchaka in bpo-433030.)\nshutil\u00b6\n- Add optional parameter dir_fd in shutil.rmtree(). (Contributed by Serhiy Storchaka in bpo-46245.)\nsocket\u00b6\n- Add CAN socket support for NetBSD. (Contributed by Thomas Klausner in bpo-30512.)\n- create_connection() has an option to raise, in case of failure to connect, an ExceptionGroup containing all errors instead of only raising the last error. (Contributed by Irit Katriel in bpo-29980.)\nsqlite3\u00b6\n- You can now disable the authorizer by passing None to set_authorizer(). (Contributed by Erlend E. Aasland in bpo-44491.)\n- Collation names passed to create_collation() can now contain any Unicode character. Collation names with invalid characters now raise UnicodeEncodeError instead of sqlite3.ProgrammingError. (Contributed by Erlend E. Aasland in bpo-44688.)\n- sqlite3 exceptions now include the SQLite extended error code as sqlite_errorcode and the SQLite error name as sqlite_errorname. (Contributed by Aviv Palivoda, Daniel Shahaf, and Erlend E. Aasland in bpo-16379 and bpo-24139.)\n- Add setlimit() and getlimit() to sqlite3.Connection for setting and getting SQLite limits on a per-connection basis. (Contributed by Erlend E. Aasland in bpo-45243.)\n- sqlite3 now sets sqlite3.threadsafety based on the default threading mode the underlying SQLite library has been compiled with.
(Contributed by Erlend E. Aasland in bpo-45613.)\n- sqlite3 C callbacks now use unraisable exceptions if callback tracebacks are enabled. Users can now register an unraisable hook handler to improve their debug experience. (Contributed by Erlend E. Aasland in bpo-45828.)\n- Fetch across rollback no longer raises InterfaceError. Instead we leave it to the SQLite library to handle these cases. (Contributed by Erlend E. Aasland in bpo-44092.)\n- Add serialize() and deserialize() to sqlite3.Connection for serializing and deserializing databases. (Contributed by Erlend E. Aasland in bpo-41930.)\n- Add create_window_function() to sqlite3.Connection for creating aggregate window functions. (Contributed by Erlend E. Aasland in bpo-34916.)\n- Add blobopen() to sqlite3.Connection. sqlite3.Blob allows incremental I/O operations on blobs. (Contributed by Aviv Palivoda and Erlend E. Aasland in bpo-24905.)\nstring\u00b6\n- Add get_identifiers() and is_valid() to string.Template, which respectively return all valid placeholders, and whether any invalid placeholders are present. (Contributed by Ben Kehoe in gh-90465.)\nsys\u00b6\n- sys.exc_info() now derives the type and traceback fields from the value (the exception instance), so when an exception is modified while it is being handled, the changes are reflected in the results of subsequent calls to exc_info(). (Contributed by Irit Katriel in bpo-45711.)\n- Add sys.exception(), which returns the active exception instance (equivalent to sys.exc_info()[1]). (Contributed by Irit Katriel in bpo-46328.)\n- Add the sys.flags.safe_path flag. (Contributed by Victor Stinner in gh-57684.)\nsysconfig\u00b6\n- Three new installation schemes (posix_venv, nt_venv and venv) were added and are used when Python creates new virtual environments or when it is running from a virtual environment.
The first two schemes (posix_venv and nt_venv) are OS-specific for non-Windows and Windows respectively; venv is essentially an alias for one of them, according to the OS Python runs on. This is useful for downstream distributors who modify sysconfig.get_preferred_scheme(). Third-party code that creates new virtual environments should use the new venv installation scheme to determine the paths, as venv does. (Contributed by Miro Hron\u010dok in bpo-45413.)\ntempfile\u00b6\n- SpooledTemporaryFile objects now fully implement the methods of io.BufferedIOBase or io.TextIOBase (depending on file mode). This lets them work correctly with APIs that expect file-like objects, such as compression modules. (Contributed by Carey Metcalfe in gh-70363.)\nthreading\u00b6\n- On Unix, if the sem_clockwait() function is available in the C library (glibc 2.30 and newer), the threading.Lock.acquire() method now uses the monotonic clock (time.CLOCK_MONOTONIC) for the timeout, rather than the system clock (time.CLOCK_REALTIME), so as not to be affected by system clock changes. (Contributed by Victor Stinner in bpo-41710.)\ntime\u00b6\n- On Unix, time.sleep() now uses the clock_nanosleep() or nanosleep() function, if available, which has a resolution of 1 nanosecond (10\u207b\u2079 seconds), rather than using select(), which has a resolution of 1 microsecond (10\u207b\u2076 seconds). (Contributed by Benjamin Sz\u0151ke and Victor Stinner in bpo-21302.)\n- On Windows 8.1 and newer, time.sleep() now uses a waitable timer based on high-resolution timers, which has a resolution of 100 nanoseconds (10\u207b\u2077 seconds). Previously, it had a resolution of 1 millisecond (10\u207b\u00b3 seconds). (Contributed by Benjamin Sz\u0151ke, Donghee Na, Eryk Sun and Victor Stinner in bpo-21302 and bpo-45429.)\ntkinter\u00b6\n- Added method info_patchlevel(), which returns the exact version of the Tcl library as a named tuple similar to sys.version_info.
(Contributed by Serhiy Storchaka in gh-91827.)\ntraceback\u00b6\n- Add traceback.StackSummary.format_frame_summary() to allow users to override which frames appear in the traceback, and how they are formatted. (Contributed by Ammar Askar in bpo-44569.)\n- Add traceback.TracebackException.print(), which prints the formatted TracebackException instance to a file. (Contributed by Irit Katriel in bpo-33809.)\ntyping\u00b6\nFor major changes, see New Features Related to Type Hints.\n- Add typing.assert_never() and typing.Never. typing.assert_never() is useful for asking a type checker to confirm that a line of code is not reachable. At runtime, it raises an AssertionError. (Contributed by Jelle Zijlstra in gh-90633.)\n- Add typing.reveal_type(). This is useful for asking a type checker what type it has inferred for a given expression. At runtime it prints the type of the received value. (Contributed by Jelle Zijlstra in gh-90572.)\n- Add typing.assert_type(). This is useful for asking a type checker to confirm that the type it has inferred for a given expression matches the given type. At runtime it simply returns the received value. (Contributed by Jelle Zijlstra in gh-90638.)\n- typing.TypedDict types can now be generic. (Contributed by Samodya Abeysiriwardane in gh-89026.)\n- NamedTuple types can now be generic. (Contributed by Serhiy Storchaka in bpo-43923.)\n- Allow subclassing of typing.Any. This is useful for avoiding type checker errors related to highly dynamic classes, such as mocks. (Contributed by Shantanu Jain in gh-91154.)\n- The typing.final() decorator now sets the __final__ attribute on the decorated object. (Contributed by Jelle Zijlstra in gh-90500.)\n- The typing.get_overloads() function can be used for introspecting the overloads of a function. typing.clear_overloads() can be used to clear all registered overloads of a function. (Contributed by Jelle Zijlstra in gh-89263.)\n- The __init__() method of Protocol subclasses is now preserved.
(Contributed by Adrian Garcia Badarasco in gh-88970.)\n- The representation of empty tuple types (Tuple[()]) is simplified. This affects introspection, e.g. get_args(Tuple[()]) now evaluates to () instead of ((),). (Contributed by Serhiy Storchaka in gh-91137.)\n- Loosen runtime requirements for type annotations by removing the callable check in the private typing._type_check function. (Contributed by Gregory Beauregard in gh-90802.)\n- typing.get_type_hints() now supports evaluating strings as forward references in PEP 585 generic aliases. (Contributed by Niklas Rosenstein in gh-85542.)\n- typing.get_type_hints() no longer adds Optional to parameters with None as a default. (Contributed by Nikita Sobolev in gh-90353.)\n- typing.get_type_hints() now supports evaluating bare stringified ClassVar annotations. (Contributed by Gregory Beauregard in gh-90711.)\n- typing.no_type_check() no longer modifies external classes and functions. It also now correctly marks classmethods as not to be type checked. (Contributed by Nikita Sobolev in gh-90729.)\nunicodedata\u00b6\n- The Unicode database has been updated to version 14.0.0. (Contributed by Benjamin Peterson in bpo-45190.)\nunittest\u00b6\n- Added methods enterContext() and enterClassContext() of class TestCase, method enterAsyncContext() of class IsolatedAsyncioTestCase, and function unittest.enterModuleContext(). (Contributed by Serhiy Storchaka in bpo-45046.)\nvenv\u00b6\n- When new Python virtual environments are created, the venv sysconfig installation scheme is used to determine the paths inside the environment. When Python runs in a virtual environment, the same installation scheme is the default. That means that downstream distributors can change the default sysconfig install scheme without changing the behavior of virtual environments. Third-party code that also creates new virtual environments should do the same.
(Contributed by Miro Hron\u010dok in bpo-45413.)\nwarnings\u00b6\n- warnings.catch_warnings() now accepts arguments for warnings.simplefilter(), providing a more concise way to locally ignore warnings or convert them to errors. (Contributed by Zac Hatfield-Dodds in bpo-47074.)\nzipfile\u00b6\n- Added support for specifying member name encoding for reading metadata in a ZipFile\u2019s directory and file headers. (Contributed by Stephen J. Turnbull and Serhiy Storchaka in bpo-28080.)\n- Added ZipFile.mkdir() for creating new directories inside ZIP archives. (Contributed by Sam Ezeh in gh-49083.)\n- Added stem, suffix and suffixes to zipfile.Path. (Contributed by Miguel Brito in gh-88261.)\nOptimizations\u00b6\nThis section covers specific optimizations independent of the Faster CPython project, which is covered in its own section.\n- The compiler now optimizes simple printf-style % formatting on string literals containing only the format codes %s, %r and %a, making it as fast as a corresponding f-string expression. (Contributed by Serhiy Storchaka in bpo-28307.)\n- Integer division (//) is better tuned for optimization by compilers. It is now around 20% faster on x86-64 when dividing an int by a value smaller than 2**30. (Contributed by Gregory P. Smith and Tim Peters in gh-90564.)\n- sum() is now nearly 30% faster for integers smaller than 2**30. (Contributed by Stefan Behnel in gh-68264.)\n- Resizing lists is streamlined for the common case, speeding up list.append() by \u224815% and simple list comprehensions by up to 20-30%. (Contributed by Dennis Sweeney in gh-91165.)\n- Dictionaries don\u2019t store hash values when all keys are Unicode objects, decreasing dict size. For example, sys.getsizeof(dict.fromkeys(\"abcdefg\")) is reduced from 352 bytes to 272 bytes (23% smaller) on 64-bit platforms.
(Contributed by Inada Naoki in bpo-46845.)\n- Using asyncio.DatagramProtocol is now orders of magnitude faster when transferring large files over UDP, with speeds over 100 times higher for a \u224860 MiB file. (Contributed by msoxzw in gh-91487.)\n- The math functions comb() and perm() are now \u224810 times faster for large arguments (with a larger speedup for larger k). (Contributed by Serhiy Storchaka in bpo-37295.)\n- The statistics functions mean(), variance() and stdev() now consume iterators in one pass rather than converting them to a list first. This is twice as fast and can save substantial memory. (Contributed by Raymond Hettinger in gh-90415.)\n- unicodedata.normalize() now normalizes pure-ASCII strings in constant time. (Contributed by Donghee Na in bpo-44987.)\nFaster CPython\u00b6\nCPython 3.11 is an average of 25% faster than CPython 3.10 as measured with the pyperformance benchmark suite, when compiled with GCC on Ubuntu Linux. Depending on your workload, the overall speedup could be 10-60%.\nThis project focuses on two major areas in Python: Faster Startup and Faster Runtime. Optimizations not covered by this project are listed separately under Optimizations.\nFaster Startup\u00b6\nFrozen imports / Static code objects\u00b6\nPython caches bytecode in the __pycache__ directory to speed up module loading.\nPreviously in 3.10, Python module execution looked like this:\nRead __pycache__ -> Unmarshal -> Heap allocated code object -> Evaluate\nIn Python 3.11, the core modules essential for Python startup are \u201cfrozen\u201d. This means that their code objects (and bytecode) are statically allocated by the interpreter. This reduces the steps in the module execution process to:\nStatically allocated code object -> Evaluate\nInterpreter startup is now 10-15% faster in Python 3.11.
This has a big impact for short-running programs using Python.\n(Contributed by Eric Snow, Guido van Rossum and Kumar Aditya in many issues.)\nFaster Runtime\u00b6\nCheaper, lazy Python frames\u00b6\nPython frames, which hold execution information, are created whenever Python calls a Python function. The following are new frame optimizations:\n- Streamlined the frame creation process.\n- Avoided memory allocation by generously re-using frame space on the C stack.\n- Streamlined the internal frame struct to contain only essential information. Frames previously held extra debugging and memory management information.\nOld-style frame objects are now created only when requested by debuggers or by Python introspection functions such as sys._getframe() and inspect.currentframe(). For most user code, no frame objects are created at all. As a result, nearly all Python function calls have sped up significantly. We measured a 3-7% speedup in pyperformance.\n(Contributed by Mark Shannon in bpo-44590.)\nInlined Python function calls\u00b6\nDuring a Python function call, Python will call an evaluating C function to interpret that function\u2019s code. This effectively limits pure Python recursion to what\u2019s safe for the C stack.\nIn 3.11, when CPython detects Python code calling another Python function, it sets up a new frame and \u201cjumps\u201d to the new code inside the new frame. This avoids calling the C interpreting function altogether.\nMost Python function calls now consume no C stack space, speeding them up. In simple recursive functions like fibonacci or factorial, we observed a 1.7x speedup. This also means recursive functions can recurse significantly deeper (if the user increases the recursion limit with sys.setrecursionlimit()). We measured a 1-3% improvement in pyperformance.\n(Contributed by Pablo Galindo and Mark Shannon in bpo-45256.)\nPEP 659: Specializing Adaptive Interpreter\u00b6\nPEP 659 is one of the key parts of the Faster CPython project.
The general idea is that while Python is a dynamic language, most code has regions where objects and types rarely change. This concept is known as type stability.\nAt runtime, Python will try to look for common patterns and type stability in the executing code. Python will then replace the current operation with a more specialized one. This specialized operation uses fast paths available only to those use cases/types, which generally outperform their generic counterparts. This also brings in another concept called inline caching, where Python caches the results of expensive operations directly in the bytecode.\nThe specializer will also combine certain common instruction pairs into one superinstruction, reducing the overhead during execution.\nPython will only specialize when it sees code that is \u201chot\u201d (executed multiple times). This prevents Python from wasting time on run-once code. Python can also de-specialize when code is too dynamic or when the use changes. Specialization is attempted periodically, and specialization attempts are not too expensive, allowing specialization to adapt to new circumstances.\n(PEP written by Mark Shannon, with ideas inspired by Stefan Brunthaler. See PEP 659 for more information. Implementation by Mark Shannon and Brandt Bucher, with additional help from Irit Katriel and Dennis Sweeney.)\n| Operation | Form | Specialization | Operation speedup (up to) | Contributor(s) |\n|---|---|---|---|---|\n| Binary operations | | Binary add, multiply and subtract for common types take custom fast paths. | 10% | Mark Shannon, Donghee Na, Brandt Bucher, Dennis Sweeney |\n| Subscript | | Subscripting container types is specialized; subscripting via custom classes is also inlined. | 10-25% | Irit Katriel, Mark Shannon |\n| Store subscript | | Similar to the subscripting specialization above. | 10-25% | Dennis Sweeney |\n| Calls | | Calls to common builtin (C) functions and types are specialized. | 20% | Mark Shannon, Ken Jin |\n| Load global variable | | The object\u2019s index in the globals/builtins namespace is cached. Loading globals and builtins requires zero namespace lookups. | | Mark Shannon |\n| Load attribute | | Similar to loading global variables. The attribute\u2019s index inside the class/object\u2019s namespace is cached. In most cases, attribute loading will require zero namespace lookups. | | Mark Shannon |\n| Load methods for call | | The actual address of the method is cached. Method loading now has no namespace lookups \u2013 even for classes with long inheritance chains. | 10-20% | Ken Jin, Mark Shannon |\n| Store attribute | | Similar to the load attribute optimization. | 2% in pyperformance | Mark Shannon |\n| Unpack Sequence | | Specialized for common containers. | 8% | Brandt Bucher |\nMisc\u00b6\n- Objects now require less memory due to lazily created object namespaces. Their namespace dictionaries now also share keys more freely. (Contributed by Mark Shannon in bpo-45340 and bpo-40116.)\n- \u201cZero-cost\u201d exceptions are implemented, eliminating the cost of try statements when no exception is raised. (Contributed by Mark Shannon in bpo-40222.)\n- A more concise representation of exceptions in the interpreter reduced the time required for catching an exception by about 10%. (Contributed by Irit Katriel in bpo-45711.)\n- re\u2019s regular expression matching engine has been partially refactored, and now uses computed gotos (or \u201cthreaded code\u201d) on supported platforms. As a result, Python 3.11 executes the pyperformance regular expression benchmarks up to 10% faster than Python 3.10. (Contributed by Brandt Bucher in gh-91404.)\nFAQ\u00b6\nHow should I write my code to utilize these speedups?\u00b6\nWrite Pythonic code that follows common best practices; you don\u2019t have to change your code.
The Faster CPython project optimizes for the common code patterns we observe.\nWill CPython 3.11 use more memory?\u00b6\nMaybe not; we don\u2019t expect memory use to be more than 20% higher than 3.10. This is offset by memory optimizations for frame objects and object dictionaries, as mentioned above.\nI don\u2019t see any speedups in my workload. Why?\u00b6\nCertain code won\u2019t have noticeable benefits. If your code spends most of its time on I/O operations, or already does most of its computation in a C extension library like NumPy, there won\u2019t be significant speedups. This project currently benefits pure-Python workloads the most.\nFurthermore, the pyperformance figures are a geometric mean. Even within the pyperformance benchmarks, certain benchmarks have slowed down slightly, while others have sped up by nearly 2x!\nIs there a JIT compiler?\u00b6\nNo. We\u2019re still exploring other optimizations.\nAbout\u00b6\nFaster CPython explores optimizations for CPython. The main team is funded by Microsoft to work on this full-time. Pablo Galindo Salgado is also funded by Bloomberg LP to work on the project part-time.
Finally, many contributors are volunteers from the community.\nCPython bytecode changes\u00b6\nThe bytecode now contains inline cache entries, which take the form of the newly-added CACHE instructions. Many opcodes expect to be followed by an exact number of caches, and instruct the interpreter to skip over them at runtime. Populated caches can look like arbitrary instructions, so great care should be taken when reading or modifying raw, adaptive bytecode containing quickened data.\nNew opcodes\u00b6\n- ASYNC_GEN_WRAP, RETURN_GENERATOR and SEND, used in generators and coroutines.\n- COPY_FREE_VARS, which avoids needing special caller-side code for closures.\n- JUMP_BACKWARD_NO_INTERRUPT, for use in certain loops where handling interrupts is undesirable.\n- MAKE_CELL, to create Cell Objects.\n- CHECK_EG_MATCH and PREP_RERAISE_STAR, to handle the new exception groups and except* added in PEP 654.\n- PUSH_EXC_INFO, for use in exception handlers.\n- RESUME, a no-op, for internal tracing, debugging and optimization checks.\nReplaced opcodes\u00b6\n| Replaced Opcode(s) | New Opcode(s) | Notes |\n|---|---|---|\n| BINARY_* INPLACE_* | | Replaced all numeric binary/in-place opcodes with a single opcode |\n| CALL_FUNCTION CALL_FUNCTION_KW CALL_METHOD | | Decouples argument shifting for methods from handling of keyword arguments; allows better specialization of calls |\n| DUP_TOP DUP_TOP_TWO ROT_TWO ROT_THREE ROT_FOUR ROT_N | | Stack manipulation instructions |\n| JUMP_IF_NOT_EXC_MATCH | | Now performs check but doesn\u2019t jump |\n| JUMP_ABSOLUTE POP_JUMP_IF_FALSE POP_JUMP_IF_TRUE | | See [3] |\n| SETUP_WITH SETUP_ASYNC_WITH | | |\nAll jump opcodes are now relative, including the existing JUMP_IF_TRUE_OR_POP and JUMP_IF_FALSE_OR_POP. The argument is now an offset from the current instruction rather than an absolute location.\nChanged/removed opcodes\u00b6\n- Changed MATCH_CLASS and MATCH_KEYS to no longer push an additional boolean value to indicate success/failure.
Instead, None is pushed on failure in place of the tuple of extracted values.\n- Changed opcodes that work with exceptions to reflect that exceptions are now represented as one item on the stack instead of three (see gh-89874).\n- Removed COPY_DICT_WITHOUT_KEYS, GEN_START, POP_BLOCK, SETUP_FINALLY and YIELD_FROM.\nDeprecated\u00b6\nThis section lists Python APIs that have been deprecated in Python 3.11. Deprecated C APIs are listed separately.\nLanguage/Builtins\u00b6\n- Chaining classmethod descriptors (introduced in bpo-19072) is now deprecated. It can no longer be used to wrap other descriptors such as property. The core design of this feature was flawed and caused a number of downstream problems. To \u201cpass-through\u201d a classmethod, consider using the __wrapped__ attribute that was added in Python 3.10. (Contributed by Raymond Hettinger in gh-89519.)\n- Octal escapes in string and bytes literals with values larger than 0o377 (255 in decimal) now produce a DeprecationWarning. In a future Python version, they will raise a SyntaxWarning and eventually a SyntaxError. (Contributed by Serhiy Storchaka in gh-81548.)\n- The delegation of int() to __trunc__() is now deprecated. Calling int(a) when type(a) implements __trunc__() but not __int__() or __index__() now raises a DeprecationWarning. (Contributed by Zackery Spytz in bpo-44977.)\nModules\u00b6\nPEP 594 led to the deprecation of the following modules, slated for removal in Python 3.13: aifc, audioop, cgi, cgitb, chunk, crypt, imghdr, mailcap, msilib, nis, nntplib, ossaudiodev, pipes, sndhdr, spwd, sunau, telnetlib, uu and xdrlib. (Contributed by Brett Cannon in bpo-47061 and Victor Stinner in gh-68966.)\n- The asynchat, asyncore and smtpd modules have been deprecated since at least Python 3.6. Their documentation and deprecation warnings have now been updated to note they will be removed in Python 3.12.
(Contributed by Hugo van Kemenade in bpo-47022.)The\nlib2to3\npackage and2to3\ntool are now deprecated and may not be able to parse Python 3.10 or newer. See PEP 617, introducing the new PEG parser, for details. (Contributed by Victor Stinner in bpo-40360.)Undocumented modules\nsre_compile\n,sre_constants\nandsre_parse\nare now deprecated. (Contributed by Serhiy Storchaka in bpo-47152.)\nStandard Library\u00b6\nThe following have been deprecated in\nconfigparser\nsince Python 3.2. Their deprecation warnings have now been updated to note they will be removed in Python 3.12:the\nconfigparser.SafeConfigParser\nclassthe\nconfigparser.ParsingError.filename\npropertythe\nconfigparser.RawConfigParser.readfp()\nmethod\n(Contributed by Hugo van Kemenade in bpo-45173.)\nconfigparser.LegacyInterpolation\nhas been deprecated in the docstring since Python 3.2, and is not listed in theconfigparser\ndocumentation. It now emits aDeprecationWarning\nand will be removed in Python 3.13. Useconfigparser.BasicInterpolation\norconfigparser.ExtendedInterpolation\ninstead. (Contributed by Hugo van Kemenade in bpo-46607.)The older set of\nimportlib.resources\nfunctions were deprecated in favor of the replacements added in Python 3.9 and will be removed in a future Python version, due to not supporting resources located within package subdirectories:importlib.resources.contents()\nimportlib.resources.is_resource()\nimportlib.resources.open_binary()\nimportlib.resources.open_text()\nimportlib.resources.read_binary()\nimportlib.resources.read_text()\nimportlib.resources.path()\nThe\nlocale.getdefaultlocale()\nfunction is deprecated and will be removed in Python 3.15. Uselocale.setlocale()\n,locale.getpreferredencoding(False)\nandlocale.getlocale()\nfunctions instead. (Contributed by Victor Stinner in gh-90817.)The\nlocale.resetlocale()\nfunction is deprecated and will be removed in Python 3.13. Uselocale.setlocale(locale.LC_ALL, \"\")\ninstead. 
(Contributed by Victor Stinner in gh-90817.)Stricter rules will now be applied for numerical group references and group names in regular expressions. Only sequences of ASCII digits will now be accepted as a numerical reference, and the group name in\nbytes\npatterns and replacement strings can only contain ASCII letters, digits and underscores. For now, a deprecation warning is raised for syntax that violates these rules. (Contributed by Serhiy Storchaka in gh-91760.)In the\nre\nmodule, the re.template()\nfunction and the correspondingre.TEMPLATE\nandre.T\nflags are deprecated, as they were undocumented and lacked an obvious purpose. They will be removed in Python 3.13. (Contributed by Serhiy Storchaka and Miro Hron\u010dok in gh-92728.)turtle.settiltangle()\nhas been deprecated since Python 3.1; it now emits a deprecation warning and will be removed in Python 3.13. Useturtle.tiltangle()\ninstead (it was earlier incorrectly marked as deprecated, and its docstring is now corrected). (Contributed by Hugo van Kemenade in bpo-45837.)typing.Text\n, which exists solely to provide compatibility support between Python 2 and Python 3 code, is now deprecated. Its removal is currently unplanned, but users are encouraged to usestr\ninstead wherever possible. (Contributed by Alex Waygood in gh-92332.)The keyword argument syntax for constructing\ntyping.TypedDict\ntypes is now deprecated. Support will be removed in Python 3.13. (Contributed by Jingchen Ye in gh-90224.)webbrowser.MacOSX\nis deprecated and will be removed in Python 3.13. It is untested, undocumented, and not used bywebbrowser\nitself. 
(Contributed by Donghee Na in bpo-42255.)The behavior of returning a value from a\nTestCase\nandIsolatedAsyncioTestCase\ntest methods (other than the defaultNone\nvalue) is now deprecated.Deprecated the following not-formally-documented\nunittest\nfunctions, scheduled for removal in Python 3.13:unittest.findTestCases()\nunittest.makeSuite()\nunittest.getTestCaseNames()\nUse\nTestLoader\nmethods instead:(Contributed by Erlend E. Aasland in bpo-5846.)\nunittest.TestProgram.usageExit()\nis marked deprecated, to be removed in 3.13. (Contributed by Carlos Dam\u00e1zio in gh-67048.)\nPending Removal in Python 3.12\u00b6\nThe following Python APIs have been deprecated in earlier Python releases, and will be removed in Python 3.12.\nC APIs pending removal are listed separately.\nThe\nasynchat\nmoduleThe\nasyncore\nmoduleThe\nimp\nmoduleThe\ntyping.io\nnamespaceThe\ntyping.re\nnamespacecgi.log()\nimportlib.find_loader()\nimportlib.abc.Loader.module_repr()\nimportlib.abc.MetaPathFinder.find_module()\nimportlib.abc.PathEntryFinder.find_loader()\nimportlib.abc.PathEntryFinder.find_module()\nimportlib.machinery.BuiltinImporter.find_module()\nimportlib.machinery.BuiltinLoader.module_repr()\nimportlib.machinery.FileFinder.find_loader()\nimportlib.machinery.FileFinder.find_module()\nimportlib.machinery.FrozenImporter.find_module()\nimportlib.machinery.FrozenLoader.module_repr()\nimportlib.machinery.PathFinder.find_module()\nimportlib.machinery.WindowsRegistryFinder.find_module()\nimportlib.util.module_for_loader()\nimportlib.util.set_loader_wrapper()\nimportlib.util.set_package_wrapper()\npkgutil.ImpImporter\npkgutil.ImpLoader\npathlib.Path.link_to()\nsqlite3.enable_shared_cache()\nsqlite3.OptimizedUnicode()\nPYTHONTHREADDEBUG\nenvironment variableThe following deprecated aliases in\nunittest\n:Deprecated alias\nMethod Name\nDeprecated 
in\nfailUnless\n3.1\nfailIf\n3.1\nfailUnlessEqual\n3.1\nfailIfEqual\n3.1\nfailUnlessAlmostEqual\n3.1\nfailIfAlmostEqual\n3.1\nfailUnlessRaises\n3.1\nassert_\n3.2\nassertEquals\n3.2\nassertNotEquals\n3.2\nassertAlmostEquals\n3.2\nassertNotAlmostEquals\n3.2\nassertRegexpMatches\n3.2\nassertRaisesRegexp\n3.2\nassertNotRegexpMatches\n3.5\nRemoved\u00b6\nThis section lists Python APIs that have been removed in Python 3.11.\nRemoved C APIs are listed separately.\nRemoved the\n@asyncio.coroutine()\ndecorator enabling legacy generator-based coroutines to be compatible withasync\n/await\ncode. The function has been deprecated since Python 3.8 and the removal was initially scheduled for Python 3.10. Useasync def\ninstead. (Contributed by Illia Volochii in bpo-43216.)Removed\nasyncio.coroutines.CoroWrapper\nused for wrapping legacy generator-based coroutine objects in the debug mode. (Contributed by Illia Volochii in bpo-43216.)Due to significant security concerns, the reuse_address parameter of\nasyncio.loop.create_datagram_endpoint()\n, disabled in Python 3.9, is now entirely removed. This is because of the behavior of the socket optionSO_REUSEADDR\nin UDP. (Contributed by Hugo van Kemenade in bpo-45129.)Removed the\nbinhex\nmodule, deprecated in Python 3.9. Also removed the related, similarly-deprecatedbinascii\nfunctions:binascii.a2b_hqx()\nbinascii.b2a_hqx()\nbinascii.rlecode_hqx()\nbinascii.rldecode_hqx()\nThe\nbinascii.crc_hqx()\nfunction remains available.(Contributed by Victor Stinner in bpo-45085.)\nRemoved the\ndistutils\nbdist_msi\ncommand deprecated in Python 3.9. Usebdist_wheel\n(wheel packages) instead. (Contributed by Hugo van Kemenade in bpo-45124.)Removed the\n__getitem__()\nmethods ofxml.dom.pulldom.DOMEventStream\n,wsgiref.util.FileWrapper\nandfileinput.FileInput\n, deprecated since Python 3.9. (Contributed by Hugo van Kemenade in bpo-45132.)Removed the deprecated\ngettext\nfunctionslgettext()\n,ldgettext()\n,lngettext()\nandldngettext()\n. 
Also removed thebind_textdomain_codeset()\nfunction, theNullTranslations.output_charset()\nandNullTranslations.set_output_charset()\nmethods, and the codeset parameter oftranslation()\nandinstall()\n, since they are only used for thel*gettext()\nfunctions. (Contributed by Donghee Na and Serhiy Storchaka in bpo-44235.)Removed from the\ninspect\nmodule:The\ngetargspec()\nfunction, deprecated since Python 3.0; useinspect.signature()\norinspect.getfullargspec()\ninstead.The\nformatargspec()\nfunction, deprecated since Python 3.5; use theinspect.signature()\nfunction or theinspect.Signature\nobject directly.The undocumented\nSignature.from_builtin()\nandSignature.from_function()\nmethods, deprecated since Python 3.5; use theSignature.from_callable()\nmethod instead.\n(Contributed by Hugo van Kemenade in bpo-45320.)\nRemoved the\n__class_getitem__()\nmethod frompathlib.PurePath\n, because it was not used and added by mistake in previous versions. (Contributed by Nikita Sobolev in bpo-46483.)Removed the\nMailmanProxy\nclass in thesmtpd\nmodule, as it is unusable without the externalmailman\npackage. (Contributed by Donghee Na in bpo-35800.)Removed the deprecated\nsplit()\nmethod of_tkinter.TkappType\n. (Contributed by Erlend E. Aasland in bpo-38371.)Removed namespace package support from\nunittest\ndiscovery. It was introduced in Python 3.4 but has been broken since Python 3.7. (Contributed by Inada Naoki in bpo-23882.)Removed the undocumented private\nfloat.__set_format__()\nmethod, previously known asfloat.__setformat__()\nin Python 3.7. Its docstring said: \u201cYou probably don\u2019t want to use this function. 
It exists mainly to be used in Python\u2019s test suite.\u201d (Contributed by Victor Stinner in bpo-46852.)The\n--experimental-isolated-subinterpreters\nconfigure flag (and correspondingEXPERIMENTAL_ISOLATED_SUBINTERPRETERS\nmacro) have been removed.Pynche \u2014 The Pythonically Natural Color and Hue Editor \u2014 has been moved out of\nTools/scripts\nand is being developed independently from the Python source tree.\nPorting to Python 3.11\u00b6\nThis section lists previously described changes and other bugfixes in the Python API that may require changes to your Python code.\nPorting notes for the C API are listed separately.\nopen()\n,io.open()\n,codecs.open()\nandfileinput.FileInput\nno longer accept'U'\n(\u201cuniversal newline\u201d) in the file mode. In Python 3, \u201cuniversal newline\u201d mode is used by default whenever a file is opened in text mode, and the'U'\nflag has been deprecated since Python 3.3. The newline parameter to these functions controls how universal newlines work. (Contributed by Victor Stinner in bpo-37330.)ast.AST\nnode positions are now validated when provided tocompile()\nand other related functions. If invalid positions are detected, aValueError\nwill be raised. (Contributed by Pablo Galindo in gh-93351)Prohibited passing non-\nconcurrent.futures.ThreadPoolExecutor\nexecutors toasyncio.loop.set_default_executor()\nfollowing a deprecation in Python 3.8. (Contributed by Illia Volochii in bpo-43234.)calendar\n: Thecalendar.LocaleTextCalendar\nandcalendar.LocaleHTMLCalendar\nclasses now uselocale.getlocale()\n, instead of usinglocale.getdefaultlocale()\n, if no locale is specified. (Contributed by Victor Stinner in bpo-46659.)The\npdb\nmodule now reads the.pdbrc\nconfiguration file with the'UTF-8'\nencoding. 
(Contributed by Srinivas Reddy Thatiparthy (\u0c36\u0c4d\u0c30\u0c40\u0c28\u0c3f\u0c35\u0c3e\u0c38\u0c4d \u0c30\u0c46\u0c21\u0c4d\u0c21\u0c3f \u0c24\u0c3e\u0c1f\u0c3f\u0c2a\u0c30\u0c4d\u0c24\u0c3f) in bpo-41137.)The population parameter of\nrandom.sample()\nmust be a sequence, and automatic conversion ofset\ns tolist\ns is no longer supported. Also, if the sample size is larger than the population size, aValueError\nis raised. (Contributed by Raymond Hettinger in bpo-40465.)The random optional parameter of\nrandom.shuffle()\nwas removed. It was previously an arbitrary random function to use for the shuffle; now,random.random()\n(its previous default) will always be used.In\nre\nRegular Expression Syntax, global inline flags (e.g.(?i)\n) can now only be used at the start of regular expressions. Using them elsewhere has been deprecated since Python 3.6. (Contributed by Serhiy Storchaka in bpo-47066.)In the\nre\nmodule, several long-standing bugs were fixed that, in rare cases, could cause capture groups to get the wrong result. Therefore, this could change the captured output in these cases. (Contributed by Ma Lin in bpo-35859.)\nBuild Changes\u00b6\nCPython now has PEP 11 Tier 3 support for cross compiling to the WebAssembly platforms Emscripten (\nwasm32-unknown-emscripten\n, i.e. Python in the browser) and WebAssembly System Interface (WASI) (wasm32-unknown-wasi\n). The effort is inspired by previous work like Pyodide. These platforms provide a limited subset of POSIX APIs; Python standard library features and modules related to networking, processes, threading, signals, mmap, and users/groups are not available or don\u2019t work. (Emscripten contributed by Christian Heimes and Ethan Smith in gh-84461 and WASI contributed by Christian Heimes in gh-90473; platforms promoted in gh-95085)Building CPython now requires:\nThe\nPy_NO_NAN\nmacro has been removed. Since CPython now requires IEEE 754 floats, NaN values are always available. 
(Contributed by Victor Stinner in bpo-46656.)The\ntkinter\npackage now requires Tcl/Tk version 8.5.12 or newer. (Contributed by Serhiy Storchaka in bpo-46996.)Build dependencies, compiler flags, and linker flags for most stdlib extension modules are now detected by configure. libffi, libnsl, libsqlite3, zlib, bzip2, liblzma, libcrypt, Tcl/Tk, and uuid flags are detected by pkg-config (when available).\ntkinter\nnow requires a pkg-config command to detect development settings for Tcl/Tk headers and libraries. (Contributed by Christian Heimes and Erlend Egeberg Aasland in bpo-45847, bpo-45747, and bpo-45763.)libpython is no longer linked against libcrypt. (Contributed by Mike Gilbert in bpo-45433.)\nCPython can now be built with the ThinLTO option via passing\nthin\nto--with-lto\n, i.e.--with-lto=thin\n. (Contributed by Donghee Na and Brett Holman in bpo-44340.)Freelists for object structs can now be disabled. A new configure option\n--without-freelists\ncan be used to disable all freelists except the empty tuple singleton. (Contributed by Christian Heimes in bpo-45522.)Modules/Setup\nandModules/makesetup\nhave been improved and tidied up. Extension modules can now be built throughmakesetup\n. All except some test modules can be linked statically into a main binary or library. (Contributed by Brett Cannon and Christian Heimes in bpo-45548, bpo-45570, bpo-45571, and bpo-43974.)Note\nUse the environment variables\nTCLTK_CFLAGS\nandTCLTK_LIBS\nto manually specify the location of Tcl/Tk headers and libraries. The configure options--with-tcltk-includes\nand--with-tcltk-libs\nhave been removed.On RHEL 7 and CentOS 7 the development packages do not provide\ntcl.pc\nandtk.pc\n; useTCLTK_LIBS=\"-ltk8.5 -ltkstub8.5 -ltcl8.5\"\n. The directoryMisc/rhel7\ncontains.pc\nfiles and instructions on how to build Python with RHEL 7\u2019s and CentOS 7\u2019s Tcl/Tk and OpenSSL.CPython will now use 30-bit digits by default for the Python\nint\nimplementation. 
Previously, the default was to use 30-bit digits on platforms withSIZEOF_VOID_P >= 8\n, and 15-bit digits otherwise. It\u2019s still possible to explicitly request use of 15-bit digits via either the--enable-big-digits\noption to the configure script or (for Windows) thePYLONG_BITS_IN_DIGIT\nvariable inPC/pyconfig.h\n, but this option may be removed at some point in the future. (Contributed by Mark Dickinson in bpo-45569.)\nC API Changes\u00b6\nNew Features\u00b6\nAdd a new\nPyType_GetName()\nfunction to get type\u2019s short name. (Contributed by Hai Shi in bpo-42035.)Add a new\nPyType_GetQualName()\nfunction to get type\u2019s qualified name. (Contributed by Hai Shi in bpo-42035.)Add new\nPyThreadState_EnterTracing()\nandPyThreadState_LeaveTracing()\nfunctions to the limited C API to suspend and resume tracing and profiling. (Contributed by Victor Stinner in bpo-43760.)Added the\nPy_Version\nconstant which bears the same value asPY_VERSION_HEX\n. (Contributed by Gabriele N. Tornetta in bpo-43931.)Py_buffer\nand APIs are now part of the limited API and the stable ABI:bf_getbuffer\nandbf_releasebuffer\ntype slots\n(Contributed by Christian Heimes in bpo-45459.)\nAdded the\nPyType_GetModuleByDef()\nfunction, used to get the module in which a method was defined, in cases where this information is not available directly (viaPyCMethod\n). (Contributed by Petr Viktorin in bpo-46613.)Add new functions to pack and unpack C double (serialize and deserialize):\nPyFloat_Pack2()\n,PyFloat_Pack4()\n,PyFloat_Pack8()\n,PyFloat_Unpack2()\n,PyFloat_Unpack4()\nandPyFloat_Unpack8()\n. (Contributed by Victor Stinner in bpo-46906.)Add new functions to get frame object attributes:\nPyFrame_GetBuiltins()\n,PyFrame_GetGenerator()\n,PyFrame_GetGlobals()\n,PyFrame_GetLasti()\n.Added two new functions to get and set the active exception instance:\nPyErr_GetHandledException()\nandPyErr_SetHandledException()\n. 
These are alternatives toPyErr_SetExcInfo()\nandPyErr_GetExcInfo()\nwhich work with the legacy 3-tuple representation of exceptions. (Contributed by Irit Katriel in bpo-46343.)Added the\nPyConfig.safe_path\nmember. (Contributed by Victor Stinner in gh-57684.)\nPorting to Python 3.11\u00b6\nSome macros have been converted to static inline functions to avoid macro pitfalls. The change should be mostly transparent to users, as the replacement functions will cast their arguments to the expected types to avoid compiler warnings due to static type checks. However, when the limited C API is set to >=3.11, these casts are not done, and callers will need to cast arguments to their expected types. See PEP 670 for more details. (Contributed by Victor Stinner and Erlend E. Aasland in gh-89653.)\nPyErr_SetExcInfo()\nno longer uses thetype\nandtraceback\narguments; the interpreter now derives those values from the exception instance (thevalue\nargument). The function still steals references of all three arguments. (Contributed by Irit Katriel in bpo-45711.)PyErr_GetExcInfo()\nnow derives thetype\nandtraceback\nfields of the result from the exception instance (thevalue\nfield). (Contributed by Irit Katriel in bpo-45711.)_frozen\nhas a newis_package\nfield to indicate whether or not the frozen module is a package. Previously, a negative value in thesize\nfield was the indicator. Now only non-negative values may be used forsize\n. (Contributed by Kumar Aditya in bpo-46608.)_PyFrameEvalFunction()\nnow takes_PyInterpreterFrame*\nas its second parameter, instead ofPyFrameObject*\n. See PEP 523 for more details of how to use this function pointer type.PyCode_New()\nandPyCode_NewWithPosOnlyArgs()\nnow take an additionalexception_table\nargument. Using these functions should be avoided, if at all possible. 
To get a custom code object: create a code object using the compiler, then get a modified version with thereplace\nmethod.PyCodeObject\nno longer has theco_code\n,co_varnames\n,co_cellvars\nandco_freevars\nfields. Instead, usePyCode_GetCode()\n,PyCode_GetVarnames()\n,PyCode_GetCellvars()\nandPyCode_GetFreevars()\nrespectively to access them via the C API. (Contributed by Brandt Bucher in bpo-46841 and Ken Jin in gh-92154 and gh-94936.)The old trashcan macros (\nPy_TRASHCAN_SAFE_BEGIN\n/Py_TRASHCAN_SAFE_END\n) are now deprecated. They should be replaced by the new macrosPy_TRASHCAN_BEGIN\nandPy_TRASHCAN_END\n.A tp_dealloc function that has the old macros, such as:\nstatic void mytype_dealloc(mytype *p) { PyObject_GC_UnTrack(p); Py_TRASHCAN_SAFE_BEGIN(p); ... Py_TRASHCAN_SAFE_END }\nshould migrate to the new macros as follows:\nstatic void mytype_dealloc(mytype *p) { PyObject_GC_UnTrack(p); Py_TRASHCAN_BEGIN(p, mytype_dealloc) ... Py_TRASHCAN_END }\nNote that\nPy_TRASHCAN_BEGIN\nhas a second argument which should be the deallocation function it is in.To support older Python versions in the same codebase, you can define the following macros and use them throughout the code (credit: these were copied from the\nmypy\ncodebase):#if PY_VERSION_HEX >= 0x03080000 # define CPy_TRASHCAN_BEGIN(op, dealloc) Py_TRASHCAN_BEGIN(op, dealloc) # define CPy_TRASHCAN_END(op) Py_TRASHCAN_END #else # define CPy_TRASHCAN_BEGIN(op, dealloc) Py_TRASHCAN_SAFE_BEGIN(op) # define CPy_TRASHCAN_END(op) Py_TRASHCAN_SAFE_END(op) #endif\nThe\nPyType_Ready()\nfunction now raises an error if a type is defined with thePy_TPFLAGS_HAVE_GC\nflag set but has no traverse function (PyTypeObject.tp_traverse\n). (Contributed by Victor Stinner in bpo-44263.)Heap types with the\nPy_TPFLAGS_IMMUTABLETYPE\nflag can now inherit the PEP 590 vectorcall protocol. Previously, this was only possible for static types. (Contributed by Erlend E. 
Aasland in bpo-43908)Since\nPy_TYPE()\nhas been changed to an inline static function,Py_TYPE(obj) = new_type\nmust be replaced withPy_SET_TYPE(obj, new_type)\n: see thePy_SET_TYPE()\nfunction (available since Python 3.9). For backward compatibility, this macro can be used:#if PY_VERSION_HEX < 0x030900A4 && !defined(Py_SET_TYPE) static inline void _Py_SET_TYPE(PyObject *ob, PyTypeObject *type) { ob->ob_type = type; } #define Py_SET_TYPE(ob, type) _Py_SET_TYPE((PyObject*)(ob), type) #endif\n(Contributed by Victor Stinner in bpo-39573.)\nSince\nPy_SIZE()\nhas been changed to an inline static function,Py_SIZE(obj) = new_size\nmust be replaced withPy_SET_SIZE(obj, new_size)\n: see thePy_SET_SIZE()\nfunction (available since Python 3.9). For backward compatibility, this macro can be used:#if PY_VERSION_HEX < 0x030900A4 && !defined(Py_SET_SIZE) static inline void _Py_SET_SIZE(PyVarObject *ob, Py_ssize_t size) { ob->ob_size = size; } #define Py_SET_SIZE(ob, size) _Py_SET_SIZE((PyVarObject*)(ob), size) #endif\n(Contributed by Victor Stinner in bpo-39573.)\n\nno longer includes the header files\n,\n,\nand\nwhen thePy_LIMITED_API\nmacro is set to0x030b0000\n(Python 3.11) or higher. C extensions should explicitly include the header files after#include \n. (Contributed by Victor Stinner in bpo-45434.)The non-limited API files\ncellobject.h\n,classobject.h\n,code.h\n,context.h\n,funcobject.h\n,genobject.h\nandlongintrepr.h\nhave been moved to theInclude/cpython\ndirectory. Moreover, theeval.h\nheader file was removed. These files must not be included directly, as they are already included inPython.h\n: Include Files. If they have been included directly, consider includingPython.h\ninstead. (Contributed by Victor Stinner in bpo-35134.)The\nPyUnicode_CHECK_INTERNED()\nmacro has been excluded from the limited C API. It was never usable there, because it used internal structures which are not available in the limited C API. 
(Contributed by Victor Stinner in bpo-46007.)The following frame functions and type are now directly available with\n#include \n, it\u2019s no longer needed to add#include \n:(Contributed by Victor Stinner in gh-93937.)\nThe\nPyFrameObject\nstructure members have been removed from the public C API.While the documentation notes that the\nPyFrameObject\nfields are subject to change at any time, they have been stable for a long time and were used in several popular extensions.In Python 3.11, the frame struct was reorganized to allow performance optimizations. Some fields were removed entirely, as they were details of the old implementation.\nPyFrameObject\nfields:f_back\n: usePyFrame_GetBack()\n.f_blockstack\n: removed.f_builtins\n: usePyFrame_GetBuiltins()\n.f_code\n: usePyFrame_GetCode()\n.f_gen\n: usePyFrame_GetGenerator()\n.f_globals\n: usePyFrame_GetGlobals()\n.f_iblock\n: removed.f_lasti\n: usePyFrame_GetLasti()\n. Code usingf_lasti\nwithPyCode_Addr2Line()\nshould usePyFrame_GetLineNumber()\ninstead; it may be faster.f_lineno\n: usePyFrame_GetLineNumber()\nf_locals\n: usePyFrame_GetLocals()\n.f_stackdepth\n: removed.f_state\n: no public API (renamed tof_frame.f_state\n).f_trace\n: no public API.f_trace_lines\n: usePyObject_GetAttrString((PyObject*)frame, \"f_trace_lines\")\n.f_trace_opcodes\n: usePyObject_GetAttrString((PyObject*)frame, \"f_trace_opcodes\")\n.f_localsplus\n: no public API (renamed tof_frame.localsplus\n).f_valuestack\n: removed.\nThe Python frame object is now created lazily. A side effect is that the\nf_back\nmember must not be accessed directly, since its value is now also computed lazily. ThePyFrame_GetBack()\nfunction must be called instead.Debuggers that accessed the\nf_locals\ndirectly must callPyFrame_GetLocals()\ninstead. They no longer need to callPyFrame_FastToLocalsWithError()\norPyFrame_LocalsToFast()\n, in fact they should not call those functions. 
The necessary updating of the frame is now managed by the virtual machine.Code defining\nPyFrame_GetCode()\non Python 3.8 and older:#if PY_VERSION_HEX < 0x030900B1 static inline PyCodeObject* PyFrame_GetCode(PyFrameObject *frame) { Py_INCREF(frame->f_code); return frame->f_code; } #endif\nCode defining\nPyFrame_GetBack()\non Python 3.8 and older:#if PY_VERSION_HEX < 0x030900B1 static inline PyFrameObject* PyFrame_GetBack(PyFrameObject *frame) { Py_XINCREF(frame->f_back); return frame->f_back; } #endif\nOr use the pythoncapi_compat project to get these two functions on older Python versions.\nChanges of the\nPyThreadState\nstructure members:frame\n: removed, usePyThreadState_GetFrame()\n(function added to Python 3.9 by bpo-40429). Warning: the function returns a strong reference, need to callPy_XDECREF()\n.tracing\n: changed, usePyThreadState_EnterTracing()\nandPyThreadState_LeaveTracing()\n(functions added to Python 3.11 by bpo-43760).recursion_depth\n: removed, use(tstate->recursion_limit - tstate->recursion_remaining)\ninstead.stackcheck_counter\n: removed.\nCode defining\nPyThreadState_GetFrame()\non Python 3.8 and older:#if PY_VERSION_HEX < 0x030900B1 static inline PyFrameObject* PyThreadState_GetFrame(PyThreadState *tstate) { Py_XINCREF(tstate->frame); return tstate->frame; } #endif\nCode defining\nPyThreadState_EnterTracing()\nandPyThreadState_LeaveTracing()\non Python 3.10 and older:#if PY_VERSION_HEX < 0x030B00A2 static inline void PyThreadState_EnterTracing(PyThreadState *tstate) { tstate->tracing++; #if PY_VERSION_HEX >= 0x030A00A1 tstate->cframe->use_tracing = 0; #else tstate->use_tracing = 0; #endif } static inline void PyThreadState_LeaveTracing(PyThreadState *tstate) { int use_tracing = (tstate->c_tracefunc != NULL || tstate->c_profilefunc != NULL); tstate->tracing--; #if PY_VERSION_HEX >= 0x030A00A1 tstate->cframe->use_tracing = use_tracing; #else tstate->use_tracing = use_tracing; #endif } #endif\nOr use the pythoncapi-compat project to get these 
functions on older Python versions.\nDistributors are encouraged to build Python with the optimized Blake2 library libb2.\nThe\nPyConfig.module_search_paths_set\nfield must now be set to 1 for initialization to usePyConfig.module_search_paths\nto initializesys.path\n. Otherwise, initialization will recalculate the path and replace any values added tomodule_search_paths\n.PyConfig_Read()\nno longer calculates the initial search path, and will not fill any values intoPyConfig.module_search_paths\n. To calculate default paths and then modify them, finish initialization and usePySys_GetObject()\nto retrievesys.path\nas a Python list object and modify it directly.\nDeprecated\u00b6\nDeprecate the following functions to configure the Python initialization:\nPySys_AddWarnOptionUnicode()\nPySys_AddWarnOption()\nPySys_AddXOption()\nPySys_HasWarnOptions()\nPySys_SetArgvEx()\nPySys_SetArgv()\nPySys_SetPath()\nPy_SetPath()\nPy_SetProgramName()\nPy_SetPythonHome()\nPy_SetStandardStreamEncoding()\n_Py_SetProgramFullPath()\nUse the new\nPyConfig\nAPI of the Python Initialization Configuration instead (PEP 587). (Contributed by Victor Stinner in gh-88279.)Deprecate the\nob_shash\nmember of thePyBytesObject\n. UsePyObject_Hash()\ninstead. (Contributed by Inada Naoki in bpo-46864.)\nPending Removal in Python 3.12\u00b6\nThe following C APIs have been deprecated in earlier Python releases, and will be removed in Python 3.12.\nPyUnicode_AS_DATA()\nPyUnicode_AS_UNICODE()\nPyUnicode_AsUnicodeAndSize()\nPyUnicode_AsUnicode()\nPyUnicode_FromUnicode()\nPyUnicode_GET_DATA_SIZE()\nPyUnicode_GET_SIZE()\nPyUnicode_GetSize()\nPyUnicode_IS_COMPACT()\nPyUnicode_IS_READY()\nPyUnicode_WSTR_LENGTH()\n_PyUnicode_AsUnicode()\nPyUnicode_WCHAR_KIND\nPyUnicode_InternImmortal()\nRemoved\u00b6\nPyFrame_BlockSetup()\nandPyFrame_BlockPop()\nhave been removed. 
(Contributed by Mark Shannon in bpo-40222.)Remove the following math macros using the\nerrno\nvariable:Py_ADJUST_ERANGE1()\nPy_ADJUST_ERANGE2()\nPy_OVERFLOWED()\nPy_SET_ERANGE_IF_OVERFLOW()\nPy_SET_ERRNO_ON_MATH_ERROR()\n(Contributed by Victor Stinner in bpo-45412.)\nRemove\nPy_UNICODE_COPY()\nandPy_UNICODE_FILL()\nmacros, deprecated since Python 3.3. UsePyUnicode_CopyCharacters()\normemcpy()\n(wchar_t*\nstring), andPyUnicode_Fill()\nfunctions instead. (Contributed by Victor Stinner in bpo-41123.)Remove the\npystrhex.h\nheader file. It only contains private functions. C extensions should only include the main\nheader file. (Contributed by Victor Stinner in bpo-45434.)Remove the\nPy_FORCE_DOUBLE()\nmacro. It was used by thePy_IS_INFINITY()\nmacro. (Contributed by Victor Stinner in bpo-45440.)The following items are no longer available when\nPy_LIMITED_API\nis defined:the\nPy_MARSHAL_VERSION\nmacro\nThese are not part of the limited API.\n(Contributed by Victor Stinner in bpo-45474.)\nExclude\nPyWeakref_GET_OBJECT()\nfrom the limited C API. It never worked since thePyWeakReference\nstructure is opaque in the limited C API. (Contributed by Victor Stinner in bpo-35134.)Remove the\nPyHeapType_GET_MEMBERS()\nmacro. It was exposed in the public C API by mistake, it must only be used by Python internally. Use thePyTypeObject.tp_members\nmember instead. (Contributed by Victor Stinner in bpo-40170.)Remove the\nHAVE_PY_SET_53BIT_PRECISION\nmacro (moved to the internal C API). 
(Contributed by Victor Stinner in bpo-45412.)\nRemove the\nPy_UNICODE\nencoder APIs, as they have been deprecated since Python 3.3, are little used and are inefficient relative to the recommended alternatives.The removed functions are:\nPyUnicode_Encode()\nPyUnicode_EncodeASCII()\nPyUnicode_EncodeLatin1()\nPyUnicode_EncodeUTF7()\nPyUnicode_EncodeUTF8()\nPyUnicode_EncodeUTF16()\nPyUnicode_EncodeUTF32()\nPyUnicode_EncodeUnicodeEscape()\nPyUnicode_EncodeRawUnicodeEscape()\nPyUnicode_EncodeCharmap()\nPyUnicode_TranslateCharmap()\nPyUnicode_EncodeDecimal()\nPyUnicode_TransformDecimalToASCII()\nSee PEP 624 for details and migration guidance. (Contributed by Inada Naoki in bpo-44029.)\nNotable changes in 3.11.4\u00b6\ntarfile\u00b6\nThe extraction methods in\ntarfile\n, andshutil.unpack_archive()\n, have a new filter argument that allows limiting tar features that may be surprising or dangerous, such as creating files outside the destination directory. See Extraction filters for details. In Python 3.12, use without the filter argument will show aDeprecationWarning\n. In Python 3.14, the default will switch to'data'\n. 
(Contributed by Petr Viktorin in PEP 706.)\nNotable changes in 3.11.5\u00b6\nOpenSSL\u00b6\nWindows builds and macOS installers from python.org now use OpenSSL 3.0.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 17622} +{"url": 
"https://docs.python.org/3/c-api/conversion.html", "title": "String conversion and formatting", "content": "String conversion and formatting\u00b6\nFunctions for number conversion and formatted string output.\n-\nint PyOS_snprintf(char *str, size_t size, const char *format, ...)\u00b6\n- Part of the Stable ABI.\nOutput not more than size bytes to str according to the format string format and the extra arguments. See the Unix man page snprintf(3).\n-\nint PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va)\u00b6\n- Part of the Stable ABI.\nOutput not more than size bytes to str according to the format string format and the variable argument list va. Unix man page vsnprintf(3).\nPyOS_snprintf()\nand PyOS_vsnprintf()\nwrap the Standard C library\nfunctions snprintf()\nand vsnprintf()\n. Their purpose is to\nguarantee consistent behavior in corner cases, which the Standard C functions do\nnot.\nThe wrappers ensure that str[size-1]\nis always '\\0'\nupon return. They\nnever write more than size bytes (including the trailing '\\0'\n) into str.\nBoth functions require that str != NULL\n, size > 0\n, format != NULL\nand size < INT_MAX\n. Note that this means there is no equivalent to the C99\nn = snprintf(NULL, 0, ...)\nwhich would determine the necessary buffer size.\nThe return value (rv) for these functions should be interpreted as follows:\nWhen\n0 <= rv < size\n, the output conversion was successful and rv characters were written to str (excluding the trailing'\\0'\nbyte atstr[rv]\n).When\nrv >= size\n, the output conversion was truncated and a buffer withrv + 1\nbytes would have been needed to succeed.str[size-1]\nis'\\0'\nin this case.When\nrv < 0\n, the output conversion failed andstr[size-1]\nis'\\0'\nin this case too, but the rest of str is undefined. 
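The same C99 return-value contract can be observed from the Python level through ctypes; this is only an illustrative sketch and assumes a Unix-style libc that exports snprintf:

```python
import ctypes

# Assumption: a Unix platform where the running process links a libc
# exposing C99 snprintf; CDLL(None) loads the process's own symbols.
libc = ctypes.CDLL(None)
libc.snprintf.restype = ctypes.c_int

size = 8
buf = ctypes.create_string_buffer(size)
rv = libc.snprintf(buf, ctypes.c_size_t(size), b"%s", b"hello world")

# rv >= size: the output was truncated, rv + 1 bytes would have been
# needed, and the buffer is still NUL-terminated.
print(rv, buf.value)
```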
The exact cause of the error depends on the underlying platform.\nThe following functions provide locale-independent string to number conversions.\n-\nunsigned long PyOS_strtoul(const char *str, char **ptr, int base)\u00b6\n- Part of the Stable ABI.\nConvert the initial part of the string in\nstr\nto an unsigned long value according to the given\nbase\n, which must be between\n2\nand\n36\ninclusive, or be the special value\n0\n. Leading white space and case of characters are ignored. If\nbase\nis zero it looks for a leading\n0b\n,\n0o\nor\n0x\nto tell which base. If these are absent it defaults to\n10\n. Base must be 0 or between 2 and 36 (inclusive). If\nptr\nis non-\nNULL\nit will contain a pointer to the end of the scan. If the converted value falls outside the range of the corresponding return type, a range error occurs (\nerrno\nis set to\nERANGE\n) and\nULONG_MAX\nis returned. If no conversion can be performed,\n0\nis returned. See also the Unix man page strtoul(3).\nAdded in version 3.2.\n-\nlong PyOS_strtol(const char *str, char **ptr, int base)\u00b6\n- Part of the Stable ABI.\nConvert the initial part of the string in\nstr\nto a long value according to the given\nbase\n, which must be between\n2\nand\n36\ninclusive, or be the special value\n0\n. Same as\nPyOS_strtoul()\n, but return a long value instead and\nLONG_MAX\non overflows. See also the Unix man page strtol(3).\nAdded in version 3.2.\n-\ndouble PyOS_string_to_double(const char *s, char **endptr, PyObject *overflow_exception)\u00b6\n- Part of the Stable ABI.\nConvert a string\ns\nto a double, raising a Python exception on failure. The set of accepted strings corresponds to the set of strings accepted by Python\u2019s\nfloat()\nconstructor, except that\ns\nmust not have leading or trailing whitespace. The conversion is independent of the current locale. If\nendptr\nis\nNULL\n, convert the whole string.
Raise\nValueError\nand return\n-1.0\nif the string is not a valid representation of a floating-point number. If endptr is not\nNULL\n, convert as much of the string as possible and set\n*endptr\nto point to the first unconverted character. If no initial segment of the string is the valid representation of a floating-point number, set\n*endptr\nto point to the beginning of the string, raise\nValueError\n, and return\n-1.0\n. If\ns\nrepresents a value that is too large to store in a float (for example,\n\"1e500\"\nis such a string on many platforms) then if\noverflow_exception\nis\nNULL\nreturn\nPy_INFINITY\n(with an appropriate sign) and don\u2019t set any exception. Otherwise,\noverflow_exception\nmust point to a Python exception object; raise that exception and return\n-1.0\n. In both cases, set\n*endptr\nto point to the first character after the converted value. If any other error occurs during the conversion (for example an out-of-memory error), set the appropriate Python exception and return\n-1.0\n.Added in version 3.1.\n-\nchar *PyOS_double_to_string(double val, char format_code, int precision, int flags, int *ptype)\u00b6\n- Part of the Stable ABI.\nConvert a double val to a string using supplied format_code, precision, and flags.\nformat_code must be one of\n'e'\n,\n'E'\n,\n'f'\n,\n'F'\n,\n'g'\n,\n'G'\nor\n'r'\n. For\n'r'\n, the supplied precision must be 0 and is ignored. The\n'r'\nformat code specifies the standard\nrepr()\nformat.\nflags can be zero or more of the following values or-ed together:\n-\nPy_DTSF_SIGN\u00b6\nAlways precede the returned string with a sign character, even if val is non-negative.\n-\nPy_DTSF_ADD_DOT_0\u00b6\nEnsure that the returned string will not look like an integer.\n-\nPy_DTSF_ALT\u00b6\nApply \u201calternate\u201d formatting rules.
See the documentation for the\nPyOS_snprintf()\n'#'\nspecifier for details.\n-\nPy_DTSF_NO_NEG_0\u00b6\nNegative zero is converted to positive zero.\nAdded in version 3.11.\nIf ptype is non-\nNULL\n, then the value it points to will be set to one of the following constants depending on the type of val:\n*ptype\ntype of val\n-\nPy_DTST_FINITE\u00b6\nfinite number\n-\nPy_DTST_INFINITE\u00b6\ninfinite number\n-\nPy_DTST_NAN\u00b6\nnot a number\nThe return value is a pointer to a buffer with the converted string or\nNULL\nif the conversion failed. The caller is responsible for freeing the returned string by calling\nPyMem_Free()\n.Added in version 3.1.\n-\nint PyOS_mystricmp(const char *str1, const char *str2)\u00b6\n-\nint PyOS_mystrnicmp(const char *str1, const char *str2, Py_ssize_t size)\u00b6\n- Part of the Stable ABI.\nCase insensitive comparison of strings. These functions work almost identically to\nstrcmp()\nand\nstrncmp()\n(respectively), except that they ignore the case of ASCII characters. Return\n0\nif the strings are equal, a negative value if str1 sorts lexicographically before str2, or a positive value if it sorts after. In the str1 or str2 arguments, a NUL byte marks the end of the string.
For\nPyOS_mystrnicmp()\n, the size argument gives the maximum size of the string, as if NUL was present at the index given by size.These functions do not use the locale.\n-\nint PyOS_stricmp(const char *str1, const char *str2)\u00b6\n-\nint PyOS_strnicmp(const char *str1, const char *str2, Py_ssize_t size)\u00b6\nCase insensitive comparison of strings.\nOn Windows, these are aliases of\nstricmp()\nandstrnicmp()\n, respectively.On other platforms, they are aliases of\nPyOS_mystricmp()\nandPyOS_mystrnicmp()\n, respectively.\nCharacter classification and conversion\u00b6\nThe following macros provide locale-independent (unlike the C standard library\nctype.h\n) character classification and conversion.\nThe argument must be a signed or unsigned char.\n-\nPy_ISALNUM(c)\u00b6\nReturn true if the character c is an alphanumeric character.\n-\nPy_ISALPHA(c)\u00b6\nReturn true if the character c is an alphabetic character (\na-z\nandA-Z\n).\n-\nPy_ISDIGIT(c)\u00b6\nReturn true if the character c is a decimal digit (\n0-9\n).\n-\nPy_ISLOWER(c)\u00b6\nReturn true if the character c is a lowercase ASCII letter (\na-z\n).\n-\nPy_ISUPPER(c)\u00b6\nReturn true if the character c is an uppercase ASCII letter (\nA-Z\n).\n-\nPy_ISSPACE(c)\u00b6\nReturn true if the character c is a whitespace character (space, tab, carriage return, newline, vertical tab, or form feed).\n-\nPy_ISXDIGIT(c)\u00b6\nReturn true if the character c is a hexadecimal digit (\n0-9\n,a-f\n, andA-F\n).\n-\nPy_TOLOWER(c)\u00b6\nReturn the lowercase equivalent of the character c.\n-\nPy_TOUPPER(c)\u00b6\nReturn the uppercase equivalent of the character c.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1939} +{"url": "https://docs.python.org/3/faq/gui.html", "title": null, "content": "Graphic User Interface FAQ\u00b6\nGeneral GUI Questions\u00b6\nWhat GUI toolkits exist for Python?\u00b6\nStandard builds of Python include an object-oriented interface to the Tcl/Tk widget set, 
called tkinter. This is probably the easiest to install (since it comes included with most binary distributions of Python) and use. For more info about Tk, including pointers to the source, see the Tcl/Tk home page. Tcl/Tk is fully portable to the macOS, Windows, and Unix platforms.\nDepending on what platform(s) you are aiming at, there are also several alternatives. A list of cross-platform and platform-specific GUI frameworks can be found on the python wiki.\nTkinter questions\u00b6\nHow do I freeze Tkinter applications?\u00b6\nFreeze is a tool to create stand-alone applications. When freezing Tkinter applications, the applications will not be truly stand-alone, as the application will still need the Tcl and Tk libraries.\nOne solution is to ship the application with the Tcl and Tk libraries, and point\nto them at run-time using the TCL_LIBRARY\nand TK_LIBRARY\nenvironment variables.\nVarious third-party freeze libraries such as py2exe and cx_Freeze have handling for Tkinter applications built-in.\nCan I have Tk events handled while waiting for I/O?\u00b6\nOn platforms other than Windows, yes, and you don\u2019t even\nneed threads! But you\u2019ll have to restructure your I/O\ncode a bit. Tk has the equivalent of Xt\u2019s XtAddInput()\ncall, which allows you\nto register a callback function which will be called from the Tk mainloop when\nI/O is possible on a file descriptor. See File Handlers.\nI can\u2019t get key bindings to work in Tkinter: why?\u00b6\nAn often-heard complaint is that event handlers bound\nto events with the bind()\nmethod\ndon\u2019t get handled even when the appropriate key is pressed.\nThe most common cause is that the widget to which the binding applies doesn\u2019t have \u201ckeyboard focus\u201d. Check out the Tk documentation for the focus command. 
Usually a widget is given the keyboard focus by clicking in it (but not for labels; see the takefocus option).", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 504} +{"url": "https://docs.python.org/3/howto/unicode.html", "title": "Unicode HOWTO", "content": "Unicode HOWTO\u00b6\n- Release:\n1.12\nThis HOWTO discusses Python\u2019s support for the Unicode specification for representing textual data, and explains various problems that people commonly encounter when trying to work with Unicode.\nIntroduction to Unicode\u00b6\nDefinitions\u00b6\nToday\u2019s programs need to be able to handle a wide variety of characters. Applications are often internationalized to display messages and output in a variety of user-selectable languages; the same program might need to output an error message in English, French, Japanese, Hebrew, or Russian. Web content can be written in any of these languages and can also include a variety of emoji symbols. Python\u2019s string type uses the Unicode Standard for representing characters, which lets Python programs work with all these different possible characters.\nUnicode (https://www.unicode.org/) is a specification that aims to list every character used by human languages and give each character its own unique code. The Unicode specifications are continually revised and updated to add new languages and symbols.\nA character is the smallest possible component of a text. \u2018A\u2019, \u2018B\u2019, \u2018C\u2019, etc., are all different characters. So are \u2018\u00c8\u2019 and \u2018\u00cd\u2019. Characters vary depending on the language or context you\u2019re talking about. For example, there\u2019s a character for \u201cRoman Numeral One\u201d, \u2018\u2160\u2019, that\u2019s separate from the uppercase letter \u2018I\u2019. 
They\u2019ll usually look the same, but these are two different characters that have different meanings.\nThe Unicode standard describes how characters are represented by\ncode points. A code point value is an integer in the range 0 to\n0x10FFFF (about 1.1 million values, the\nactual number assigned\nis less than that). In the standard and in this document, a code point is written\nusing the notation U+265E\nto mean the character with value\n0x265e\n(9,822 in decimal).\nThe Unicode standard contains a lot of tables listing characters and their corresponding code points:\n0061 'a'; LATIN SMALL LETTER A\n0062 'b'; LATIN SMALL LETTER B\n0063 'c'; LATIN SMALL LETTER C\n...\n007B '{'; LEFT CURLY BRACKET\n...\n2167 '\u2167'; ROMAN NUMERAL EIGHT\n2168 '\u2168'; ROMAN NUMERAL NINE\n...\n265E '\u265e'; BLACK CHESS KNIGHT\n265F '\u265f'; BLACK CHESS PAWN\n...\n1F600 '\ud83d\ude00'; GRINNING FACE\n1F609 '\ud83d\ude09'; WINKING FACE\n...\nStrictly, these definitions imply that it\u2019s meaningless to say \u2018this is\ncharacter U+265E\n\u2019. U+265E\nis a code point, which represents some particular\ncharacter; in this case, it represents the character \u2018BLACK CHESS KNIGHT\u2019,\n\u2018\u265e\u2019. In\ninformal contexts, this distinction between code points and characters will\nsometimes be forgotten.\nA character is represented on a screen or on paper by a set of graphical elements that\u2019s called a glyph. The glyph for an uppercase A, for example, is two diagonal strokes and a horizontal stroke, though the exact details will depend on the font being used. Most Python code doesn\u2019t need to worry about glyphs; figuring out the correct glyph to display is generally the job of a GUI toolkit or a terminal\u2019s font renderer.\nEncodings\u00b6\nTo summarize the previous section: a Unicode string is a sequence of\ncode points, which are numbers from 0 through 0x10FFFF\n(1,114,111\ndecimal). 
This sequence of code points needs to be represented in\nmemory as a set of code units, and code units are then mapped\nto 8-bit bytes. The rules for translating a Unicode string into a\nsequence of bytes are called a character encoding, or just\nan encoding.\nThe first encoding you might think of is using 32-bit integers as the code unit, and then using the CPU\u2019s representation of 32-bit integers. In this representation, the string \u201cPython\u201d might look like this:\nP y t h o n\n0x50 00 00 00 79 00 00 00 74 00 00 00 68 00 00 00 6f 00 00 00 6e 00 00 00\n0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23\nThis representation is straightforward but using it presents a number of problems.\nIt\u2019s not portable; different processors order the bytes differently.\nIt\u2019s very wasteful of space. In most texts, the majority of the code points are less than 127, or less than 255, so a lot of space is occupied by\n0x00\nbytes. The above string takes 24 bytes compared to the 6 bytes needed for an ASCII representation. Increased RAM usage doesn\u2019t matter too much (desktop computers have gigabytes of RAM, and strings aren\u2019t usually that large), but expanding our usage of disk and network bandwidth by a factor of 4 is intolerable.It\u2019s not compatible with existing C functions such as\nstrlen()\n, so a new family of wide string functions would need to be used.\nTherefore this encoding isn\u2019t used very much, and people instead choose other encodings that are more efficient and convenient, such as UTF-8.\nUTF-8 is one of the most commonly used encodings, and Python often defaults to using it. UTF stands for \u201cUnicode Transformation Format\u201d, and the \u20188\u2019 means that 8-bit values are used in the encoding. (There are also UTF-16 and UTF-32 encodings, but they are less frequently used than UTF-8.) 
UTF-8 uses the following rules:\nIf the code point is < 128, it\u2019s represented by the corresponding byte value.\nIf the code point is >= 128, it\u2019s turned into a sequence of two, three, or four bytes, where each byte of the sequence is between 128 and 255.\nUTF-8 has several convenient properties:\nIt can handle any Unicode code point.\nA Unicode string is turned into a sequence of bytes that contains embedded zero bytes only where they represent the null character (U+0000). This means that UTF-8 strings can be processed by C functions such as\nstrcpy()\nand sent through protocols that can\u2019t handle zero bytes for anything other than end-of-string markers.A string of ASCII text is also valid UTF-8 text.\nUTF-8 is fairly compact; the majority of commonly used characters can be represented with one or two bytes.\nIf bytes are corrupted or lost, it\u2019s possible to determine the start of the next UTF-8-encoded code point and resynchronize. It\u2019s also unlikely that random 8-bit data will look like valid UTF-8.\nUTF-8 is a byte oriented encoding. The encoding specifies that each character is represented by a specific sequence of one or more bytes. This avoids the byte-ordering issues that can occur with integer and word oriented encodings, like UTF-16 and UTF-32, where the sequence of bytes varies depending on the hardware on which the string was encoded.\nReferences\u00b6\nThe Unicode Consortium site has character charts, a glossary, and PDF versions of the Unicode specification. Be prepared for some difficult reading. A chronology of the origin and development of Unicode is also available on the site.\nOn the Computerphile Youtube channel, Tom Scott briefly discusses the history of Unicode and UTF-8 (9 minutes 36 seconds).\nTo help understand the standard, Jukka Korpela has written an introductory guide to reading the Unicode character tables.\nAnother good introductory article was written by Joel Spolsky. 
If this introduction didn\u2019t make things clear to you, you should try reading this alternate article before continuing.\nWikipedia entries are often helpful; see the entries for \u201ccharacter encoding\u201d and UTF-8, for example.\nPython\u2019s Unicode Support\u00b6\nNow that you\u2019ve learned the rudiments of Unicode, we can look at Python\u2019s Unicode features.\nThe String Type\u00b6\nSince Python 3.0, the language\u2019s str\ntype contains Unicode\ncharacters, meaning any string created using \"unicode rocks!\"\n, 'unicode\nrocks!'\n, or the triple-quoted string syntax is stored as Unicode.\nThe default encoding for Python source code is UTF-8, so you can simply include a Unicode character in a string literal:\ntry:\nwith open('/tmp/input.txt', 'r') as f:\n...\nexcept OSError:\n# 'File not found' error message.\nprint(\"Fichier non trouv\u00e9\")\nSide note: Python 3 also supports using Unicode characters in identifiers:\nr\u00e9pertoire = \"/tmp/records.log\"\nwith open(r\u00e9pertoire, \"w\") as f:\nf.write(\"test\\n\")\nIf you can\u2019t enter a particular character in your editor or want to keep the source code ASCII-only for some reason, you can also use escape sequences in string literals. (Depending on your system, you may see the actual capital-delta glyph instead of a u escape.)\n>>> \"\\N{GREEK CAPITAL LETTER DELTA}\" # Using the character name\n'\\u0394'\n>>> \"\\u0394\" # Using a 16-bit hex value\n'\\u0394'\n>>> \"\\U00000394\" # Using a 32-bit hex value\n'\\u0394'\nIn addition, one can create a string using the decode()\nmethod of\nbytes\n. This method takes an encoding argument, such as UTF-8\n,\nand optionally an errors argument.\nThe errors argument specifies the response when the input string can\u2019t be\nconverted according to the encoding\u2019s rules. 
Legal values for this argument are\n'strict'\n(raise a UnicodeDecodeError\nexception), 'replace'\n(use\nU+FFFD\n, REPLACEMENT CHARACTER\n), 'ignore'\n(just leave the\ncharacter out of the Unicode result), or 'backslashreplace'\n(inserts a\n\\xNN\nescape sequence).\nThe following examples show the differences:\n>>> b'\\x80abc'.decode(\"utf-8\", \"strict\")\nTraceback (most recent call last):\n...\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0:\ninvalid start byte\n>>> b'\\x80abc'.decode(\"utf-8\", \"replace\")\n'\\ufffdabc'\n>>> b'\\x80abc'.decode(\"utf-8\", \"backslashreplace\")\n'\\\\x80abc'\n>>> b'\\x80abc'.decode(\"utf-8\", \"ignore\")\n'abc'\nEncodings are specified as strings containing the encoding\u2019s name. Python\ncomes with roughly 100 different encodings; see the Python Library Reference at\nStandard Encodings for a list. Some encodings have multiple names; for\nexample, 'latin-1'\n, 'iso_8859_1'\nand '8859\n\u2019 are all synonyms for\nthe same encoding.\nOne-character Unicode strings can also be created with the chr()\nbuilt-in function, which takes integers and returns a Unicode string of length 1\nthat contains the corresponding code point. The reverse operation is the\nbuilt-in ord()\nfunction that takes a one-character Unicode string and\nreturns the code point value:\n>>> chr(57344)\n'\\ue000'\n>>> ord('\\ue000')\n57344\nConverting to Bytes\u00b6\nThe opposite method of bytes.decode()\nis str.encode()\n,\nwhich returns a bytes\nrepresentation of the Unicode string, encoded in the\nrequested encoding.\nThe errors parameter is the same as the parameter of the\ndecode()\nmethod but supports a few more possible handlers. 
As well as\n'strict'\n, 'ignore'\n, and 'replace'\n(which in this case\ninserts a question mark instead of the unencodable character), there is\nalso 'xmlcharrefreplace'\n(inserts an XML character reference),\nbackslashreplace\n(inserts a \\uNNNN\nescape sequence) and\nnamereplace\n(inserts a \\N{...}\nescape sequence).\nThe following example shows the different results:\n>>> u = chr(40960) + 'abcd' + chr(1972)\n>>> u.encode('utf-8')\nb'\\xea\\x80\\x80abcd\\xde\\xb4'\n>>> u.encode('ascii')\nTraceback (most recent call last):\n...\nUnicodeEncodeError: 'ascii' codec can't encode character '\\ua000' in\nposition 0: ordinal not in range(128)\n>>> u.encode('ascii', 'ignore')\nb'abcd'\n>>> u.encode('ascii', 'replace')\nb'?abcd?'\n>>> u.encode('ascii', 'xmlcharrefreplace')\nb'\ua000abcd\u07b4'\n>>> u.encode('ascii', 'backslashreplace')\nb'\\\\ua000abcd\\\\u07b4'\n>>> u.encode('ascii', 'namereplace')\nb'\\\\N{YI SYLLABLE IT}abcd\\\\u07b4'\nThe low-level routines for registering and accessing the available\nencodings are found in the codecs\nmodule. Implementing new\nencodings also requires understanding the codecs\nmodule.\nHowever, the encoding and decoding functions returned by this module\nare usually more low-level than is comfortable, and writing new encodings\nis a specialized task, so the module won\u2019t be covered in this HOWTO.\nUnicode Literals in Python Source Code\u00b6\nIn Python source code, specific Unicode code points can be written using the\n\\u\nescape sequence, which is followed by four hex digits giving the code\npoint. The \\U\nescape sequence is similar, but expects eight hex digits,\nnot four:\n>>> s = \"a\\xac\\u1234\\u20ac\\U00008000\"\n... # ^^^^ two-digit hex escape\n... # ^^^^^^ four-digit Unicode escape\n... 
# ^^^^^^^^^^ eight-digit Unicode escape\n>>> [ord(c) for c in s]\n[97, 172, 4660, 8364, 32768]\nUsing escape sequences for code points greater than 127 is fine in small doses,\nbut becomes an annoyance if you\u2019re using many accented characters, as you would\nin a program with messages in French or some other accent-using language. You\ncan also assemble strings using the chr()\nbuilt-in function, but this is\neven more tedious.\nIdeally, you\u2019d want to be able to write literals in your language\u2019s natural encoding. You could then edit Python source code with your favorite editor which would display the accented characters naturally, and have the right characters used at runtime.\nPython supports writing source code in UTF-8 by default, but you can use almost any encoding if you declare the encoding being used. This is done by including a special comment as either the first or second line of the source file:\n#!/usr/bin/env python\n# -*- coding: latin-1 -*-\nu = 'abcd\u00e9'\nprint(ord(u[-1]))\nThe syntax is inspired by Emacs\u2019s notation for specifying variables local to a\nfile. Emacs supports many different variables, but Python only supports\n\u2018coding\u2019. The -*-\nsymbols indicate to Emacs that the comment is special;\nthey have no significance to Python but are a convention. Python looks for\ncoding: name\nor coding=name\nin the comment.\nIf you don\u2019t include such a comment, the default encoding used will be UTF-8 as already mentioned. See also PEP 263 for more information.\nUnicode Properties\u00b6\nThe Unicode specification includes a database of information about code points. For each defined code point, the information includes the character\u2019s name, its category, the numeric value if applicable (for characters representing numeric concepts such as the Roman numerals, fractions such as one-third and four-fifths, etc.). 
There are also display-related properties, such as how to use the code point in bidirectional text.\nThe following program displays some information about several characters, and prints the numeric value of one particular character:\nimport unicodedata\nu = chr(233) + chr(0x0bf2) + chr(3972) + chr(6000) + chr(13231)\nfor i, c in enumerate(u):\nprint(i, '%04x' % ord(c), unicodedata.category(c), end=\" \")\nprint(unicodedata.name(c))\n# Get numeric value of second character\nprint(unicodedata.numeric(u[1]))\nWhen run, this prints:\n0 00e9 Ll LATIN SMALL LETTER E WITH ACUTE\n1 0bf2 No TAMIL NUMBER ONE THOUSAND\n2 0f84 Mn TIBETAN MARK HALANTA\n3 1770 Lo TAGBANWA LETTER SA\n4 33af So SQUARE RAD OVER S SQUARED\n1000.0\nThe category codes are abbreviations describing the nature of the character.\nThese are grouped into categories such as \u201cLetter\u201d, \u201cNumber\u201d, \u201cPunctuation\u201d, or\n\u201cSymbol\u201d, which in turn are broken up into subcategories. To take the codes\nfrom the above output, 'Ll'\nmeans \u2018Letter, lowercase\u2019, 'No'\nmeans\n\u201cNumber, other\u201d, 'Mn'\nis \u201cMark, nonspacing\u201d, and 'So'\nis \u201cSymbol,\nother\u201d. See\nthe General Category Values section of the Unicode Character Database documentation for a\nlist of category codes.\nComparing Strings\u00b6\nUnicode adds some complication to comparing strings, because the same set of characters can be represented by different sequences of code points. For example, a letter like \u2018\u00ea\u2019 can be represented as a single code point U+00EA, or as U+0065 U+0302, which is the code point for \u2018e\u2019 followed by a code point for \u2018COMBINING CIRCUMFLEX ACCENT\u2019. 
These will produce the same output when printed, but one is a string of length 1 and the other is of length 2.\nOne tool for a case-insensitive comparison is the\ncasefold()\nstring method that converts a string to a\ncase-insensitive form following an algorithm described by the Unicode\nStandard. This algorithm has special handling for characters such as\nthe German letter \u2018\u00df\u2019 (code point U+00DF), which becomes the pair of\nlowercase letters \u2018ss\u2019.\n>>> street = 'G\u00fcrzenichstra\u00dfe'\n>>> street.casefold()\n'g\u00fcrzenichstrasse'\nA second tool is the unicodedata\nmodule\u2019s\nnormalize()\nfunction that converts strings to one\nof several normal forms, where letters followed by a combining character are\nreplaced with single characters. normalize()\ncan\nbe used to perform string comparisons that won\u2019t falsely report\ninequality if two strings use combining characters differently:\nimport unicodedata\ndef compare_strs(s1, s2):\ndef NFD(s):\nreturn unicodedata.normalize('NFD', s)\nreturn NFD(s1) == NFD(s2)\nsingle_char = '\u00ea'\nmultiple_chars = '\\N{LATIN SMALL LETTER E}\\N{COMBINING CIRCUMFLEX ACCENT}'\nprint('length of first string=', len(single_char))\nprint('length of second string=', len(multiple_chars))\nprint(compare_strs(single_char, multiple_chars))\nWhen run, this outputs:\n$ python compare-strs.py\nlength of first string= 1\nlength of second string= 2\nTrue\nThe first argument to the normalize()\nfunction is a\nstring giving the desired normalization form, which can be one of\n\u2018NFC\u2019, \u2018NFKC\u2019, \u2018NFD\u2019, and \u2018NFKD\u2019.\nThe Unicode Standard also specifies how to do caseless comparisons:\nimport unicodedata\ndef compare_caseless(s1, s2):\ndef NFD(s):\nreturn unicodedata.normalize('NFD', s)\nreturn NFD(NFD(s1).casefold()) == NFD(NFD(s2).casefold())\n# Example usage\nsingle_char = '\u00ea'\nmultiple_chars = '\\N{LATIN CAPITAL LETTER E}\\N{COMBINING CIRCUMFLEX 
ACCENT}'\nprint(compare_caseless(single_char, multiple_chars))\nThis will print True\n. (Why is NFD()\ninvoked twice? Because\nthere are a few characters that make casefold()\nreturn a\nnon-normalized string, so the result needs to be normalized again. See\nsection 3.13 of the Unicode Standard for a discussion and an example.)\nUnicode Regular Expressions\u00b6\nThe regular expressions supported by the re\nmodule can be provided\neither as bytes or strings. Some of the special character sequences such as\n\\d\nand \\w\nhave different meanings depending on whether\nthe pattern is supplied as bytes or a string. For example,\n\\d\nwill match the characters [0-9]\nin bytes but\nin strings will match any character that\u2019s in the 'Nd'\ncategory.\nThe string in this example has the number 57 written in both Thai and Arabic numerals:\nimport re\np = re.compile(r'\\d+')\ns = \"Over \\u0e55\\u0e57 57 flavours\"\nm = p.search(s)\nprint(repr(m.group()))\nWhen executed, \\d+\nwill match the Thai numerals and print them\nout. If you supply the re.ASCII\nflag to\ncompile()\n, \\d+\nwill match the substring \u201c57\u201d instead.\nSimilarly, \\w\nmatches a wide variety of Unicode characters but\nonly [a-zA-Z0-9_]\nin bytes or if re.ASCII\nis supplied,\nand \\s\nwill match either Unicode whitespace characters or\n[ \\t\\n\\r\\f\\v]\n.\nReferences\u00b6\nSome good alternative discussions of Python\u2019s Unicode support are:\nProcessing Text Files in Python 3, by Nick Coghlan.\nPragmatic Unicode, a PyCon 2012 presentation by Ned Batchelder.\nThe str\ntype is described in the Python library reference at\nText Sequence Type \u2014 str.\nThe documentation for the unicodedata\nmodule.\nThe documentation for the codecs\nmodule.\nMarc-Andr\u00e9 Lemburg gave a presentation titled \u201cPython and Unicode\u201d (PDF slides) at\nEuroPython 2002. 
The slides are an excellent overview of the design of Python\n2\u2019s Unicode features (where the Unicode string type is called unicode\nand\nliterals start with u\n).\nReading and Writing Unicode Data\u00b6\nOnce you\u2019ve written some code that works with Unicode data, the next problem is input/output. How do you get Unicode strings into your program, and how do you convert Unicode into a form suitable for storage or transmission?\nIt\u2019s possible that you may not need to do anything depending on your input sources and output destinations; you should check whether the libraries used in your application support Unicode natively. XML parsers often return Unicode data, for example. Many relational databases also support Unicode-valued columns and can return Unicode values from an SQL query.\nUnicode data is usually converted to a particular encoding before it gets\nwritten to disk or sent over a socket. It\u2019s possible to do all the work\nyourself: open a file, read an 8-bit bytes object from it, and convert the bytes\nwith bytes.decode(encoding)\n. However, the manual approach is not recommended.\nOne problem is the multi-byte nature of encodings; one Unicode character can be represented by several bytes. If you want to read the file in arbitrary-sized chunks (say, 1024 or 4096 bytes), you need to write error-handling code to catch the case where only part of the bytes encoding a single Unicode character are read at the end of a chunk. One solution would be to read the entire file into memory and then perform the decoding, but that prevents you from working with files that are extremely large; if you need to read a 2 GiB file, you need 2 GiB of RAM. (More, really, since for at least a moment you\u2019d need to have both the encoded string and its Unicode version in memory.)\nThe solution would be to use the low-level decoding interface to catch the case\nof partial coding sequences. 
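That low-level interface is exposed by the codecs module as incremental decoders; a minimal sketch of how a multi-byte character split across two reads is handled:

```python
import codecs

# U+00E9 ('e' with acute accent) is two bytes in UTF-8; simulate the bytes
# arriving split across two reads, as happens with fixed-size chunks.
data = "\u00e9".encode("utf-8")      # b'\xc3\xa9'
decoder = codecs.getincrementaldecoder("utf-8")()

first = decoder.decode(data[:1])     # incomplete sequence: yields ''
second = decoder.decode(data[1:])    # sequence completed: yields the character
print(repr(first), repr(second))
assert first + second == "\u00e9"
```

The decoder buffers the partial sequence instead of raising the UnicodeDecodeError that a naive `data[:1].decode("utf-8")` would produce.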
The work of implementing this has already been\ndone for you: the built-in open()\nfunction can return a file-like object\nthat assumes the file\u2019s contents are in a specified encoding and accepts Unicode\nparameters for methods such as read()\nand\nwrite()\n. This works through open()\n's encoding and\nerrors parameters which are interpreted just like those in str.encode()\nand bytes.decode()\n.\nReading Unicode from a file is therefore simple:\nwith open('unicode.txt', encoding='utf-8') as f:\nfor line in f:\nprint(repr(line))\nIt\u2019s also possible to open files in update mode, allowing both reading and writing:\nwith open('test', encoding='utf-8', mode='w+') as f:\nf.write('\\u4500 blah blah blah\\n')\nf.seek(0)\nprint(repr(f.readline()[:1]))\nThe Unicode character U+FEFF\nis used as a byte-order mark (BOM), and is often\nwritten as the first character of a file in order to assist with autodetection\nof the file\u2019s byte ordering. Some encodings, such as UTF-16, expect a BOM to be\npresent at the start of a file; when such an encoding is used, the BOM will be\nautomatically written as the first character and will be silently dropped when\nthe file is read. There are variants of these encodings, such as \u2018utf-16-le\u2019\nand \u2018utf-16-be\u2019 for little-endian and big-endian encodings, that specify one\nparticular byte ordering and don\u2019t skip the BOM.\nIn some areas, it is also convention to use a \u201cBOM\u201d at the start of UTF-8 encoded files; the name is misleading since UTF-8 is not byte-order dependent. The mark simply announces that the file is encoded in UTF-8. For reading such files, use the \u2018utf-8-sig\u2019 codec to automatically skip the mark if present.\nUnicode filenames\u00b6\nMost of the operating systems in common use today support filenames\nthat contain arbitrary Unicode characters. Usually this is\nimplemented by converting the Unicode string into some encoding that\nvaries depending on the system. 
Today Python is converging on using\nUTF-8: Python on macOS has used UTF-8 for several versions, and Python\n3.6 switched to using UTF-8 on Windows as well. On Unix systems,\nthere will only be a filesystem encoding if you\u2019ve set the LANG\nor LC_CTYPE\nenvironment variables; if\nyou haven\u2019t, the default encoding is again UTF-8.\nThe sys.getfilesystemencoding()\nfunction returns the encoding to use on\nyour current system, in case you want to do the encoding manually, but there\u2019s\nnot much reason to bother. When opening a file for reading or writing, you can\nusually just provide the Unicode string as the filename, and it will be\nautomatically converted to the right encoding for you:\nfilename = 'filename\\u4500abc'\nwith open(filename, 'w') as f:\nf.write('blah\\n')\nFunctions in the os\nmodule such as os.stat()\nwill also accept Unicode\nfilenames.\nThe os.listdir()\nfunction returns filenames, which raises an issue: should it return\nthe Unicode version of filenames, or should it return bytes containing\nthe encoded versions? os.listdir()\ncan do both, depending on whether you\nprovided the directory path as bytes or a Unicode string. If you pass a\nUnicode string as the path, filenames will be decoded using the filesystem\u2019s\nencoding and a list of Unicode strings will be returned, while passing a byte\npath will return the filenames as bytes. For example,\nassuming the default filesystem encoding is UTF-8, running the following program:\nfn = 'filename\\u4500abc'\nf = open(fn, 'w')\nf.close()\nimport os\nprint(os.listdir(b'.'))\nprint(os.listdir('.'))\nwill produce the following output:\n$ python listdir-test.py\n[b'filename\\xe4\\x94\\x80abc', ...]\n['filename\\u4500abc', ...]\nThe first list contains UTF-8-encoded filenames, and the second list contains the Unicode versions.\nNote that on most occasions, you should just stick with using Unicode with these APIs. 
The bytes APIs should only be used on systems where undecodable file names can be present; that\u2019s pretty much only Unix systems now.\nTips for Writing Unicode-aware Programs\u00b6\nThis section provides some suggestions on writing software that deals with Unicode.\nThe most important tip is:\nSoftware should only work with Unicode strings internally, decoding the input data as soon as possible and encoding the output only at the end.\nIf you attempt to write processing functions that accept both Unicode and byte\nstrings, you will find your program vulnerable to bugs wherever you combine the\ntwo different kinds of strings. There is no automatic encoding or decoding: if\nyou do e.g. str + bytes\n, a TypeError\nwill be raised.\nWhen using data coming from a web browser or some other untrusted source, a common technique is to check for illegal characters in a string before using the string in a generated command line or storing it in a database. If you\u2019re doing this, be careful to check the decoded string, not the encoded bytes data; some encodings may have interesting properties, such as not being bijective or not being fully ASCII-compatible. 
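UTF-7 is one example of an encoding that is not ASCII-compatible in this sense. A minimal sketch of text that passes a byte-level check but not a check on the decoded string:

```python
# The raw bytes contain no '<' or '>', so a naive byte-level scan for
# markup passes -- but decoding as UTF-7 reveals the hidden characters.
payload = b'+ADw-script+AD4-'
assert b'<' not in payload                     # the bytes look harmless
assert payload.decode('utf-7') == '<script>'   # the decoded text does not
```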
This is especially true if the input data also specifies the encoding, since the attacker can then choose a clever way to hide malicious text in the encoded bytestream.\nConverting Between File Encodings\u00b6\nThe StreamRecoder\nclass can transparently convert between\nencodings, taking a stream that returns data in encoding #1\nand behaving like a stream returning data in encoding #2.\nFor example, if you have an input file f that\u2019s in Latin-1, you\ncan wrap it with a StreamRecoder\nto return bytes encoded in\nUTF-8:\nnew_f = codecs.StreamRecoder(f,\n# en/decoder: used by read() to encode its results and\n# by write() to decode its input.\ncodecs.getencoder('utf-8'), codecs.getdecoder('utf-8'),\n# reader/writer: used to read and write to the stream.\ncodecs.getreader('latin-1'), codecs.getwriter('latin-1') )\nFiles in an Unknown Encoding\u00b6\nWhat can you do if you need to make a change to a file, but don\u2019t know\nthe file\u2019s encoding? If you know the encoding is ASCII-compatible and\nonly want to examine or modify the ASCII parts, you can open the file\nwith the surrogateescape\nerror handler:\nwith open(fname, 'r', encoding=\"ascii\", errors=\"surrogateescape\") as f:\ndata = f.read()\n# make changes to the string 'data'\nwith open(fname + '.new', 'w',\nencoding=\"ascii\", errors=\"surrogateescape\") as f:\nf.write(data)\nThe surrogateescape\nerror handler will decode any non-ASCII bytes\nas code points in a special range running from U+DC80 to\nU+DCFF. 
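The mapping can be sketched directly with bytes.decode() and str.encode(); the byte 0xE9 below is an arbitrary non-ASCII example:

```python
# The undecodable byte 0xE9 becomes the lone surrogate U+DCE9 (0xDC00 + 0xE9)
# on decode; encoding with the same error handler restores the original byte.
raw = b'caf\xe9'
text = raw.decode('ascii', errors='surrogateescape')
assert text == 'caf\udce9'
assert text.encode('ascii', errors='surrogateescape') == raw
```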
These code points will then turn back into the\nsame bytes when the surrogateescape\nerror handler is used to\nencode the data and write it back out.\nReferences\u00b6\nOne section of Mastering Python 3 Input/Output, a PyCon 2010 talk by David Beazley, discusses text processing and binary data handling.\nThe PDF slides for Marc-Andr\u00e9 Lemburg\u2019s presentation \u201cWriting Unicode-aware Applications in Python\u201d discuss questions of character encodings as well as how to internationalize and localize an application. These slides cover Python 2.x only.\nThe Guts of Unicode in Python is a PyCon 2013 talk by Benjamin Peterson that discusses the internal Unicode representation in Python 3.3.\nAcknowledgements\u00b6\nThe initial draft of this document was written by Andrew Kuchling. It has since been revised further by Alexander Belopolsky, Georg Brandl, Andrew Kuchling, and Ezio Melotti.\nThanks to the following people who have noted errors or offered suggestions on this article: \u00c9ric Araujo, Nicholas Bastin, Nick Coghlan, Marius Gedminas, Kent Johnson, Ken Krugler, Marc-Andr\u00e9 Lemburg, Martin von L\u00f6wis, Terry J. 
Reedy, Serhiy Storchaka, Eryk Sun, Chad Whitacre, Graham Wideman.", "code_snippets": ["\n ", " ", " ", " ", " ", "\n ", "\n", " ", "\n ", "\n ", "\n", " ", " ", "\n", " ", " ", " ", " ", "\n ", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", "\n", ": ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", " ", " ", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", ": ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", " ", " ", " ", " ", "\n", "\n", "\n", "\n\n", " ", " ", "\n", "\n", "\n\n", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n\n", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", "\n ", "\n\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n\n", " ", "\n ", "\n ", " ", " ", "\n\n ", " ", " ", " ", "\n\n", " ", " ", "\n", " ", " ", "\n", " ", "\n", " ", "\n", " ", "\n", "\n\n", " ", "\n ", "\n ", " ", " ", "\n\n ", " ", " ", " ", "\n\n", "\n", " ", " ", "\n", " ", " ", "\n\n", " ", "\n", "\n", " ", " ", "\n\n", " ", " ", "\n", " ", " ", "\n", "\n", " ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", "\n", " ", " ", " ", " ", " ", "\n ", "\n ", "\n ", "\n", " ", " ", "\n", " ", " ", " ", " ", "\n ", "\n", " ", " ", "\n", " ", " ", " ", "\n", "\n\n", "\n", "\n", "\n", " ", " ", "\n ", "\n ", "\n ", " ", "\n\n ", "\n ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", "\n ", " ", " ", "\n\n", "\n\n", " ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", "\n"], "language": "Python", "source": "python.org", "token_count": 7029} +{"url": "https://docs.python.org/3/library/asyncio-eventloop.html", "title": "Event Loop", "content": "Event Loop\u00b6\nSource code: Lib/asyncio/events.py, Lib/asyncio/base_events.py\nPreface\nThe event loop is the core of every asyncio application. 
Event loops run asynchronous tasks and callbacks, perform network IO operations, and run subprocesses.\nApplication developers should typically use the high-level asyncio functions,\nsuch as asyncio.run()\n, and should rarely need to reference the loop\nobject or call its methods. This section is intended mostly for authors\nof lower-level code, libraries, and frameworks, who need finer control over\nthe event loop behavior.\nObtaining the Event Loop\nThe following low-level functions can be used to get, set, or create an event loop:\n- asyncio.get_running_loop()\u00b6\nReturn the running event loop in the current OS thread.\nRaise a\nRuntimeError\nif there is no running event loop.This function can only be called from a coroutine or a callback.\nAdded in version 3.7.\n- asyncio.get_event_loop()\u00b6\nGet the current event loop.\nWhen called from a coroutine or a callback (e.g. scheduled with call_soon or similar API), this function will always return the running event loop.\nIf there is no running event loop set, the function will return the result of the\nget_event_loop_policy().get_event_loop()\ncall.Because this function has rather complex behavior (especially when custom event loop policies are in use), using the\nget_running_loop()\nfunction is preferred toget_event_loop()\nin coroutines and callbacks.As noted above, consider using the higher-level\nasyncio.run()\nfunction, instead of using these lower level functions to manually create and close an event loop.Changed in version 3.14: Raises a\nRuntimeError\nif there is no current event loop.Note\nThe\nasyncio\npolicy system is deprecated and will be removed in Python 3.16; from there on, this function will return the current running event loop if present else it will return the loop set byset_event_loop()\n.\n- asyncio.set_event_loop(loop)\u00b6\nSet loop as the current event loop for the current OS thread.\n- asyncio.new_event_loop()\u00b6\nCreate and return a new event loop object.\nNote that the 
behaviour of get_event_loop()\n, set_event_loop()\n,\nand new_event_loop()\nfunctions can be altered by\nsetting a custom event loop policy.\nContents\nThis documentation page contains the following sections:\nThe Event Loop Methods section is the reference documentation of the event loop APIs;\nThe Callback Handles section documents the\nHandle\nandTimerHandle\ninstances which are returned from scheduling methods such asloop.call_soon()\nandloop.call_later()\n;The Server Objects section documents types returned from event loop methods like\nloop.create_server()\n;The Event Loop Implementations section documents the\nSelectorEventLoop\nandProactorEventLoop\nclasses;The Examples section showcases how to work with some event loop APIs.\nEvent Loop Methods\u00b6\nEvent loops have low-level APIs for the following:\nRunning and stopping the loop\u00b6\n- loop.run_until_complete(future)\u00b6\nRun until the future (an instance of\nFuture\n) has completed.If the argument is a coroutine object it is implicitly scheduled to run as a\nasyncio.Task\n.Return the Future\u2019s result or raise its exception.\n- loop.run_forever()\u00b6\nRun the event loop until\nstop()\nis called.If\nstop()\nis called beforerun_forever()\nis called, the loop will poll the I/O selector once with a timeout of zero, run all callbacks scheduled in response to I/O events (and those that were already scheduled), and then exit.If\nstop()\nis called whilerun_forever()\nis running, the loop will run the current batch of callbacks and then exit. Note that new callbacks scheduled by callbacks will not run in this case; instead, they will run the next timerun_forever()\norrun_until_complete()\nis called.\n- loop.stop()\u00b6\nStop the event loop.\n- loop.is_running()\u00b6\nReturn\nTrue\nif the event loop is currently running.\n- loop.is_closed()\u00b6\nReturn\nTrue\nif the event loop was closed.\n- loop.close()\u00b6\nClose the event loop.\nThe loop must not be running when this function is called. 
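A minimal sketch of this low-level lifecycle (in practice asyncio.run() creates, runs, and closes the loop for you):

```python
import asyncio

async def work():
    return 42

# Create a fresh loop, run a coroutine to completion, then close the loop.
loop = asyncio.new_event_loop()
try:
    result = loop.run_until_complete(work())
finally:
    loop.close()

assert result == 42
assert loop.is_closed()
```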
Any pending callbacks will be discarded.\nThis method clears all queues and shuts down the executor, but does not wait for the executor to finish.\nThis method is idempotent and irreversible. No other methods should be called after the event loop is closed.\n- async loop.shutdown_asyncgens()\u00b6\nSchedule all currently open asynchronous generator objects to close with an\naclose()\ncall. After calling this method, the event loop will issue a warning if a new asynchronous generator is iterated. This should be used to reliably finalize all scheduled asynchronous generators.Note that there is no need to call this function when\nasyncio.run()\nis used.Example:\ntry: loop.run_forever() finally: loop.run_until_complete(loop.shutdown_asyncgens()) loop.close()\nAdded in version 3.6.\n- async loop.shutdown_default_executor(timeout=None)\u00b6\nSchedule the closure of the default executor and wait for it to join all of the threads in the\nThreadPoolExecutor\n. Once this method has been called, using the default executor withloop.run_in_executor()\nwill raise aRuntimeError\n.The timeout parameter specifies the amount of time (in\nfloat\nseconds) the executor will be given to finish joining. With the default,None\n, the executor is allowed an unlimited amount of time.If the timeout is reached, a\nRuntimeWarning\nis emitted and the default executor is terminated without waiting for its threads to finish joining.Note\nDo not call this method when using\nasyncio.run()\n, as the latter handles default executor shutdown automatically.Added in version 3.9.\nChanged in version 3.12: Added the timeout parameter.\nScheduling callbacks\u00b6\n- loop.call_soon(callback, *args, context=None)\u00b6\nSchedule the callback callback to be called with args arguments at the next iteration of the event loop.\nReturn an instance of\nasyncio.Handle\n, which can be used later to cancel the callback.Callbacks are called in the order in which they are registered. 
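This ordering guarantee can be sketched as follows; the await asyncio.sleep(0) simply yields to the loop so the scheduled callbacks get a chance to run:

```python
import asyncio

order = []

async def main():
    loop = asyncio.get_running_loop()
    # Callbacks run in registration order at the next loop iteration.
    loop.call_soon(order.append, 1)
    loop.call_soon(order.append, 2)
    loop.call_soon(order.append, 3)
    await asyncio.sleep(0)   # yield once so the ready callbacks execute

asyncio.run(main())
assert order == [1, 2, 3]
```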
Each callback will be called exactly once.\nThe optional keyword-only context argument specifies a custom\ncontextvars.Context\nfor the callback to run in. Callbacks use the current context when no context is provided.Unlike\ncall_soon_threadsafe()\n, this method is not thread-safe.\n- loop.call_soon_threadsafe(callback, *args, context=None)\u00b6\nA thread-safe variant of\ncall_soon()\n. When scheduling callbacks from another thread, this function must be used, sincecall_soon()\nis not thread-safe.This function is safe to be called from a reentrant context or signal handler, however, it is not safe or fruitful to use the returned handle in such contexts.\nRaises\nRuntimeError\nif called on a loop that\u2019s been closed. This can happen on a secondary thread when the main application is shutting down.See the concurrency and multithreading section of the documentation.\nChanged in version 3.7: The context keyword-only parameter was added. See PEP 567 for more details.\nNote\nMost asyncio\nscheduling functions don\u2019t allow passing\nkeyword arguments. To do that, use functools.partial()\n:\n# will schedule \"print(\"Hello\", flush=True)\"\nloop.call_soon(\nfunctools.partial(print, \"Hello\", flush=True))\nUsing partial objects is usually more convenient than using lambdas, as asyncio can render partial objects better in debug and error messages.\nScheduling delayed callbacks\u00b6\nEvent loop provides mechanisms to schedule callback functions to be called at some point in the future. Event loop uses monotonic clocks to track time.\n- loop.call_later(delay, callback, *args, context=None)\u00b6\nSchedule callback to be called after the given delay number of seconds (can be either an int or a float).\nAn instance of\nasyncio.TimerHandle\nis returned which can be used to cancel the callback.callback will be called exactly once. 
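A minimal sketch of call_later(); the 0.05-second delay is arbitrary, and the surrounding sleep just waits past it:

```python
import asyncio

fired = []

async def main():
    loop = asyncio.get_running_loop()
    # Positional args after the callback are passed through to it.
    loop.call_later(0.05, fired.append, 'done')
    await asyncio.sleep(0.2)   # wait past the delay so the timer fires

asyncio.run(main())
assert fired == ['done']       # the callback ran exactly once
```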
If two callbacks are scheduled for exactly the same time, the order in which they are called is undefined.\nThe optional positional args will be passed to the callback when it is called. Use\nfunctools.partial()\nto pass keyword arguments to callback.An optional keyword-only context argument allows specifying a custom\ncontextvars.Context\nfor the callback to run in. The current context is used when no context is provided.Note\nFor performance, callbacks scheduled with\nloop.call_later()\nmay run up to one clock-resolution early (seetime.get_clock_info('monotonic').resolution\n).Changed in version 3.7: The context keyword-only parameter was added. See PEP 567 for more details.\nChanged in version 3.8: In Python 3.7 and earlier with the default event loop implementation, the delay could not exceed one day. This has been fixed in Python 3.8.\n- loop.call_at(when, callback, *args, context=None)\u00b6\nSchedule callback to be called at the given absolute timestamp when (an int or a float), using the same time reference as\nloop.time()\n.This method\u2019s behavior is the same as\ncall_later()\n.An instance of\nasyncio.TimerHandle\nis returned which can be used to cancel the callback.Note\nFor performance, callbacks scheduled with\nloop.call_at()\nmay run up to one clock-resolution early (seetime.get_clock_info('monotonic').resolution\n).Changed in version 3.7: The context keyword-only parameter was added. See PEP 567 for more details.\nChanged in version 3.8: In Python 3.7 and earlier with the default event loop implementation, the difference between when and the current time could not exceed one day. This has been fixed in Python 3.8.\n- loop.time()\u00b6\nReturn the current time, as a\nfloat\nvalue, according to the event loop\u2019s internal monotonic clock.\nNote\nChanged in version 3.8: In Python 3.7 and earlier timeouts (relative delay or absolute when) should not exceed one day. 
This has been fixed in Python 3.8.\nSee also\nThe asyncio.sleep()\nfunction.\nCreating Futures and Tasks\u00b6\n- loop.create_future()\u00b6\nCreate an\nasyncio.Future\nobject attached to the event loop.This is the preferred way to create Futures in asyncio. This lets third-party event loops provide alternative implementations of the Future object (with better performance or instrumentation).\nAdded in version 3.5.2.\n- loop.create_task(coro, *, name=None, context=None, eager_start=None, **kwargs)\u00b6\nSchedule the execution of coroutine coro. Return a\nTask\nobject.Third-party event loops can use their own subclass of\nTask\nfor interoperability. In this case, the result type is a subclass ofTask\n.The full function signature is largely the same as that of the\nTask\nconstructor (or factory) - all of the keyword arguments to this function are passed through to that interface.If the name argument is provided and not\nNone\n, it is set as the name of the task usingTask.set_name()\n.An optional keyword-only context argument allows specifying a custom\ncontextvars.Context\nfor the coro to run in. The current context copy is created when no context is provided.An optional keyword-only eager_start argument allows specifying if the task should execute eagerly during the call to create_task, or be scheduled later. If eager_start is not passed the mode set by\nloop.set_task_factory()\nwill be used.Changed in version 3.8: Added the name parameter.\nChanged in version 3.11: Added the context parameter.\nChanged in version 3.13.3: Added\nkwargs\nwhich passes on arbitrary extra parameters, includingname\nandcontext\n.Changed in version 3.13.4: Rolled back the change that passes on name and context (if it is None), while still passing on other arbitrary keyword arguments (to avoid breaking backwards compatibility with 3.13.3).\nChanged in version 3.14: All kwargs are now passed on. 
The eager_start parameter works with eager task factories.\n- loop.set_task_factory(factory)\u00b6\nSet a task factory that will be used by\nloop.create_task()\n.If factory is\nNone\nthe default task factory will be set. Otherwise, factory must be a callable with the signature matching(loop, coro, **kwargs)\n, where loop is a reference to the active event loop, and coro is a coroutine object. The callable must pass on all kwargs, and return aasyncio.Task\n-compatible object.Changed in version 3.13.3: Required that all kwargs are passed on to\nasyncio.Task\n.Changed in version 3.13.4: name is no longer passed to task factories. context is no longer passed to task factories if it is\nNone\n.Changed in version 3.14: name and context are now unconditionally passed on to task factories again.\n- loop.get_task_factory()\u00b6\nReturn a task factory or\nNone\nif the default one is in use.\nOpening network connections\u00b6\n- async loop.create_connection(protocol_factory, host=None, port=None, *, ssl=None, family=0, proto=0, flags=0, sock=None, local_addr=None, server_hostname=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None, happy_eyeballs_delay=None, interleave=None, all_errors=False)\u00b6\nOpen a streaming transport connection to a given address specified by host and port.\nThe socket family can be either\nAF_INET\norAF_INET6\ndepending on host (or the family argument, if provided).The socket type will be\nSOCK_STREAM\n.protocol_factory must be a callable returning an asyncio protocol implementation.\nThis method will try to establish the connection in the background. 
When successful, it returns a\n(transport, protocol)\npair.The chronological synopsis of the underlying operation is as follows:\nThe connection is established and a transport is created for it.\nprotocol_factory is called without arguments and is expected to return a protocol instance.\nThe protocol instance is coupled with the transport by calling its\nconnection_made()\nmethod.A\n(transport, protocol)\ntuple is returned on success.\nThe created transport is an implementation-dependent bidirectional stream.\nOther arguments:\nssl: if given and not false, a SSL/TLS transport is created (by default a plain TCP transport is created). If ssl is a\nssl.SSLContext\nobject, this context is used to create the transport; if ssl isTrue\n, a default context returned fromssl.create_default_context()\nis used.See also\nserver_hostname sets or overrides the hostname that the target server\u2019s certificate will be matched against. Should only be passed if ssl is not\nNone\n. By default the value of the host argument is used. If host is empty, there is no default and you must pass a value for server_hostname. If server_hostname is an empty string, hostname matching is disabled (which is a serious security risk, allowing for potential man-in-the-middle attacks).family, proto, flags are the optional address family, protocol and flags to be passed through to getaddrinfo() for host resolution. If given, these should all be integers from the corresponding\nsocket\nmodule constants.happy_eyeballs_delay, if given, enables Happy Eyeballs for this connection. It should be a floating-point number representing the amount of time in seconds to wait for a connection attempt to complete, before starting the next attempt in parallel. This is the \u201cConnection Attempt Delay\u201d as defined in RFC 8305. A sensible default value recommended by the RFC is\n0.25\n(250 milliseconds).interleave controls address reordering when a host name resolves to multiple IP addresses. 
If\n0\nor unspecified, no reordering is done, and addresses are tried in the order returned bygetaddrinfo()\n. If a positive integer is specified, the addresses are interleaved by address family, and the given integer is interpreted as \u201cFirst Address Family Count\u201d as defined in RFC 8305. The default is0\nif happy_eyeballs_delay is not specified, and1\nif it is.sock, if given, should be an existing, already connected\nsocket.socket\nobject to be used by the transport. If sock is given, none of host, port, family, proto, flags, happy_eyeballs_delay, interleave and local_addr should be specified.Note\nThe sock argument transfers ownership of the socket to the transport created. To close the socket, call the transport\u2019s\nclose()\nmethod.local_addr, if given, is a\n(local_host, local_port)\ntuple used to bind the socket locally. The local_host and local_port are looked up usinggetaddrinfo()\n, similarly to host and port.ssl_handshake_timeout is (for a TLS connection) the time in seconds to wait for the TLS handshake to complete before aborting the connection.\n60.0\nseconds ifNone\n(default).ssl_shutdown_timeout is the time in seconds to wait for the SSL shutdown to complete before aborting the connection.\n30.0\nseconds ifNone\n(default).all_errors determines what exceptions are raised when a connection cannot be created. By default, only a single\nException\nis raised: the first exception if there is only one or all errors have same message, or a singleOSError\nwith the error messages combined. 
Whenall_errors\nisTrue\n, anExceptionGroup\nwill be raised containing all exceptions (even if there is only one).\nChanged in version 3.5: Added support for SSL/TLS in\nProactorEventLoop\n.Changed in version 3.6: The socket option socket.TCP_NODELAY is set by default for all TCP connections.\nChanged in version 3.7: Added the ssl_handshake_timeout parameter.\nChanged in version 3.8: Added the happy_eyeballs_delay and interleave parameters.\nHappy Eyeballs Algorithm: Success with Dual-Stack Hosts. When a server\u2019s IPv4 path and protocol are working, but the server\u2019s IPv6 path and protocol are not working, a dual-stack client application experiences significant connection delay compared to an IPv4-only client. This is undesirable because it causes the dual-stack client to have a worse user experience. This document specifies requirements for algorithms that reduce this user-visible delay and provides an algorithm.\nFor more information: https://datatracker.ietf.org/doc/html/rfc6555\nChanged in version 3.11: Added the ssl_shutdown_timeout parameter.\nChanged in version 3.12: all_errors was added.\nSee also\nThe\nopen_connection()\nfunction is a high-level alternative API. It returns a pair of (StreamReader\n,StreamWriter\n) that can be used directly in async/await code.\n- async loop.create_datagram_endpoint(protocol_factory, local_addr=None, remote_addr=None, *, family=0, proto=0, flags=0, reuse_port=None, allow_broadcast=None, sock=None)\u00b6\nCreate a datagram connection.\nThe socket family can be either\nAF_INET\n,AF_INET6\n, orAF_UNIX\n, depending on host (or the family argument, if provided).The socket type will be\nSOCK_DGRAM\n.protocol_factory must be a callable returning a protocol implementation.\nA tuple of\n(transport, protocol)\nis returned on success.Other arguments:\nlocal_addr, if given, is a\n(local_host, local_port)\ntuple used to bind the socket locally. 
The local_host and local_port are looked up usinggetaddrinfo()\n.Note\nOn Windows, when using the proactor event loop with\nlocal_addr=None\n, anOSError\nwitherrno.WSAEINVAL\nwill be raised when running it.remote_addr, if given, is a\n(remote_host, remote_port)\ntuple used to connect the socket to a remote address. The remote_host and remote_port are looked up usinggetaddrinfo()\n.family, proto, flags are the optional address family, protocol and flags to be passed through to\ngetaddrinfo()\nfor host resolution. If given, these should all be integers from the correspondingsocket\nmodule constants.reuse_port tells the kernel to allow this endpoint to be bound to the same port as other existing endpoints are bound to, so long as they all set this flag when being created. This option is not supported on Windows and some Unixes. If the socket.SO_REUSEPORT constant is not defined then this capability is unsupported.\nallow_broadcast tells the kernel to allow this endpoint to send messages to the broadcast address.\nsock can optionally be specified in order to use a preexisting, already connected,\nsocket.socket\nobject to be used by the transport. If specified, local_addr and remote_addr should be omitted (must beNone\n).Note\nThe sock argument transfers ownership of the socket to the transport created. To close the socket, call the transport\u2019s\nclose()\nmethod.\nSee UDP echo client protocol and UDP echo server protocol examples.\nChanged in version 3.4.4: The family, proto, flags, reuse_address, reuse_port, allow_broadcast, and sock parameters were added.\nChanged in version 3.8: Added support for Windows.\nChanged in version 3.8.1: The reuse_address parameter is no longer supported, as using socket.SO_REUSEADDR poses a significant security concern for UDP. 
Explicitly passing\nreuse_address=True\nwill raise an exception.When multiple processes with differing UIDs assign sockets to an identical UDP socket address with\nSO_REUSEADDR\n, incoming packets can become randomly distributed among the sockets.For supported platforms, reuse_port can be used as a replacement for similar functionality. With reuse_port, socket.SO_REUSEPORT is used instead, which specifically prevents processes with differing UIDs from assigning sockets to the same socket address.\nChanged in version 3.11: The reuse_address parameter, disabled since Python 3.8.1, 3.7.6 and 3.6.10, has been entirely removed.\n- async loop.create_unix_connection(protocol_factory, path=None, *, ssl=None, sock=None, server_hostname=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None)\u00b6\nCreate a Unix connection.\nThe socket family will be\nAF_UNIX\n; socket type will beSOCK_STREAM\n.A tuple of\n(transport, protocol)\nis returned on success.path is the name of a Unix domain socket and is required, unless a sock parameter is specified. Abstract Unix sockets,\nstr\n,bytes\n, andPath\npaths are supported.See the documentation of the\nloop.create_connection()\nmethod for information about arguments to this method.Availability: Unix.\nChanged in version 3.7: Added the ssl_handshake_timeout parameter. 
The path parameter can now be a path-like object.\nChanged in version 3.11: Added the ssl_shutdown_timeout parameter.\nCreating network servers\u00b6\n- async loop.create_server(protocol_factory, host=None, port=None, *, family=socket.AF_UNSPEC, flags=socket.AI_PASSIVE, sock=None, backlog=100, ssl=None, reuse_address=None, reuse_port=None, keep_alive=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None, start_serving=True)\u00b6\nCreate a TCP server (socket type\nSOCK_STREAM\n) listening on port of the host address.Returns a\nServer\nobject.Arguments:\nprotocol_factory must be a callable returning a protocol implementation.\nThe host parameter can be set to several types which determine where the server would be listening:\nIf host is a string, the TCP server is bound to a single network interface specified by host.\nIf host is a sequence of strings, the TCP server is bound to all network interfaces specified by the sequence.\nIf host is an empty string or\nNone\n, all interfaces are assumed and a list of multiple sockets will be returned (most likely one for IPv4 and another one for IPv6).\nThe port parameter can be set to specify which port the server should listen on. If\n0\norNone\n(the default), a random unused port will be selected (note that if host resolves to multiple network interfaces, a different random port will be selected for each interface).family can be set to either\nsocket.AF_INET\norAF_INET6\nto force the socket to use IPv4 or IPv6. If not set, the family will be determined from host name (defaults toAF_UNSPEC\n).flags is a bitmask for\ngetaddrinfo()\n.sock can optionally be specified in order to use a preexisting socket object. If specified, host and port must not be specified.\nNote\nThe sock argument transfers ownership of the socket to the server created. 
To close the socket, call the server\u2019s\nclose()\nmethod.\nbacklog is the maximum number of queued connections passed to\nlisten()\n(defaults to 100).\nssl can be set to an\nSSLContext\ninstance to enable TLS over the accepted connections.\nreuse_address tells the kernel to reuse a local socket in\nTIME_WAIT\nstate, without waiting for its natural timeout to expire. If not specified, it will automatically be set to\nTrue\non Unix.\nreuse_port tells the kernel to allow this endpoint to be bound to the same port as other existing endpoints are bound to, so long as they all set this flag when being created. This option is not supported on Windows.\nkeep_alive set to\nTrue\nkeeps connections active by enabling the periodic transmission of messages.\nChanged in version 3.13: Added the keep_alive parameter.\nssl_handshake_timeout is (for a TLS server) the time in seconds to wait for the TLS handshake to complete before aborting the connection.\n60.0\nseconds if\nNone\n(default).\nssl_shutdown_timeout is the time in seconds to wait for the SSL shutdown to complete before aborting the connection.\n30.0\nseconds if\nNone\n(default).\nstart_serving set to\nTrue\n(the default) causes the created server to start accepting connections immediately. When set to\nFalse\n, the user should await on\nServer.start_serving()\nor\nServer.serve_forever()\nto make the server start accepting connections.\nChanged in version 3.5: Added support for SSL/TLS in\nProactorEventLoop\n.\nChanged in version 3.5.1: The host parameter can be a sequence of strings.\nChanged in version 3.6: Added ssl_handshake_timeout and start_serving parameters.
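As an illustrative sketch of the parameters above (the protocol class name EchoProtocol is hypothetical, not part of the API), a server can be bound to port 0 so the OS assigns an unused port:

```python
import asyncio

class EchoProtocol(asyncio.Protocol):
    """Hypothetical minimal protocol: echo received data back."""
    def connection_made(self, transport):
        self.transport = transport

    def data_received(self, data):
        self.transport.write(data)

async def main():
    loop = asyncio.get_running_loop()
    # port=0 lets the OS pick an unused port; backlog and
    # start_serving keep their defaults (100 and True).
    server = await loop.create_server(
        EchoProtocol, host='127.0.0.1', port=0)
    port = server.sockets[0].getsockname()[1]
    print('listening on port', port)
    server.close()
    await server.wait_closed()

asyncio.run(main())
```

The assigned port can be read back from `server.sockets[0].getsockname()`, which is convenient in tests.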
The socket option socket.TCP_NODELAY is set by default for all TCP connections.\nChanged in version 3.11: Added the ssl_shutdown_timeout parameter.\nSee also\nThe\nstart_server()\nfunction is a higher-level alternative API that returns a pair ofStreamReader\nandStreamWriter\nthat can be used in an async/await code.\n- async loop.create_unix_server(protocol_factory, path=None, *, sock=None, backlog=100, ssl=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None, start_serving=True, cleanup_socket=True)\u00b6\nSimilar to\nloop.create_server()\nbut works with theAF_UNIX\nsocket family.path is the name of a Unix domain socket, and is required, unless a sock argument is provided. Abstract Unix sockets,\nstr\n,bytes\n, andPath\npaths are supported.If cleanup_socket is true then the Unix socket will automatically be removed from the filesystem when the server is closed, unless the socket has been replaced after the server has been created.\nSee the documentation of the\nloop.create_server()\nmethod for information about arguments to this method.Availability: Unix.\nChanged in version 3.7: Added the ssl_handshake_timeout and start_serving parameters. The path parameter can now be a\nPath\nobject.Changed in version 3.11: Added the ssl_shutdown_timeout parameter.\nChanged in version 3.13: Added the cleanup_socket parameter.\n- async loop.connect_accepted_socket(protocol_factory, sock, *, ssl=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None)\u00b6\nWrap an already accepted connection into a transport/protocol pair.\nThis method can be used by servers that accept connections outside of asyncio but that use asyncio to handle them.\nParameters:\nprotocol_factory must be a callable returning a protocol implementation.\nsock is a preexisting socket object returned from\nsocket.accept\n.Note\nThe sock argument transfers ownership of the socket to the transport created. 
To close the socket, call the transport\u2019s\nclose()\nmethod.\nssl can be set to an\nSSLContext\nto enable SSL over the accepted connections.\nssl_handshake_timeout is (for an SSL connection) the time in seconds to wait for the SSL handshake to complete before aborting the connection.\n60.0\nseconds if\nNone\n(default).\nssl_shutdown_timeout is the time in seconds to wait for the SSL shutdown to complete before aborting the connection.\n30.0\nseconds if\nNone\n(default).\nReturns a\n(transport, protocol)\npair.\nAdded in version 3.5.3.\nChanged in version 3.7: Added the ssl_handshake_timeout parameter.\nChanged in version 3.11: Added the ssl_shutdown_timeout parameter.\nTransferring files\u00b6\n- async loop.sendfile(transport, file, offset=0, count=None, *, fallback=True)\u00b6\nSend a file over a transport. Return the total number of bytes sent.\nThe method uses high-performance\nos.sendfile()\nif available.\nfile must be a regular file object opened in binary mode.\noffset tells from where to start reading the file. If specified, count is the total number of bytes to transmit, as opposed to sending the file until EOF is reached. The file position is always updated, even when this method raises an error, and\nfile.tell()\ncan be used to obtain the actual number of bytes sent.\nfallback set to\nTrue\nmakes asyncio manually read and send the file when the platform does not support the sendfile system call (e.g. Windows or an SSL socket on Unix).\nRaise\nSendfileNotAvailableError\nif the system does not support the sendfile syscall and fallback is\nFalse\n.\nAdded in version 3.7.\nTLS Upgrade\u00b6\n- async loop.start_tls(transport, protocol, sslcontext, *, server_side=False, server_hostname=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None)\u00b6\nUpgrade an existing transport-based connection to TLS.\nCreate a TLS coder/decoder instance and insert it between the transport and the protocol.
The coder/decoder implements both transport-facing protocol and protocol-facing transport.\nReturn the created two-interface instance. After await, the protocol must stop using the original transport and communicate with the returned object only because the coder caches protocol-side data and sporadically exchanges extra TLS session packets with transport.\nIn some situations (e.g. when the passed transport is already closing) this may return\nNone\n.Parameters:\ntransport and protocol instances that methods like\ncreate_server()\nandcreate_connection()\nreturn.sslcontext: a configured instance of\nSSLContext\n.server_side pass\nTrue\nwhen a server-side connection is being upgraded (like the one created bycreate_server()\n).server_hostname: sets or overrides the host name that the target server\u2019s certificate will be matched against.\nssl_handshake_timeout is (for a TLS connection) the time in seconds to wait for the TLS handshake to complete before aborting the connection.\n60.0\nseconds ifNone\n(default).ssl_shutdown_timeout is the time in seconds to wait for the SSL shutdown to complete before aborting the connection.\n30.0\nseconds ifNone\n(default).\nAdded in version 3.7.\nChanged in version 3.11: Added the ssl_shutdown_timeout parameter.\nWatching file descriptors\u00b6\n- loop.add_reader(fd, callback, *args)\u00b6\nStart monitoring the fd file descriptor for read availability and invoke callback with the specified arguments once fd is available for reading.\nAny preexisting callback registered for fd is cancelled and replaced by callback.\n- loop.remove_reader(fd)\u00b6\nStop monitoring the fd file descriptor for read availability. 
Returns\nTrue\nif fd was previously being monitored for reads.\n- loop.add_writer(fd, callback, *args)\u00b6\nStart monitoring the fd file descriptor for write availability and invoke callback with the specified arguments args once fd is available for writing.\nAny preexisting callback registered for fd is cancelled and replaced by callback.\nUse\nfunctools.partial()\nto pass keyword arguments to callback.\n- loop.remove_writer(fd)\u00b6\nStop monitoring the fd file descriptor for write availability. Returns\nTrue\nif fd was previously being monitored for writes.\nSee also Platform Support section for some limitations of these methods.\nWorking with socket objects directly\u00b6\nIn general, protocol implementations that use transport-based APIs\nsuch as loop.create_connection()\nand loop.create_server()\nare faster than implementations that work with sockets directly.\nHowever, there are some use cases when performance is not critical, and\nworking with socket\nobjects directly is more\nconvenient.\n- async loop.sock_recv(sock, nbytes)\u00b6\nReceive up to nbytes from sock. Asynchronous version of\nsocket.recv()\n.Return the received data as a bytes object.\nsock must be a non-blocking socket.\nChanged in version 3.7: Even though this method was always documented as a coroutine method, releases before Python 3.7 returned a\nFuture\n. Since Python 3.7 this is anasync def\nmethod.\n- async loop.sock_recv_into(sock, buf)\u00b6\nReceive data from sock into the buf buffer. Modeled after the blocking\nsocket.recv_into()\nmethod.Return the number of bytes written to the buffer.\nsock must be a non-blocking socket.\nAdded in version 3.7.\n- async loop.sock_recvfrom(sock, bufsize)\u00b6\nReceive a datagram of up to bufsize from sock. 
Asynchronous version of\nsocket.recvfrom()\n.Return a tuple of (received data, remote address).\nsock must be a non-blocking socket.\nAdded in version 3.11.\n- async loop.sock_recvfrom_into(sock, buf, nbytes=0)\u00b6\nReceive a datagram of up to nbytes from sock into buf. Asynchronous version of\nsocket.recvfrom_into()\n.Return a tuple of (number of bytes received, remote address).\nsock must be a non-blocking socket.\nAdded in version 3.11.\n- async loop.sock_sendall(sock, data)\u00b6\nSend data to the sock socket. Asynchronous version of\nsocket.sendall()\n.This method continues to send to the socket until either all data in data has been sent or an error occurs.\nNone\nis returned on success. On error, an exception is raised. Additionally, there is no way to determine how much data, if any, was successfully processed by the receiving end of the connection.sock must be a non-blocking socket.\nChanged in version 3.7: Even though the method was always documented as a coroutine method, before Python 3.7 it returned a\nFuture\n. Since Python 3.7, this is anasync def\nmethod.\n- async loop.sock_sendto(sock, data, address)\u00b6\nSend a datagram from sock to address. Asynchronous version of\nsocket.sendto()\n.Return the number of bytes sent.\nsock must be a non-blocking socket.\nAdded in version 3.11.\n- async loop.sock_connect(sock, address)\u00b6\nConnect sock to a remote socket at address.\nAsynchronous version of\nsocket.connect()\n.sock must be a non-blocking socket.\nChanged in version 3.5.2:\naddress\nno longer needs to be resolved.sock_connect\nwill try to check if the address is already resolved by callingsocket.inet_pton()\n. If not,loop.getaddrinfo()\nwill be used to resolve the address.See also\n- async loop.sock_accept(sock)\u00b6\nAccept a connection. Modeled after the blocking\nsocket.accept()\nmethod.The socket must be bound to an address and listening for connections. 
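The sock_* methods above compose naturally. A minimal self-contained sketch (loopback only; all names are illustrative) that connects, accepts, sends, and receives with non-blocking sockets:

```python
import asyncio
import socket

async def main():
    loop = asyncio.get_running_loop()

    # Listening socket; port 0 asks the OS for a free port.
    srv = socket.socket()
    srv.bind(('127.0.0.1', 0))
    srv.listen()
    srv.setblocking(False)
    addr = srv.getsockname()

    # All sockets passed to the sock_* methods must be non-blocking.
    cli = socket.socket()
    cli.setblocking(False)
    await loop.sock_connect(cli, addr)
    conn, _ = await loop.sock_accept(srv)

    await loop.sock_sendall(cli, b'ping')
    data = await loop.sock_recv(conn, 1024)

    for s in (conn, cli, srv):
        s.close()
    return data

print(asyncio.run(main()))  # b'ping'
```

Note that `sock_connect` completes at the TCP level once the connection is queued on the listening socket, so it can be awaited before `sock_accept`.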
The return value is a pair\n(conn, address)\nwhere conn is a new socket object usable to send and receive data on the connection, and address is the address bound to the socket on the other end of the connection.sock must be a non-blocking socket.\nChanged in version 3.7: Even though the method was always documented as a coroutine method, before Python 3.7 it returned a\nFuture\n. Since Python 3.7, this is anasync def\nmethod.See also\n- async loop.sock_sendfile(sock, file, offset=0, count=None, *, fallback=True)\u00b6\nSend a file using high-performance\nos.sendfile\nif possible. Return the total number of bytes sent.Asynchronous version of\nsocket.sendfile()\n.sock must be a non-blocking\nsocket.SOCK_STREAM\nsocket\n.file must be a regular file object open in binary mode.\noffset tells from where to start reading the file. If specified, count is the total number of bytes to transmit as opposed to sending the file until EOF is reached. File position is always updated, even when this method raises an error, and\nfile.tell()\ncan be used to obtain the actual number of bytes sent.fallback, when set to\nTrue\n, makes asyncio manually read and send the file when the platform does not support the sendfile syscall (e.g. Windows or SSL socket on Unix).Raise\nSendfileNotAvailableError\nif the system does not support sendfile syscall and fallback isFalse\n.sock must be a non-blocking socket.\nAdded in version 3.7.\nDNS\u00b6\n- async loop.getaddrinfo(host, port, *, family=0, type=0, proto=0, flags=0)\u00b6\nAsynchronous version of\nsocket.getaddrinfo()\n.\n- async loop.getnameinfo(sockaddr, flags=0)\u00b6\nAsynchronous version of\nsocket.getnameinfo()\n.\nNote\nBoth getaddrinfo and getnameinfo internally utilize their synchronous versions through the loop\u2019s default thread pool executor. When this executor is saturated, these methods may experience delays, which higher-level networking libraries may report as increased timeouts. 
To mitigate this, consider using a custom executor for other user tasks, or setting a default executor with a larger number of workers.\nChanged in version 3.7: Both getaddrinfo and getnameinfo methods were always documented\nto return a coroutine, but prior to Python 3.7 they were, in fact,\nreturning asyncio.Future\nobjects. Starting with Python 3.7\nboth methods are coroutines.\nWorking with pipes\u00b6\n- async loop.connect_read_pipe(protocol_factory, pipe)\u00b6\nRegister the read end of pipe in the event loop.\nprotocol_factory must be a callable returning an asyncio protocol implementation.\npipe is a file-like object.\nReturn pair\n(transport, protocol)\n, where transport supports theReadTransport\ninterface and protocol is an object instantiated by the protocol_factory.With\nSelectorEventLoop\nevent loop, the pipe is set to non-blocking mode.\n- async loop.connect_write_pipe(protocol_factory, pipe)\u00b6\nRegister the write end of pipe in the event loop.\nprotocol_factory must be a callable returning an asyncio protocol implementation.\npipe is file-like object.\nReturn pair\n(transport, protocol)\n, where transport supportsWriteTransport\ninterface and protocol is an object instantiated by the protocol_factory.With\nSelectorEventLoop\nevent loop, the pipe is set to non-blocking mode.\nNote\nSelectorEventLoop\ndoes not support the above methods on\nWindows. Use ProactorEventLoop\ninstead for Windows.\nSee also\nThe loop.subprocess_exec()\nand\nloop.subprocess_shell()\nmethods.\nUnix signals\u00b6\n- loop.add_signal_handler(signum, callback, *args)\u00b6\nSet callback as the handler for the signum signal, passing args as positional arguments.\nThe callback will be invoked by loop, along with other queued callbacks and runnable coroutines of that event loop. 
Unlike signal handlers registered using\nsignal.signal()\n, a callback registered with this function is allowed to interact with the event loop.\nRaise\nValueError\nif the signal number is invalid or uncatchable. Raise\nRuntimeError\nif there is a problem setting up the handler.\nUse\nfunctools.partial()\nto pass keyword arguments to callback.\nLike\nsignal.signal()\n, this function must be invoked in the main thread.\n- loop.remove_signal_handler(sig)\u00b6\nRemove the handler for the sig signal.\nReturn\nTrue\nif the signal handler was removed, or\nFalse\nif no handler was set for the given signal.\nAvailability: Unix.\nSee also\nThe signal\nmodule.\nExecuting code in thread or process pools\u00b6\n- awaitable loop.run_in_executor(executor, func, *args)\u00b6\nArrange for func to be called in the specified executor, passing args as positional arguments.\nThe executor argument should be a\nconcurrent.futures.Executor\ninstance. The default executor is used if executor is\nNone\n. The default executor can be set by\nloop.set_default_executor()\n; otherwise, a\nconcurrent.futures.ThreadPoolExecutor\nwill be lazily initialized and used by\nrun_in_executor()\nif needed.\nExample:\nimport asyncio\nimport concurrent.futures\n\ndef blocking_io():\n    # File operations (such as logging) can block the\n    # event loop: run them in a thread pool.\n    with open('/dev/urandom', 'rb') as f:\n        return f.read(100)\n\ndef cpu_bound():\n    # CPU-bound operations will block the event loop:\n    # in general it is preferable to run them in a\n    # process pool.\n    return sum(i * i for i in range(10 ** 7))\n\nasync def main():\n    loop = asyncio.get_running_loop()\n\n    ## Options:\n\n    # 1. Run in the default loop's executor:\n    result = await loop.run_in_executor(\n        None, blocking_io)\n    print('default thread pool', result)\n\n    # 2. Run in a custom thread pool:\n    with concurrent.futures.ThreadPoolExecutor() as pool:\n        result = await loop.run_in_executor(\n            pool, blocking_io)\n        print('custom thread pool', result)\n\n    # 3. Run in a custom process pool:\n    with concurrent.futures.ProcessPoolExecutor() as pool:\n        result = await loop.run_in_executor(\n            pool, cpu_bound)\n        print('custom process pool', result)\n\n    # 4. Run in a custom interpreter pool:\n    with concurrent.futures.InterpreterPoolExecutor() as pool:\n        result = await loop.run_in_executor(\n            pool, cpu_bound)\n        print('custom interpreter pool', result)\n\nif __name__ == '__main__':\n    asyncio.run(main())\nNote that the entry point guard (\nif __name__ == '__main__'\n) is required for option 3 due to the peculiarities of\nmultiprocessing\n, which is used by\nProcessPoolExecutor\n. See Safe importing of main module.\nThis method returns an\nasyncio.Future\nobject.\nUse\nfunctools.partial()\nto pass keyword arguments to func.\nChanged in version 3.5.3:\nloop.run_in_executor()\nno longer configures the\nmax_workers\nof the thread pool executor it creates, instead leaving it up to the thread pool executor (\nThreadPoolExecutor\n) to set the default.\n- loop.set_default_executor(executor)\u00b6\nSet executor as the default executor used by\nrun_in_executor()\n. executor must be an instance of\nThreadPoolExecutor\n, which includes\nInterpreterPoolExecutor\n.\nChanged in version 3.11: executor must be an instance of\nThreadPoolExecutor\n.\nError Handling API\u00b6\nAllows customizing how exceptions are handled in the event loop.\n- loop.set_exception_handler(handler)\u00b6\nSet handler as the new event loop exception handler.\nIf handler is\nNone\n, the default exception handler will be set.
Otherwise, handler must be a callable with the signature matching(loop, context)\n, whereloop\nis a reference to the active event loop, andcontext\nis adict\nobject containing the details of the exception (seecall_exception_handler()\ndocumentation for details about context).If the handler is called on behalf of a\nTask\norHandle\n, it is run in thecontextvars.Context\nof that task or callback handle.Changed in version 3.12: The handler may be called in the\nContext\nof the task or handle where the exception originated.\n- loop.get_exception_handler()\u00b6\nReturn the current exception handler, or\nNone\nif no custom exception handler was set.Added in version 3.5.2.\n- loop.default_exception_handler(context)\u00b6\nDefault exception handler.\nThis is called when an exception occurs and no exception handler is set. This can be called by a custom exception handler that wants to defer to the default handler behavior.\ncontext parameter has the same meaning as in\ncall_exception_handler()\n.\n- loop.call_exception_handler(context)\u00b6\nCall the current event loop exception handler.\ncontext is a\ndict\nobject containing the following keys (new keys may be introduced in future Python versions):\u2018message\u2019: Error message;\n\u2018exception\u2019 (optional): Exception object;\n\u2018future\u2019 (optional):\nasyncio.Future\ninstance;\u2018task\u2019 (optional):\nasyncio.Task\ninstance;\u2018handle\u2019 (optional):\nasyncio.Handle\ninstance;\u2018protocol\u2019 (optional): Protocol instance;\n\u2018transport\u2019 (optional): Transport instance;\n\u2018socket\u2019 (optional):\nsocket.socket\ninstance;\u2018source_traceback\u2019 (optional): Traceback of the source;\n\u2018handle_traceback\u2019 (optional): Traceback of the handle;\n- \u2018asyncgen\u2019 (optional): Asynchronous generator that caused\nthe exception.\nNote\nThis method should not be overloaded in subclassed event loops. 
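To illustrate the `(loop, context)` calling convention, a sketch of a custom handler installed with `set_exception_handler()` (the handler name is made up; `call_exception_handler()` is invoked directly here only to exercise it deterministically):

```python
import asyncio

def log_exceptions(loop, context):
    # context is a dict; 'message' is always present, and
    # 'exception' is present when an exception object is attached.
    exc = context.get('exception')
    print('caught:', context['message'], repr(exc))

async def main():
    loop = asyncio.get_running_loop()
    loop.set_exception_handler(log_exceptions)
    # Invoke the current handler directly to show the convention.
    loop.call_exception_handler({
        'message': 'something went wrong',
        'exception': RuntimeError('boom'),
    })

asyncio.run(main())
```

In real applications the loop calls the handler itself, e.g. for a task whose exception is never retrieved.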
For custom exception handling, use the\nset_exception_handler()\nmethod.\nEnabling debug mode\u00b6\n- loop.get_debug()\u00b6\nGet the debug mode (\nbool\n) of the event loop.\nThe default value is\nTrue\nif the environment variable\nPYTHONASYNCIODEBUG\nis set to a non-empty string,\nFalse\notherwise.\n- loop.set_debug(enabled: bool)\u00b6\nSet the debug mode of the event loop.\nChanged in version 3.7: The new Python Development Mode can now also be used to enable the debug mode.\n- loop.slow_callback_duration\u00b6\nThis attribute can be used to set the minimum execution duration in seconds that is considered \u201cslow\u201d. When debug mode is enabled, \u201cslow\u201d callbacks are logged.\nDefault value is 100 milliseconds.\nSee also\nRunning Subprocesses\u00b6\nMethods described in this subsection are low-level. In regular\nasync/await code consider using the high-level\nasyncio.create_subprocess_shell()\nand\nasyncio.create_subprocess_exec()\nconvenience functions instead.\nNote\nOn Windows, the default event loop ProactorEventLoop\nsupports\nsubprocesses, whereas SelectorEventLoop\ndoes not. See\nSubprocess Support on Windows for\ndetails.\n- async loop.subprocess_exec(protocol_factory, *args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, **kwargs)\u00b6\nCreate a subprocess from one or more string arguments specified by args.\nargs must be a list of strings represented by:\nstr\n;\nor\nbytes\n, encoded to the filesystem encoding.\nThe first string specifies the program executable, and the remaining strings specify the arguments.
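A sketch of the calling convention (the CollectProtocol class is illustrative, not part of the API): a SubprocessProtocol subclass collects the child's stdout and a future signals process exit.

```python
import asyncio
import sys

class CollectProtocol(asyncio.SubprocessProtocol):
    """Hypothetical protocol collecting stdout until the child exits."""
    def __init__(self, exited):
        self.exited = exited
        self.output = bytearray()

    def pipe_data_received(self, fd, data):
        if fd == 1:  # stdout
            self.output.extend(data)

    def process_exited(self):
        self.exited.set_result(True)

async def main():
    loop = asyncio.get_running_loop()
    exited = loop.create_future()
    transport, protocol = await loop.subprocess_exec(
        lambda: CollectProtocol(exited),
        sys.executable, '-c', 'print("hello")')
    await exited
    transport.close()
    return bytes(protocol.output)

print(asyncio.run(main()))
```

The first argument after protocol_factory is the executable; the remaining strings become its argv, mirroring subprocess.Popen with shell=False.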
Together, string arguments form the\nargv\nof the program.This is similar to the standard library\nsubprocess.Popen\nclass called withshell=False\nand the list of strings passed as the first argument; however, wherePopen\ntakes a single argument which is list of strings, subprocess_exec takes multiple string arguments.The protocol_factory must be a callable returning a subclass of the\nasyncio.SubprocessProtocol\nclass.Other parameters:\nstdin can be any of these:\na file-like object\nan existing file descriptor (a positive integer), for example those created with\nos.pipe()\nthe\nsubprocess.PIPE\nconstant (default) which will create a new pipe and connect it,the value\nNone\nwhich will make the subprocess inherit the file descriptor from this processthe\nsubprocess.DEVNULL\nconstant which indicates that the specialos.devnull\nfile will be used\nstdout can be any of these:\na file-like object\nthe\nsubprocess.PIPE\nconstant (default) which will create a new pipe and connect it,the value\nNone\nwhich will make the subprocess inherit the file descriptor from this processthe\nsubprocess.DEVNULL\nconstant which indicates that the specialos.devnull\nfile will be used\nstderr can be any of these:\na file-like object\nthe\nsubprocess.PIPE\nconstant (default) which will create a new pipe and connect it,the value\nNone\nwhich will make the subprocess inherit the file descriptor from this processthe\nsubprocess.DEVNULL\nconstant which indicates that the specialos.devnull\nfile will be usedthe\nsubprocess.STDOUT\nconstant which will connect the standard error stream to the process\u2019 standard output stream\nAll other keyword arguments are passed to\nsubprocess.Popen\nwithout interpretation, except for bufsize, universal_newlines, shell, text, encoding and errors, which should not be specified at all.The\nasyncio\nsubprocess API does not support decoding the streams as text.bytes.decode()\ncan be used to convert the bytes returned from the stream to text.\nIf a file-like 
object passed as stdin, stdout or stderr represents a pipe, then the other side of this pipe should be registered with\nconnect_write_pipe()\norconnect_read_pipe()\nfor use with the event loop.See the constructor of the\nsubprocess.Popen\nclass for documentation on other arguments.Returns a pair of\n(transport, protocol)\n, where transport conforms to theasyncio.SubprocessTransport\nbase class and protocol is an object instantiated by the protocol_factory.If the transport is closed or is garbage collected, the child process is killed if it is still running.\n- async loop.subprocess_shell(protocol_factory, cmd, *, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, **kwargs)\u00b6\nCreate a subprocess from cmd, which can be a\nstr\nor abytes\nstring encoded to the filesystem encoding, using the platform\u2019s \u201cshell\u201d syntax.This is similar to the standard library\nsubprocess.Popen\nclass called withshell=True\n.The protocol_factory must be a callable returning a subclass of the\nSubprocessProtocol\nclass.See\nsubprocess_exec()\nfor more details about the remaining arguments.Returns a pair of\n(transport, protocol)\n, where transport conforms to theSubprocessTransport\nbase class and protocol is an object instantiated by the protocol_factory.If the transport is closed or is garbage collected, the child process is killed if it is still running.\nNote\nIt is the application\u2019s responsibility to ensure that all whitespace\nand special characters are quoted appropriately to avoid shell injection\nvulnerabilities. 
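For instance, an untrusted value must be escaped before it is interpolated into a shell command string; one standard-library way is shlex.quote() (the variable names here are illustrative):

```python
import shlex

untrusted = "foo; rm -rf /"

# shlex.quote() wraps the value so the shell treats it as a single
# literal argument rather than as executable shell syntax.
cmd = 'ls -l %s' % shlex.quote(untrusted)
print(cmd)  # ls -l 'foo; rm -rf /'
```

Without the quoting step, the `;` would terminate the `ls` command and the remainder would run as a second command.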
The shlex.quote()\nfunction can be used to\nproperly escape whitespace and special characters in strings that\nare going to be used to construct shell commands.\nCallback Handles\u00b6\n- class asyncio.Handle\u00b6\nA callback wrapper object returned by\nloop.call_soon()\n,loop.call_soon_threadsafe()\n.- get_context()\u00b6\nReturn the\ncontextvars.Context\nobject associated with the handle.Added in version 3.12.\n- cancel()\u00b6\nCancel the callback. If the callback has already been canceled or executed, this method has no effect.\n- cancelled()\u00b6\nReturn\nTrue\nif the callback was cancelled.Added in version 3.7.\n- class asyncio.TimerHandle\u00b6\nA callback wrapper object returned by\nloop.call_later()\n, andloop.call_at()\n.This class is a subclass of\nHandle\n.- when()\u00b6\nReturn a scheduled callback time as\nfloat\nseconds.The time is an absolute timestamp, using the same time reference as\nloop.time()\n.Added in version 3.7.\nServer Objects\u00b6\nServer objects are created by loop.create_server()\n,\nloop.create_unix_server()\n, start_server()\n,\nand start_unix_server()\nfunctions.\nDo not instantiate the Server\nclass directly.\n- class asyncio.Server\u00b6\nServer objects are asynchronous context managers. When used in an\nasync with\nstatement, it\u2019s guaranteed that the Server object is closed and not accepting new connections when theasync with\nstatement is completed:srv = await loop.create_server(...) 
async with srv: # some code # At this point, srv is closed and no longer accepts new connections.\nChanged in version 3.7: Server object is an asynchronous context manager since Python 3.7.\nChanged in version 3.11: This class was exposed publicly as\nasyncio.Server\nin Python 3.9.11, 3.10.3 and 3.11.- close()\u00b6\nStop serving: close listening sockets and set the\nsockets\nattribute toNone\n.The sockets that represent existing incoming client connections are left open.\nThe server is closed asynchronously; use the\nwait_closed()\ncoroutine to wait until the server is closed (and no more connections are active).\n- close_clients()\u00b6\nClose all existing incoming client connections.\nCalls\nclose()\non all associated transports.close()\nshould be called beforeclose_clients()\nwhen closing the server to avoid races with new clients connecting.Added in version 3.13.\n- abort_clients()\u00b6\nClose all existing incoming client connections immediately, without waiting for pending operations to complete.\nCalls\nabort()\non all associated transports.close()\nshould be called beforeabort_clients()\nwhen closing the server to avoid races with new clients connecting.Added in version 3.13.\n- get_loop()\u00b6\nReturn the event loop associated with the server object.\nAdded in version 3.7.\n- async start_serving()\u00b6\nStart accepting connections.\nThis method is idempotent, so it can be called when the server is already serving.\nThe start_serving keyword-only parameter to\nloop.create_server()\nandasyncio.start_server()\nallows creating a Server object that is not accepting connections initially. In this caseServer.start_serving()\n, orServer.serve_forever()\ncan be used to make the Server start accepting connections.Added in version 3.7.\n- async serve_forever()\u00b6\nStart accepting connections until the coroutine is cancelled. 
Cancellation of the\nserve_forever\ntask causes the server to be closed.\nThis method can be called if the server is already accepting connections. Only one\nserve_forever\ntask can exist per Server object.\nExample:\nasync def client_connected(reader, writer):\n    # Communicate with the client with\n    # reader/writer streams. For example:\n    await reader.readline()\n\nasync def main(host, port):\n    srv = await asyncio.start_server(\n        client_connected, host, port)\n    await srv.serve_forever()\n\nasyncio.run(main('127.0.0.1', 0))\nAdded in version 3.7.\n- is_serving()\u00b6\nReturn\nTrue\nif the server is accepting new connections.\nAdded in version 3.7.\n- async wait_closed()\u00b6\nWait until the\nclose()\nmethod completes and all active connections have finished.\n- sockets\u00b6\nList of socket-like objects,\nasyncio.trsock.TransportSocket\n, which the server is listening on.\nChanged in version 3.7: Prior to Python 3.7,\nServer.sockets\nused to return an internal list of server sockets directly. In 3.7 a copy of that list is returned.\nEvent Loop Implementations\u00b6\nasyncio ships with two different event loop implementations:\nSelectorEventLoop\nand\nProactorEventLoop\n.\nBy default asyncio is configured to use\nEventLoop\n.\n- class asyncio.SelectorEventLoop\u00b6\nA subclass of\nAbstractEventLoop\nbased on the\nselectors\nmodule.\nUses the most efficient selector available for the given platform. It is also possible to manually configure the exact selector implementation to be used:\nimport asyncio\nimport selectors\n\nasync def main():\n    ...\n\nloop_factory = lambda: asyncio.SelectorEventLoop(selectors.SelectSelector())\nasyncio.run(main(), loop_factory=loop_factory)\nAvailability: Unix, Windows.\n- class asyncio.ProactorEventLoop\u00b6\nA subclass of\nAbstractEventLoop\nfor Windows that uses \u201cI/O Completion Ports\u201d (IOCP).\nAvailability: Windows.\n- class asyncio.EventLoop\u00b6\nAn alias to the most efficient available subclass of\nAbstractEventLoop\nfor the given platform.\nIt is an alias to\nSelectorEventLoop\non Unix and\nProactorEventLoop\non Windows.\nAdded in version 3.13.\n- class asyncio.AbstractEventLoop\u00b6\nAbstract base class for asyncio-compliant event loops.\nThe Event Loop Methods section lists all methods that an alternative implementation of\nAbstractEventLoop\nshould have defined.\nExamples\u00b6\nNote that all examples in this section purposefully show how\nto use the low-level event loop APIs, such as loop.run_forever()\nand loop.call_soon()\n. Modern asyncio applications rarely\nneed to be written this way; consider using the high-level functions\nlike asyncio.run()\n.\nHello World with call_soon()\u00b6\nAn example using the loop.call_soon()\nmethod to schedule a\ncallback. The callback displays \"Hello World\"\nand then stops the\nevent loop:\nimport asyncio\n\ndef hello_world(loop):\n    \"\"\"A callback to print 'Hello World' and stop the event loop\"\"\"\n    print('Hello World')\n    loop.stop()\n\nloop = asyncio.new_event_loop()\n\n# Schedule a call to hello_world()\nloop.call_soon(hello_world, loop)\n\n# Blocking call interrupted by loop.stop()\ntry:\n    loop.run_forever()\nfinally:\n    loop.close()\nSee also\nA similar Hello World\nexample created with a coroutine and the run()\nfunction.\nDisplay the current date with call_later()\u00b6\nAn example of a callback displaying the current date every second.
The\ncallback uses the loop.call_later()\nmethod to reschedule itself\nafter 5 seconds, and then stops the event loop:\nimport asyncio\nimport datetime\n\ndef display_date(end_time, loop):\n    print(datetime.datetime.now())\n    if (loop.time() + 1.0) < end_time:\n        loop.call_later(1, display_date, end_time, loop)\n    else:\n        loop.stop()\n\nloop = asyncio.new_event_loop()\n\n# Schedule the first call to display_date()\nend_time = loop.time() + 5.0\nloop.call_soon(display_date, end_time, loop)\n\n# Blocking call interrupted by loop.stop()\ntry:\n    loop.run_forever()\nfinally:\n    loop.close()\nSee also\nA similar current date example\ncreated with a coroutine and the run()\nfunction.\nWatch a file descriptor for read events\u00b6\nWait until a file descriptor has received some data using the\nloop.add_reader()\nmethod and then close the event loop:\nimport asyncio\nfrom socket import socketpair\n\n# Create a pair of connected file descriptors\nrsock, wsock = socketpair()\n\nloop = asyncio.new_event_loop()\n\ndef reader():\n    data = rsock.recv(100)\n    print(\"Received:\", data.decode())\n\n    # We are done: unregister the file descriptor\n    loop.remove_reader(rsock)\n\n    # Stop the event loop\n    loop.stop()\n\n# Register the file descriptor for read event\nloop.add_reader(rsock, reader)\n\n# Simulate the reception of data from the network\nloop.call_soon(wsock.send, 'abc'.encode())\ntry:\n    # Run the event loop\n    loop.run_forever()\nfinally:\n    # We are done.
Close sockets and the event loop.\nrsock.close()\nwsock.close()\nloop.close()\nSee also\nA similar example using transports, protocols, and the\nloop.create_connection()\nmethod.Another similar example using the high-level\nasyncio.open_connection()\nfunction and streams.\nSet signal handlers for SIGINT and SIGTERM\u00b6\n(This signals\nexample only works on Unix.)\nRegister handlers for signals SIGINT\nand SIGTERM\nusing the loop.add_signal_handler()\nmethod:\nimport asyncio\nimport functools\nimport os\nimport signal\ndef ask_exit(signame, loop):\nprint(\"got signal %s: exit\" % signame)\nloop.stop()\nasync def main():\nloop = asyncio.get_running_loop()\nfor signame in {'SIGINT', 'SIGTERM'}:\nloop.add_signal_handler(\ngetattr(signal, signame),\nfunctools.partial(ask_exit, signame, loop))\nawait asyncio.sleep(3600)\nprint(\"Event loop running for 1 hour, press Ctrl+C to interrupt.\")\nprint(f\"pid {os.getpid()}: send SIGINT or SIGTERM to exit.\")\nasyncio.run(main())", "code_snippets": ["\n ", "\n", "\n ", "\n ", "\n", "\n", "\n ", " ", " ", "\n", "\n", "\n\n", "\n ", "\n ", "\n ", " ", " ", " ", " ", "\n ", " ", "\n\n", "\n ", "\n ", "\n ", "\n ", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n\n", " ", "\n ", " ", " ", "\n\n ", "\n\n ", "\n ", " ", " ", " ", "\n ", " ", "\n ", " ", "\n\n ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", "\n ", " ", "\n\n ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", "\n ", " ", "\n\n ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", "\n ", " ", "\n\n", " ", " ", " ", "\n ", "\n", " ", " ", " ", "\n\n", " ", " ", "\n ", "\n\n", "\n", " ", " ", "\n ", "\n ", "\n ", " ", "\n\n", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", "\n ", " ", "\n\n", " ", "\n", "\n", "\n\n", " ", "\n ", "\n\n", " ", " ", " ", "\n", " ", "\n", "\n\n", "\n", "\n ", "\n ", "\n\n", " ", " ", "\n\n", "\n", " ", "\n\n", "\n", "\n ", "\n", "\n ", "\n", "\n", "\n\n", " ", "\n ", "\n ", " ", " ", " ", " ", " ", "\n ", " ", " ", " 
", "\n ", "\n ", "\n\n", " ", " ", "\n\n", "\n", " ", " ", " ", " ", "\n", " ", " ", "\n\n", "\n", "\n ", "\n", "\n ", "\n", "\n", " ", "\n\n", "\n", " ", " ", " ", "\n\n", " ", " ", "\n\n", "\n ", " ", " ", "\n ", " ", "\n\n ", "\n ", "\n\n ", "\n ", "\n\n", "\n", " ", "\n\n", "\n", " ", "\n\n", "\n ", "\n ", "\n", "\n ", "\n ", "\n ", "\n ", "\n", "\n", "\n", "\n", "\n\n", " ", "\n ", " ", " ", "\n ", "\n\n", " ", "\n ", " ", " ", "\n\n ", " ", " ", " ", " ", "\n ", "\n ", " ", "\n ", " ", " ", "\n\n ", " ", "\n\n", "\n", "\n\n", "\n"], "language": "Python", "source": "python.org", "token_count": 13964} +{"url": "https://docs.python.org/3/reference/introduction.html", "title": "Introduction", "content": "1. Introduction\u00b6\nThis reference manual describes the Python programming language. It is not intended as a tutorial.\nWhile I am trying to be as precise as possible, I chose to use English rather than formal specifications for everything except syntax and lexical analysis. This should make the document more understandable to the average reader, but will leave room for ambiguities. Consequently, if you were coming from Mars and tried to re-implement Python from this document alone, you might have to guess things and in fact you would probably end up implementing quite a different language. On the other hand, if you are using Python and wonder what the precise rules about a particular area of the language are, you should definitely be able to find them here. If you would like to see a more formal definition of the language, maybe you could volunteer your time \u2014 or invent a cloning machine :-).\nIt is dangerous to add too many implementation details to a language reference document \u2014 the implementation may change, and other implementations of the same language may work differently. 
On the other hand, CPython is the one Python implementation in widespread use (although alternate implementations continue to gain support), and its particular quirks are sometimes worth being mentioned, especially where the implementation imposes additional limitations. Therefore, you\u2019ll find short \u201cimplementation notes\u201d sprinkled throughout the text.\nEvery Python implementation comes with a number of built-in and standard modules. These are documented in The Python Standard Library. A few built-in modules are mentioned when they interact in a significant way with the language definition.\n1.1. Alternate Implementations\u00b6\nThough there is one Python implementation which is by far the most popular, there are some alternate implementations which are of particular interest to different audiences.\nKnown implementations include:\n- CPython\nThis is the original and most-maintained implementation of Python, written in C. New language features generally appear here first.\n- Jython\nPython implemented in Java. This implementation can be used as a scripting language for Java applications, or can be used to create applications using the Java class libraries. It is also often used to create tests for Java libraries. More information can be found at the Jython website.\n- Python for .NET\nThis implementation actually uses the CPython implementation, but is a managed .NET application and makes .NET libraries available. It was created by Brian Lloyd. For more information, see the Python for .NET home page.\n- IronPython\nAn alternate Python for .NET. Unlike Python.NET, this is a complete Python implementation that generates IL, and compiles Python code directly to .NET assemblies. It was created by Jim Hugunin, the original creator of Jython. For more information, see the IronPython website.\n- PyPy\nAn implementation of Python written completely in Python. 
It supports several advanced features not found in other implementations like stackless support and a Just in Time compiler. One of the goals of the project is to encourage experimentation with the language itself by making it easier to modify the interpreter (since it is written in Python). Additional information is available on the PyPy project\u2019s home page.\nEach of these implementations varies in some way from the language as documented in this manual, or introduces specific information beyond what\u2019s covered in the standard Python documentation. Please refer to the implementation-specific documentation to determine what else you need to know about the specific implementation you\u2019re using.\n1.2. Notation\u00b6\nThe descriptions of lexical analysis and syntax use a grammar notation that is a mixture of EBNF and PEG. For example:\nname:letter\n(letter\n|digit\n| \"_\")* letter: \"a\"...\"z\" | \"A\"...\"Z\" digit: \"0\"...\"9\"\nIn this example, the first line says that a name\nis a letter\nfollowed\nby a sequence of zero or more letter\ns, digit\ns, and underscores.\nA letter\nin turn is any of the single characters 'a'\nthrough\n'z'\nand A\nthrough Z\n; a digit\nis a single character from 0\nto 9\n.\nEach rule begins with a name (which identifies the rule that\u2019s being defined)\nfollowed by a colon, :\n.\nThe definition to the right of the colon uses the following syntax elements:\nname\n: A name refers to another rule. Where possible, it is a link to the rule\u2019s definition.TOKEN\n: An uppercase name refers to a token. For the purposes of grammar definitions, tokens are the same as rules.\n\"text\"\n,'text'\n: Text in single or double quotes must match literally (without the quotes). 
The type of quote is chosen according to the meaning oftext\n:'if'\n: A name in single quotes denotes a keyword.\"case\"\n: A name in double quotes denotes a soft-keyword.'@'\n: A non-letter symbol in single quotes denotes anOP\ntoken, that is, a delimiter or operator.\ne1 e2\n: Items separated only by whitespace denote a sequence. Here,e1\nmust be followed bye2\n.e1 | e2\n: A vertical bar is used to separate alternatives. It denotes PEG\u2019s \u201cordered choice\u201d: ife1\nmatches,e2\nis not considered. In traditional PEG grammars, this is written as a slash,/\n, rather than a vertical bar. See PEP 617 for more background and details.e*\n: A star means zero or more repetitions of the preceding item.e+\n: Likewise, a plus means one or more repetitions.[e]\n: A phrase enclosed in square brackets means zero or one occurrences. In other words, the enclosed phrase is optional.e?\n: A question mark has exactly the same meaning as square brackets: the preceding item is optional.(e)\n: Parentheses are used for grouping.\nThe following notation is only used in lexical definitions.\n\"a\"...\"z\"\n: Two literal characters separated by three dots mean a choice of any single character in the given (inclusive) range of ASCII characters.<...>\n: A phrase between angular brackets gives an informal description of the matched symbol (for example,\n), or an abbreviation that is defined in nearby text (for example,\n).\nSome definitions also use lookaheads, which indicate that an element must (or must not) match at a given position, but without consuming any input:\n&e\n: a positive lookahead (that is,e\nis required to match)!e\n: a negative lookahead (that is,e\nis required not to match)\nThe unary operators (*\n, +\n, ?\n) bind as tightly as possible;\nthe vertical bar (|\n) binds most loosely.\nWhite space is only meaningful to separate tokens.\nRules are normally contained on a single line, but rules that are too long may be wrapped:\nliteral: stringliteral | bytesliteral | 
integer | floatnumber | imagnumber\nAlternatively, rules may be formatted with the first line ending at the colon, and each alternative beginning with a vertical bar on a new line. For example:\nliteral: | stringliteral | bytesliteral | integer | floatnumber | imagnumber\nThis does not mean that there is an empty first alternative.\n1.2.1. Lexical and Syntactic definitions\u00b6\nThere is some difference between lexical and syntactic analysis: the lexical analyzer operates on the individual characters of the input source, while the parser (syntactic analyzer) operates on the stream of tokens generated by the lexical analysis. However, in some cases the exact boundary between the two phases is a CPython implementation detail.\nThe practical difference between the two is that in lexical definitions,\nall whitespace is significant.\nThe lexical analyzer discards all whitespace that is not\nconverted to tokens like token.INDENT\nor NEWLINE\n.\nSyntactic definitions then use these tokens, rather than source characters.\nThis documentation uses the same BNF grammar for both styles of definitions. All uses of BNF in the next chapter (Lexical analysis) are lexical definitions; uses in subsequent chapters are syntactic definitions.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1954} +{"url": "https://docs.python.org/3/reference/import.html", "title": "The import system", "content": "5. The import system\u00b6\nPython code in one module gains access to the code in another module\nby the process of importing it. The import\nstatement is\nthe most common way of invoking the import machinery, but it is not the only\nway. Functions such as importlib.import_module()\nand built-in\n__import__()\ncan also be used to invoke the import machinery.\nThe import\nstatement combines two operations; it searches for the\nnamed module, then it binds the results of that search to a name in the local\nscope. 
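The split described above between the search/creation step performed by `__import__()` and the name-binding step performed by the `import` statement can be observed by invoking the machinery directly. This is a minimal sketch using only stdlib modules:

```python
import importlib
import sys

# importlib.import_module() is the recommended programmatic entry point;
# it returns the named (sub)module itself.
json_module = importlib.import_module("json")
assert json_module is sys.modules["json"]

# A direct call to __import__() with a dotted name performs the search and
# module creation (including parent packages), but returns the top-level
# package -- the import statement is what does the local name binding.
pkg = __import__("email.mime.text")
assert pkg is sys.modules["email"]        # top-level package is returned
assert "email.mime.text" in sys.modules   # the submodule was still imported

# import_module(), by contrast, returns the submodule directly.
text_mod = importlib.import_module("email.mime.text")
assert text_mod is sys.modules["email.mime.text"]
```

This is why `importlib.import_module()` is usually more convenient than `__import__()` when the module name is only known at runtime.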
The search operation of the import\nstatement is defined as\na call to the __import__()\nfunction, with the appropriate arguments.\nThe return value of __import__()\nis used to perform the name\nbinding operation of the import\nstatement. See the\nimport\nstatement for the exact details of that name binding\noperation.\nA direct call to __import__()\nperforms only the module search and, if\nfound, the module creation operation. While certain side-effects may occur,\nsuch as the importing of parent packages, and the updating of various caches\n(including sys.modules\n), only the import\nstatement performs\na name binding operation.\nWhen an import\nstatement is executed, the standard builtin\n__import__()\nfunction is called. Other mechanisms for invoking the\nimport system (such as importlib.import_module()\n) may choose to bypass\n__import__()\nand use their own solutions to implement import semantics.\nWhen a module is first imported, Python searches for the module and if found,\nit creates a module object [1], initializing it. If the named module\ncannot be found, a ModuleNotFoundError\nis raised. Python implements various\nstrategies to search for the named module when the import machinery is\ninvoked. These strategies can be modified and extended by using various hooks\ndescribed in the sections below.\nChanged in version 3.3: The import system has been updated to fully implement the second phase\nof PEP 302. There is no longer any implicit import machinery - the full\nimport system is exposed through sys.meta_path\n. In addition,\nnative namespace package support has been implemented (see PEP 420).\n5.1. importlib\n\u00b6\nThe importlib\nmodule provides a rich API for interacting with the\nimport system. For example importlib.import_module()\nprovides a\nrecommended, simpler API than built-in __import__()\nfor invoking the\nimport machinery. Refer to the importlib\nlibrary documentation for\nadditional detail.\n5.2. 
Packages\u00b6\nPython has only one type of module object, and all modules are of this type, regardless of whether the module is implemented in Python, C, or something else. To help organize modules and provide a naming hierarchy, Python has a concept of packages.\nYou can think of packages as the directories on a file system and modules as files within directories, but don\u2019t take this analogy too literally since packages and modules need not originate from the file system. For the purposes of this documentation, we\u2019ll use this convenient analogy of directories and files. Like file system directories, packages are organized hierarchically, and packages may themselves contain subpackages, as well as regular modules.\nIt\u2019s important to keep in mind that all packages are modules, but not all\nmodules are packages. Or put another way, packages are just a special kind of\nmodule. Specifically, any module that contains a __path__\nattribute is\nconsidered a package.\nAll modules have a name. Subpackage names are separated from their parent\npackage name by a dot, akin to Python\u2019s standard attribute access syntax. Thus\nyou might have a package called email\n, which in turn has a subpackage\ncalled email.mime\nand a module within that subpackage called\nemail.mime.text\n.\n5.2.1. Regular packages\u00b6\nPython defines two types of packages, regular packages and namespace packages. Regular\npackages are traditional packages as they existed in Python 3.2 and earlier.\nA regular package is typically implemented as a directory containing an\n__init__.py\nfile. When a regular package is imported, this\n__init__.py\nfile is implicitly executed, and the objects it defines are\nbound to names in the package\u2019s namespace. 
The __init__.py\nfile can\ncontain the same Python code that any other module can contain, and Python\nwill add some additional attributes to the module when it is imported.\nFor example, the following file system layout defines a top level parent\npackage with three subpackages:\nparent/\n__init__.py\none/\n__init__.py\ntwo/\n__init__.py\nthree/\n__init__.py\nImporting parent.one\nwill implicitly execute parent/__init__.py\nand\nparent/one/__init__.py\n. Subsequent imports of parent.two\nor\nparent.three\nwill execute parent/two/__init__.py\nand\nparent/three/__init__.py\nrespectively.\n5.2.2. Namespace packages\u00b6\nA namespace package is a composite of various portions, where each portion contributes a subpackage to the parent package. Portions may reside in different locations on the file system. Portions may also be found in zip files, on the network, or anywhere else that Python searches during import. Namespace packages may or may not correspond directly to objects on the file system; they may be virtual modules that have no concrete representation.\nNamespace packages do not use an ordinary list for their __path__\nattribute. They instead use a custom iterable type which will automatically\nperform a new search for package portions on the next import attempt within\nthat package if the path of their parent package (or sys.path\nfor a\ntop level package) changes.\nWith namespace packages, there is no parent/__init__.py\nfile. In fact,\nthere may be multiple parent\ndirectories found during import search, where\neach one is provided by a different portion. Thus parent/one\nmay not be\nphysically located next to parent/two\n. In this case, Python will create a\nnamespace package for the top-level parent\npackage whenever it or one of\nits subpackages is imported.\nSee also PEP 420 for the namespace package specification.\n5.3. 
Searching\u00b6\nTo begin the search, Python needs the fully qualified\nname of the module (or package, but for the purposes of this discussion, the\ndifference is immaterial) being imported. This name may come from various\narguments to the import\nstatement, or from the parameters to the\nimportlib.import_module()\nor __import__()\nfunctions.\nThis name will be used in various phases of the import search, and it may be\nthe dotted path to a submodule, e.g. foo.bar.baz\n. In this case, Python\nfirst tries to import foo\n, then foo.bar\n, and finally foo.bar.baz\n.\nIf any of the intermediate imports fail, a ModuleNotFoundError\nis raised.\n5.3.1. The module cache\u00b6\nThe first place checked during import search is sys.modules\n. This\nmapping serves as a cache of all modules that have been previously imported,\nincluding the intermediate paths. So if foo.bar.baz\nwas previously\nimported, sys.modules\nwill contain entries for foo\n, foo.bar\n,\nand foo.bar.baz\n. Each key will have as its value the corresponding module\nobject.\nDuring import, the module name is looked up in sys.modules\nand if\npresent, the associated value is the module satisfying the import, and the\nprocess completes. However, if the value is None\n, then a\nModuleNotFoundError\nis raised. If the module name is missing, Python will\ncontinue searching for the module.\nsys.modules\nis writable. Deleting a key may not destroy the\nassociated module (as other modules may hold references to it),\nbut it will invalidate the cache entry for the named module, causing\nPython to search anew for the named module upon its next\nimport. The key can also be assigned to None\n, forcing the next import\nof the module to result in a ModuleNotFoundError\n.\nBeware though, as if you keep a reference to the module object,\ninvalidate its cache entry in sys.modules\n, and then re-import the\nnamed module, the two module objects will not be the same. 
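The cache behaviour described above can be observed directly. The following sketch uses the stdlib `json` module purely as a convenient guinea pig; any importable module would do:

```python
import importlib
import sys

import json
original = sys.modules["json"]

# Deleting the cache entry does not destroy the module (we still hold a
# reference), but it forces a fresh search on the next import.
del sys.modules["json"]
import json  # creates a brand-new module object
assert sys.modules["json"] is not original

# Assigning None to the entry forces the next import to fail.
sys.modules["json"] = None
try:
    importlib.import_module("json")
except ModuleNotFoundError:
    pass
del sys.modules["json"]

# importlib.reload(), by contrast, reuses the existing module object and
# simply re-runs the module's code in its namespace.
import json
same = importlib.reload(json)
assert same is json is sys.modules["json"]
```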
By contrast,\nimportlib.reload()\nwill reuse the same module object, and simply\nreinitialise the module contents by rerunning the module\u2019s code.\n5.3.2. Finders and loaders\u00b6\nIf the named module is not found in sys.modules\n, then Python\u2019s import\nprotocol is invoked to find and load the module. This protocol consists of\ntwo conceptual objects, finders and loaders.\nA finder\u2019s job is to determine whether it can find the named module using\nwhatever strategy it knows about. Objects that implement both of these\ninterfaces are referred to as importers - they return\nthemselves when they find that they can load the requested module.\nPython includes a number of default finders and importers. The first one knows how to locate built-in modules, and the second knows how to locate frozen modules. A third default finder searches an import path for modules. The import path is a list of locations that may name file system paths or zip files. It can also be extended to search for any locatable resource, such as those identified by URLs.\nThe import machinery is extensible, so new finders can be added to extend the range and scope of module searching.\nFinders do not actually load modules. If they can find the named module, they return a module spec, an encapsulation of the module\u2019s import-related information, which the import machinery then uses when loading the module.\nThe following sections describe the protocol for finders and loaders in more detail, including how you can create and register new ones to extend the import machinery.\nChanged in version 3.4: In previous versions of Python, finders returned loaders directly, whereas now they return module specs which contain loaders. Loaders are still used during import but have fewer responsibilities.\n5.3.3. Import hooks\u00b6\nThe import machinery is designed to be extensible; the primary mechanism for this are the import hooks. 
There are two types of import hooks: meta hooks and import path hooks.\nMeta hooks are called at the start of import processing, before any other\nimport processing has occurred, other than sys.modules\ncache look up.\nThis allows meta hooks to override sys.path\nprocessing, frozen\nmodules, or even built-in modules. Meta hooks are registered by adding new\nfinder objects to sys.meta_path\n, as described below.\nImport path hooks are called as part of sys.path\n(or\npackage.__path__\n) processing, at the point where their associated path\nitem is encountered. Import path hooks are registered by adding new callables\nto sys.path_hooks\nas described below.\n5.3.4. The meta path\u00b6\nWhen the named module is not found in sys.modules\n, Python next\nsearches sys.meta_path\n, which contains a list of meta path finder\nobjects. These finders are queried in order to see if they know how to handle\nthe named module. Meta path finders must implement a method called\nfind_spec()\nwhich takes three arguments:\na name, an import path, and (optionally) a target module. The meta path\nfinder can use any strategy it wants to determine whether it can handle\nthe named module or not.\nIf the meta path finder knows how to handle the named module, it returns a\nspec object. If it cannot handle the named module, it returns None\n. If\nsys.meta_path\nprocessing reaches the end of its list without returning\na spec, then a ModuleNotFoundError\nis raised. Any other exceptions\nraised are simply propagated up, aborting the import process.\nThe find_spec()\nmethod of meta path\nfinders is called with two or three arguments. The first is the fully\nqualified name of the module being imported, for example foo.bar.baz\n.\nThe second argument is the path entries to use for the module search. For\ntop-level modules, the second argument is None\n, but for submodules or\nsubpackages, the second argument is the value of the parent package\u2019s\n__path__\nattribute. 
If the appropriate __path__\nattribute cannot\nbe accessed, a ModuleNotFoundError\nis raised. The third argument\nis an existing module object that will be the target of loading later.\nThe import system passes in a target module only during reload.\nThe meta path may be traversed multiple times for a single import request.\nFor example, assuming none of the modules involved has already been cached,\nimporting foo.bar.baz\nwill first perform a top level import, calling\nmpf.find_spec(\"foo\", None, None)\non each meta path finder (mpf\n). After\nfoo\nhas been imported, foo.bar\nwill be imported by traversing the\nmeta path a second time, calling\nmpf.find_spec(\"foo.bar\", foo.__path__, None)\n. Once foo.bar\nhas been\nimported, the final traversal will call\nmpf.find_spec(\"foo.bar.baz\", foo.bar.__path__, None)\n.\nSome meta path finders only support top level imports. These importers will\nalways return None\nwhen anything other than None\nis passed as the\nsecond argument.\nPython\u2019s default sys.meta_path\nhas three meta path finders, one that\nknows how to import built-in modules, one that knows how to import frozen\nmodules, and one that knows how to import modules from an import path\n(i.e. the path based finder).\nChanged in version 3.4: The find_spec()\nmethod of meta path\nfinders replaced find_module()\n, which\nis now deprecated. While it will continue to work without change, the\nimport machinery will try it only if the finder does not implement\nfind_spec()\n.\nChanged in version 3.10: Use of find_module()\nby the import system\nnow raises ImportWarning\n.\nChanged in version 3.12: find_module()\nhas been removed.\nUse find_spec()\ninstead.\n5.4. Loading\u00b6\nIf and when a module spec is found, the import machinery will use it (and the loader it contains) when loading the module. 
Here is an approximation of what happens during the loading portion of import:\nmodule = None\nif spec.loader is not None and hasattr(spec.loader, 'create_module'):\n# It is assumed 'exec_module' will also be defined on the loader.\nmodule = spec.loader.create_module(spec)\nif module is None:\nmodule = ModuleType(spec.name)\n# The import-related module attributes get set here:\n_init_module_attrs(spec, module)\nif spec.loader is None:\n# unsupported\nraise ImportError\nif spec.origin is None and spec.submodule_search_locations is not None:\n# namespace package\nsys.modules[spec.name] = module\nelif not hasattr(spec.loader, 'exec_module'):\nmodule = spec.loader.load_module(spec.name)\nelse:\nsys.modules[spec.name] = module\ntry:\nspec.loader.exec_module(module)\nexcept BaseException:\ntry:\ndel sys.modules[spec.name]\nexcept KeyError:\npass\nraise\nreturn sys.modules[spec.name]\nNote the following details:\nIf there is an existing module object with the given name in\nsys.modules\n, import will have already returned it.The module will exist in\nsys.modules\nbefore the loader executes the module code. This is crucial because the module code may (directly or indirectly) import itself; adding it tosys.modules\nbeforehand prevents unbounded recursion in the worst case and multiple loading in the best.If loading fails, the failing module \u2013 and only the failing module \u2013 gets removed from\nsys.modules\n. Any module already in thesys.modules\ncache, and any module that was successfully loaded as a side-effect, must remain in the cache. This contrasts with reloading where even the failing module is left insys.modules\n.After the module is created but before execution, the import machinery sets the import-related module attributes (\u201c_init_module_attrs\u201d in the pseudo-code example above), as summarized in a later section.\nModule execution is the key moment of loading in which the module\u2019s namespace gets populated. 
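The finder/loader protocol sketched above can be exercised end to end with a tiny importer (a combined finder and loader). The module name `hello_mod` and the in-memory source table are hypothetical, invented for this illustration:

```python
import sys
import importlib.abc
import importlib.util

# Hypothetical in-memory "files" this importer knows how to load.
SOURCES = {"hello_mod": "GREETING = 'hi'\n"}

class InMemoryImporter(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    """A finder and loader in one object -- an 'importer'."""

    def find_spec(self, fullname, path, target=None):
        if fullname in SOURCES:
            # Return a module spec naming ourselves as the loader.
            return importlib.util.spec_from_loader(fullname, self)
        return None  # let the next meta path finder try

    def create_module(self, spec):
        return None  # defer to the import machinery's default module creation

    def exec_module(self, module):
        # Execute the module's code in the module's own global namespace.
        exec(SOURCES[module.__name__], module.__dict__)

sys.meta_path.insert(0, InMemoryImporter())
import hello_mod
assert hello_mod.GREETING == "hi"
assert sys.modules["hello_mod"] is hello_mod
```

Note how the import machinery, not the loader, handles the boilerplate: the module is created, placed in `sys.modules`, and its attributes initialised before `exec_module()` runs.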
Execution is entirely delegated to the loader, which gets to decide what gets populated and how.\nThe module created during loading and passed to exec_module() may not be the one returned at the end of import [2].\nChanged in version 3.4: The import system has taken over the boilerplate responsibilities of\nloaders. These were previously performed by the\nimportlib.abc.Loader.load_module()\nmethod.\n5.4.1. Loaders\u00b6\nModule loaders provide the critical function of loading: module execution.\nThe import machinery calls the importlib.abc.Loader.exec_module()\nmethod with a single argument, the module object to execute. Any value\nreturned from exec_module()\nis ignored.\nLoaders must satisfy the following requirements:\nIf the module is a Python module (as opposed to a built-in module or a dynamically loaded extension), the loader should execute the module\u2019s code in the module\u2019s global name space (\nmodule.__dict__\n).If the loader cannot execute the module, it should raise an\nImportError\n, although any other exception raised duringexec_module()\nwill be propagated.\nIn many cases, the finder and loader can be the same object; in such cases the\nfind_spec()\nmethod would just return a\nspec with the loader set to self\n.\nModule loaders may opt in to creating the module object during loading\nby implementing a create_module()\nmethod.\nIt takes one argument, the module spec, and returns the new module object\nto use during loading. create_module()\ndoes not need to set any attributes\non the module object. 
If the method returns None\n, the\nimport machinery will create the new module itself.\nAdded in version 3.4: The create_module()\nmethod of loaders.\nChanged in version 3.4: The load_module()\nmethod was replaced by\nexec_module()\nand the import\nmachinery assumed all the boilerplate responsibilities of loading.\nFor compatibility with existing loaders, the import machinery will use\nthe load_module()\nmethod of loaders if it exists and the loader does\nnot also implement exec_module()\n. However, load_module()\nhas been\ndeprecated and loaders should implement exec_module()\ninstead.\nThe load_module()\nmethod must implement all the boilerplate loading\nfunctionality described above in addition to executing the module. All\nthe same constraints apply, with some additional clarification:\nIf there is an existing module object with the given name in\nsys.modules\n, the loader must use that existing module. (Otherwise,importlib.reload()\nwill not work correctly.) If the named module does not exist insys.modules\n, the loader must create a new module object and add it tosys.modules\n.The module must exist in\nsys.modules\nbefore the loader executes the module code, to prevent unbounded recursion or multiple loading.If loading fails, the loader must remove any modules it has inserted into\nsys.modules\n, but it must remove only the failing module(s), and only if the loader itself has loaded the module(s) explicitly.\nChanged in version 3.5: A DeprecationWarning\nis raised when exec_module()\nis defined but\ncreate_module()\nis not.\nChanged in version 3.6: An ImportError\nis raised when exec_module()\nis defined but\ncreate_module()\nis not.\nChanged in version 3.10: Use of load_module()\nwill raise ImportWarning\n.\n5.4.2. Submodules\u00b6\nWhen a submodule is loaded using any mechanism (e.g. 
importlib\nAPIs, the\nimport\nor import-from\nstatements, or built-in __import__()\n) a\nbinding is placed in the parent module\u2019s namespace to the submodule object.\nFor example, if package spam\nhas a submodule foo\n, after importing\nspam.foo\n, spam\nwill have an attribute foo\nwhich is bound to the\nsubmodule. Let\u2019s say you have the following directory structure:\nspam/\n__init__.py\nfoo.py\nand spam/__init__.py\nhas the following line in it:\nfrom .foo import Foo\nthen executing the following puts name bindings for foo\nand Foo\nin the\nspam\nmodule:\n>>> import spam\n>>> spam.foo\n\n>>> spam.Foo\n\nGiven Python\u2019s familiar name binding rules this might seem surprising, but\nit\u2019s actually a fundamental feature of the import system. The invariant\nholding is that if you have sys.modules['spam']\nand\nsys.modules['spam.foo']\n(as you would after the above import), the latter\nmust appear as the foo\nattribute of the former.\n5.4.3. Module specs\u00b6\nThe import machinery uses a variety of information about each module during import, especially before loading. Most of the information is common to all modules. The purpose of a module\u2019s spec is to encapsulate this import-related information on a per-module basis.\nUsing a spec during import allows state to be transferred between import system components, e.g. between the finder that creates the module spec and the loader that executes it. Most importantly, it allows the import machinery to perform the boilerplate operations of loading, whereas without a module spec the loader had that responsibility.\nThe module\u2019s spec is exposed as module.__spec__\n. Setting\n__spec__\nappropriately applies equally to\nmodules initialized during interpreter startup.\nThe one exception is __main__\n, where __spec__\nis\nset to None in some cases.\nSee ModuleSpec\nfor details on the contents of\nthe module spec.\nAdded in version 3.4.\n5.4.4. 
__path__ attributes on modules\u00b6\nThe __path__\nattribute should be a (possibly empty)\nsequence of strings enumerating the locations where the package\u2019s\nsubmodules will be found. By definition, if a module has a __path__\nattribute, it is a package.\nA package\u2019s __path__\nattribute is used during imports of its\nsubpackages.\nWithin the import machinery, it functions much the same as sys.path\n,\ni.e. providing a list of locations to search for modules during import.\nHowever, __path__\nis typically much more constrained than\nsys.path\n.\nThe same rules used for sys.path\nalso apply to a package\u2019s\n__path__\n. sys.path_hooks\n(described below) are\nconsulted when traversing a package\u2019s __path__\n.\nA package\u2019s __init__.py\nfile may set or alter the package\u2019s\n__path__\nattribute, and this was typically the way namespace packages were implemented\nprior to PEP 420. With the adoption of PEP 420, namespace packages no\nlonger need to supply __init__.py\nfiles containing only __path__\nmanipulation code; the import machinery automatically sets __path__\ncorrectly for the namespace package.\n5.4.5. Module reprs\u00b6\nBy default, all modules have a usable repr, however depending on the attributes set above, and in the module\u2019s spec, you can more explicitly control the repr of module objects.\nIf the module has a spec (__spec__\n), the import machinery will try\nto generate a repr from it. If that fails or there is no spec, the import\nsystem will craft a default repr using whatever information is available\non the module. It will try to use the module.__name__\n,\nmodule.__file__\n, and module.__loader__\nas input into the repr,\nwith defaults for whatever information is missing.\nHere are the exact rules used:\nIf the module has a\n__spec__\nattribute, the information in the spec is used to generate the repr. 
The \u201cname\u201d, \u201cloader\u201d, \u201corigin\u201d, and \u201chas_location\u201d attributes are consulted.\nIf the module has a\n__file__\nattribute, this is used as part of the module\u2019s repr.\nIf the module has no\n__file__\nbut does have a\n__loader__\nthat is not\nNone\n, then the loader\u2019s repr is used as part of the module\u2019s repr.\nOtherwise, just use the module\u2019s\n__name__\nin the repr.\nChanged in version 3.12: Use of module_repr()\n, having been deprecated since Python 3.4, was\nremoved in Python 3.12 and is no longer called during the resolution of a\nmodule\u2019s repr.\n5.4.6. Cached bytecode invalidation\u00b6\nBefore Python loads cached bytecode from a .pyc\nfile, it checks whether the\ncache is up-to-date with the source .py\nfile. By default, Python does this\nby storing the source\u2019s last-modified timestamp and size in the cache file when\nwriting it. At runtime, the import system then validates the cache file by\nchecking the stored metadata in the cache file against the source\u2019s\nmetadata.\nPython also supports \u201chash-based\u201d cache files, which store a hash of the source\nfile\u2019s contents rather than its metadata. There are two variants of hash-based\n.pyc\nfiles: checked and unchecked. For checked hash-based .pyc\nfiles,\nPython validates the cache file by hashing the source file and comparing the\nresulting hash with the hash in the cache file. If a checked hash-based cache\nfile is found to be invalid, Python regenerates it and writes a new checked\nhash-based cache file. For unchecked hash-based .pyc\nfiles, Python simply\nassumes the cache file is valid if it exists. The validation behavior of\nhash-based .pyc\nfiles may be overridden with the --check-hash-based-pycs\nflag.\nChanged in version 3.7: Added hash-based .pyc\nfiles. Previously, Python only supported\ntimestamp-based invalidation of bytecode caches.\n5.5. 
The Path Based Finder\u00b6\nAs mentioned previously, Python comes with several default meta path finders.\nOne of these, called the path based finder\n(PathFinder\n), searches an import path,\nwhich contains a list of path entries. Each path\nentry names a location to search for modules.\nThe path based finder itself doesn\u2019t know how to import anything. Instead, it traverses the individual path entries, associating each of them with a path entry finder that knows how to handle that particular kind of path.\nThe default set of path entry finders implement all the semantics for finding\nmodules on the file system, handling special file types such as Python source\ncode (.py\nfiles), Python byte code (.pyc\nfiles) and\nshared libraries (e.g. .so\nfiles). When supported by the zipimport\nmodule in the standard library, the default path entry finders also handle\nloading all of these file types (other than shared libraries) from zipfiles.\nPath entries need not be limited to file system locations. They can refer to URLs, database queries, or any other location that can be specified as a string.\nThe path based finder provides additional hooks and protocols so that you can extend and customize the types of searchable path entries. For example, if you wanted to support path entries as network URLs, you could write a hook that implements HTTP semantics to find modules on the web. This hook (a callable) would return a path entry finder supporting the protocol described below, which was then used to get a loader for the module from the web.\nA word of warning: this section and the previous both use the term finder,\ndistinguishing between them by using the terms meta path finder and\npath entry finder. 
These two types of finders are very similar,\nsupport similar protocols, and function in similar ways during the import\nprocess, but it\u2019s important to keep in mind that they are subtly different.\nIn particular, meta path finders operate at the beginning of the import\nprocess, as keyed off the sys.meta_path\ntraversal.\nBy contrast, path entry finders are in a sense an implementation detail\nof the path based finder, and in fact, if the path based finder were to be\nremoved from sys.meta_path\n, none of the path entry finder semantics\nwould be invoked.\n5.5.1. Path entry finders\u00b6\nThe path based finder is responsible for finding and loading Python modules and packages whose location is specified with a string path entry. Most path entries name locations in the file system, but they need not be limited to this.\nAs a meta path finder, the path based finder implements the\nfind_spec()\nprotocol previously\ndescribed, however it exposes additional hooks that can be used to\ncustomize how modules are found and loaded from the import path.\nThree variables are used by the path based finder, sys.path\n,\nsys.path_hooks\nand sys.path_importer_cache\n. The __path__\nattributes on package objects are also used. These provide additional ways\nthat the import machinery can be customized.\nsys.path\ncontains a list of strings providing search locations for\nmodules and packages. It is initialized from the PYTHONPATH\nenvironment variable and various other installation- and\nimplementation-specific defaults. Entries in sys.path\ncan name\ndirectories on the file system, zip files, and potentially other \u201clocations\u201d\n(see the site\nmodule) that should be searched for modules, such as\nURLs, or database queries. 
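The three variables above can be inspected from a running interpreter. A minimal sketch (the cached entries printed will vary by installation):

```python
import sys

# sys.path holds only strings naming search locations.
assert all(isinstance(entry, str) for entry in sys.path)

# sys.path_hooks holds the callables consulted for each path entry.
assert all(callable(hook) for hook in sys.path_hooks)

# sys.path_importer_cache maps path entries to the path entry finders
# already located for them (despite the name, the values are finders).
for entry, finder in list(sys.path_importer_cache.items())[:3]:
    print(entry, '->', finder)
```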
Only strings should be present on\nsys.path\n; all other data types are ignored.\nThe path based finder is a meta path finder, so the import\nmachinery begins the import path search by calling the path\nbased finder\u2019s find_spec()\nmethod as\ndescribed previously. When the path\nargument to\nfind_spec()\nis given, it will be a\nlist of string paths to traverse - typically a package\u2019s __path__\nattribute for an import within that package. If the path\nargument is\nNone\n, this indicates a top level import and sys.path\nis used.\nThe path based finder iterates over every entry in the search path, and\nfor each of these, looks for an appropriate path entry finder\n(PathEntryFinder\n) for the\npath entry. Because this can be an expensive operation (e.g. there may be\nstat()\ncall overheads for this search), the path based finder maintains\na cache mapping path entries to path entry finders. This cache is maintained\nin sys.path_importer_cache\n(despite the name, this cache actually\nstores finder objects rather than being limited to importer objects).\nIn this way, the expensive search for a particular path entry\nlocation\u2019s path entry finder need only be done once. User code is\nfree to remove cache entries from sys.path_importer_cache\nforcing\nthe path based finder to perform the path entry search again.\nIf the path entry is not present in the cache, the path based finder iterates\nover every callable in sys.path_hooks\n. Each of the path entry\nhooks in this list is called with a single argument, the\npath entry to be searched. This callable may either return a path\nentry finder that can handle the path entry, or it may raise\nImportError\n. An ImportError\nis used by the path based finder to\nsignal that the hook cannot find a path entry finder\nfor that path entry. The\nexception is ignored and import path iteration continues. The hook\nshould expect either a string or bytes object; the encoding of bytes objects\nis up to the hook (e.g. 
it may be a file system encoding, UTF-8, or something\nelse), and if the hook cannot decode the argument, it should raise\nImportError\n.\nIf sys.path_hooks\niteration ends with no path entry finder\nbeing returned, then the path based finder\u2019s\nfind_spec()\nmethod will store None\nin sys.path_importer_cache\n(to indicate that there is no finder for\nthis path entry) and return None\n, indicating that this\nmeta path finder could not find the module.\nIf a path entry finder is returned by one of the path entry\nhook callables on sys.path_hooks\n, then the following protocol is used\nto ask the finder for a module spec, which is then used when loading the\nmodule.\nThe current working directory \u2013 denoted by an empty string \u2013 is handled\nslightly differently from other entries on sys.path\n. First, if the\ncurrent working directory cannot be determined or is found not to exist, no\nvalue is stored in sys.path_importer_cache\n. Second, the value for the\ncurrent working directory is looked up fresh for each module lookup. Third,\nthe path used for sys.path_importer_cache\nand returned by\nimportlib.machinery.PathFinder.find_spec()\nwill be the actual current\nworking directory and not the empty string.\n5.5.2. Path entry finder protocol\u00b6\nIn order to support imports of modules and initialized packages and also to\ncontribute portions to namespace packages, path entry finders must implement\nthe find_spec()\nmethod.\nfind_spec()\ntakes two arguments: the\nfully qualified name of the module being imported, and the (optional) target\nmodule. 
find_spec()\nreturns a fully populated spec for the module.\nThis spec will always have \u201cloader\u201d set (with one exception).\nTo indicate to the import machinery that the spec represents a namespace\nportion, the path entry finder sets submodule_search_locations\nto\na list containing the portion.\nChanged in version 3.4: find_spec()\nreplaced\nfind_loader()\nand\nfind_module()\n, both of which\nare now deprecated, but will be used if find_spec()\nis not defined.\nOlder path entry finders may implement one of these two deprecated methods\ninstead of find_spec()\n. The methods are still respected for the\nsake of backward compatibility. However, if find_spec()\nis\nimplemented on the path entry finder, the legacy methods are ignored.\nfind_loader()\ntakes one argument, the\nfully qualified name of the module being imported. find_loader()\nreturns a 2-tuple where the first item is the loader and the second item\nis a namespace portion.\nFor backwards compatibility with other implementations of the import\nprotocol, many path entry finders also support the same,\ntraditional find_module()\nmethod that meta path finders support.\nHowever path entry finder find_module()\nmethods are never called\nwith a path\nargument (they are expected to record the appropriate\npath information from the initial call to the path hook).\nThe find_module()\nmethod on path entry finders is deprecated,\nas it does not allow the path entry finder to contribute portions to\nnamespace packages. If both find_loader()\nand find_module()\nexist on a path entry finder, the import system will always call\nfind_loader()\nin preference to find_module()\n.\nChanged in version 3.10: Calls to find_module()\nand\nfind_loader()\nby the import\nsystem will raise ImportWarning\n.\nChanged in version 3.12: find_module()\nand find_loader()\nhave been removed.\n5.6. 
Replacing the standard import system\u00b6\nThe most reliable mechanism for replacing the entire import system is to\ndelete the default contents of sys.meta_path\n, replacing them\nentirely with a custom meta path hook.\nIf it is acceptable to only alter the behaviour of import statements\nwithout affecting other APIs that access the import system, then replacing\nthe builtin __import__()\nfunction may be sufficient.\nTo selectively prevent the import of some modules from a hook early on the\nmeta path (rather than disabling the standard import system entirely),\nit is sufficient to raise ModuleNotFoundError\ndirectly from\nfind_spec()\ninstead of returning\nNone\n. The latter indicates that the meta path search should continue,\nwhile raising an exception terminates it immediately.\n5.7. Package Relative Imports\u00b6\nRelative imports use leading dots. A single leading dot indicates a relative import, starting with the current package. Two or more leading dots indicate a relative import to the parent(s) of the current package, one level per dot after the first. For example, given the following package layout:\npackage/\n__init__.py\nsubpackage1/\n__init__.py\nmoduleX.py\nmoduleY.py\nsubpackage2/\n__init__.py\nmoduleZ.py\nmoduleA.py\nIn either subpackage1/moduleX.py\nor subpackage1/__init__.py\n,\nthe following are valid relative imports:\nfrom .moduleY import spam\nfrom .moduleY import spam as ham\nfrom . import moduleY\nfrom ..subpackage1 import moduleY\nfrom ..subpackage2.moduleZ import eggs\nfrom ..moduleA import foo\nAbsolute imports may use either the import <>\nor from <> import <>\nsyntax, but relative imports may only use the second form; the reason\nfor this is that:\nimport XXX.YYY.ZZZ\nshould expose XXX.YYY.ZZZ\nas a usable expression, but .moduleY is\nnot a valid expression.\n5.8. Special considerations for __main__\u00b6\nThe __main__\nmodule is a special case relative to Python\u2019s import\nsystem. 
As noted elsewhere, the __main__\nmodule\nis directly initialized at interpreter startup, much like sys\nand\nbuiltins\n. However, unlike those two, it doesn\u2019t strictly\nqualify as a built-in module. This is because the manner in which\n__main__\nis initialized depends on the flags and other options with\nwhich the interpreter is invoked.\n5.8.1. __main__.__spec__\u00b6\nDepending on how __main__\nis initialized, __main__.__spec__\ngets set appropriately or to None\n.\nWhen Python is started with the -m\noption, __spec__\nis set\nto the module spec of the corresponding module or package. __spec__\nis\nalso populated when the __main__\nmodule is loaded as part of executing a\ndirectory, zipfile or other sys.path\nentry.\nIn the remaining cases\n__main__.__spec__\nis set to None\n, as the code used to populate the\n__main__\ndoes not correspond directly with an importable module:\ninteractive prompt\n-c\noption\nrunning from stdin\nrunning directly from a source or bytecode file\nNote that __main__.__spec__\nis always None\nin the last case,\neven if the file could technically be imported directly as a module\ninstead. Use the -m\nswitch if valid module metadata is desired\nin __main__\n.\nNote also that even when __main__\ncorresponds with an importable module\nand __main__.__spec__\nis set accordingly, they\u2019re still considered\ndistinct modules. This is due to the fact that blocks guarded by\nif __name__ == \"__main__\":\nchecks only execute when the module is used\nto populate the __main__\nnamespace, and not during normal import.\n5.9. References\u00b6\nThe import machinery has evolved considerably since Python\u2019s early days. The original specification for packages is still available to read, although some details have changed since the writing of that document.\nThe original specification for sys.meta_path\nwas PEP 302, with\nsubsequent extension in PEP 420.\nPEP 420 introduced namespace packages for\nPython 3.3. 
PEP 420 also introduced the find_loader()\nprotocol as an\nalternative to find_module()\n.\nPEP 366 describes the addition of the __package__\nattribute for\nexplicit relative imports in main modules.\nPEP 328 introduced absolute and explicit relative imports and initially\nproposed __name__\nfor semantics PEP 366 would eventually specify for\n__package__\n.\nPEP 338 defines executing modules as scripts.\nPEP 451 adds the encapsulation of per-module import state in spec objects. It also off-loads most of the boilerplate responsibilities of loaders back onto the import machinery. These changes allow the deprecation of several APIs in the import system and also addition of new methods to finders and loaders.\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 9279}
{"url": "https://docs.python.org/3/howto/descriptor.html", "title": null, "content": "Descriptor Guide\u00b6\n- Author:\nRaymond Hettinger\n- Contact:\n\nDescriptors let objects customize attribute lookup, storage, and 
deletion.\nThis guide has four major sections:\nThe \u201cprimer\u201d gives a basic overview, moving gently from simple examples, adding one feature at a time. Start here if you\u2019re new to descriptors.\nThe second section shows a complete, practical descriptor example. If you already know the basics, start there.\nThe third section provides a more technical tutorial that goes into the detailed mechanics of how descriptors work. Most people don\u2019t need this level of detail.\nThe last section has pure Python equivalents for built-in descriptors that are written in C. Read this if you\u2019re curious about how functions turn into bound methods or about the implementation of common tools like\nclassmethod()\n,staticmethod()\n,property()\n, and __slots__.\nPrimer\u00b6\nIn this primer, we start with the most basic possible example and then we\u2019ll add new capabilities one by one.\nSimple example: A descriptor that returns a constant\u00b6\nThe Ten\nclass is a descriptor whose __get__()\nmethod always\nreturns the constant 10\n:\nclass Ten:\ndef __get__(self, obj, objtype=None):\nreturn 10\nTo use the descriptor, it must be stored as a class variable in another class:\nclass A:\nx = 5 # Regular class attribute\ny = Ten() # Descriptor instance\nAn interactive session shows the difference between normal attribute lookup and descriptor lookup:\n>>> a = A() # Make an instance of class A\n>>> a.x # Normal attribute lookup\n5\n>>> a.y # Descriptor lookup\n10\nIn the a.x\nattribute lookup, the dot operator finds 'x': 5\nin the class dictionary. In the a.y\nlookup, the dot operator\nfinds a descriptor instance, recognized by its __get__\nmethod.\nCalling that method returns 10\n.\nNote that the value 10\nis not stored in either the class dictionary or the\ninstance dictionary. Instead, the value 10\nis computed on demand.\nThis example shows how a simple descriptor works, but it isn\u2019t very useful. 
For retrieving constants, normal attribute lookup would be better.\nIn the next section, we\u2019ll create something more useful, a dynamic lookup.\nDynamic lookups\u00b6\nInteresting descriptors typically run computations instead of returning constants:\nimport os\nclass DirectorySize:\ndef __get__(self, obj, objtype=None):\nreturn len(os.listdir(obj.dirname))\nclass Directory:\nsize = DirectorySize() # Descriptor instance\ndef __init__(self, dirname):\nself.dirname = dirname # Regular instance attribute\nAn interactive session shows that the lookup is dynamic \u2014 it computes different, updated answers each time:\n>>> s = Directory('songs')\n>>> g = Directory('games')\n>>> s.size # The songs directory has twenty files\n20\n>>> g.size # The games directory has three files\n3\n>>> os.remove('games/chess') # Delete a game\n>>> g.size # File count is automatically updated\n2\nBesides showing how descriptors can run computations, this example also\nreveals the purpose of the parameters to __get__()\n. The self\nparameter is size, an instance of DirectorySize. The obj parameter is\neither g or s, an instance of Directory. It is the obj parameter that\nlets the __get__()\nmethod learn the target directory. The objtype\nparameter is the class Directory.\nManaged attributes\u00b6\nA popular use for descriptors is managing access to instance data. The\ndescriptor is assigned to a public attribute in the class dictionary while the\nactual data is stored as a private attribute in the instance dictionary. The\ndescriptor\u2019s __get__()\nand __set__()\nmethods are triggered when\nthe public attribute is accessed.\nIn the following example, age is the public attribute and _age is the private attribute. 
When the public attribute is accessed, the descriptor logs the lookup or update:\nimport logging\nlogging.basicConfig(level=logging.INFO)\nclass LoggedAgeAccess:\ndef __get__(self, obj, objtype=None):\nvalue = obj._age\nlogging.info('Accessing %r giving %r', 'age', value)\nreturn value\ndef __set__(self, obj, value):\nlogging.info('Updating %r to %r', 'age', value)\nobj._age = value\nclass Person:\nage = LoggedAgeAccess() # Descriptor instance\ndef __init__(self, name, age):\nself.name = name # Regular instance attribute\nself.age = age # Calls __set__()\ndef birthday(self):\nself.age += 1 # Calls both __get__() and __set__()\nAn interactive session shows that all access to the managed attribute age is logged, but that the regular attribute name is not logged:\n>>> mary = Person('Mary M', 30) # The initial age update is logged\nINFO:root:Updating 'age' to 30\n>>> dave = Person('David D', 40)\nINFO:root:Updating 'age' to 40\n>>> vars(mary) # The actual data is in a private attribute\n{'name': 'Mary M', '_age': 30}\n>>> vars(dave)\n{'name': 'David D', '_age': 40}\n>>> mary.age # Access the data and log the lookup\nINFO:root:Accessing 'age' giving 30\n30\n>>> mary.birthday() # Updates are logged as well\nINFO:root:Accessing 'age' giving 30\nINFO:root:Updating 'age' to 31\n>>> dave.name # Regular attribute lookup isn't logged\n'David D'\n>>> dave.age # Only the managed attribute is logged\nINFO:root:Accessing 'age' giving 40\n40\nOne major issue with this example is that the private name _age is hardwired in the LoggedAgeAccess class. That means that each instance can only have one logged attribute and that its name is unchangeable. In the next example, we\u2019ll fix that problem.\nCustomized names\u00b6\nWhen a class uses descriptors, it can inform each descriptor about which variable name was used.\nIn this example, the Person\nclass has two descriptor instances,\nname and age. 
When the Person\nclass is defined, it makes a\ncallback to __set_name__()\nin LoggedAccess so that the field names can\nbe recorded, giving each descriptor its own public_name and private_name:\nimport logging\nlogging.basicConfig(level=logging.INFO)\nclass LoggedAccess:\ndef __set_name__(self, owner, name):\nself.public_name = name\nself.private_name = '_' + name\ndef __get__(self, obj, objtype=None):\nvalue = getattr(obj, self.private_name)\nlogging.info('Accessing %r giving %r', self.public_name, value)\nreturn value\ndef __set__(self, obj, value):\nlogging.info('Updating %r to %r', self.public_name, value)\nsetattr(obj, self.private_name, value)\nclass Person:\nname = LoggedAccess() # First descriptor instance\nage = LoggedAccess() # Second descriptor instance\ndef __init__(self, name, age):\nself.name = name # Calls the first descriptor\nself.age = age # Calls the second descriptor\ndef birthday(self):\nself.age += 1\nAn interactive session shows that the Person\nclass has called\n__set_name__()\nso that the field names would be recorded. Here\nwe call vars()\nto look up the descriptor without triggering it:\n>>> vars(vars(Person)['name'])\n{'public_name': 'name', 'private_name': '_name'}\n>>> vars(vars(Person)['age'])\n{'public_name': 'age', 'private_name': '_age'}\nThe new class now logs access to both name and age:\n>>> pete = Person('Peter P', 10)\nINFO:root:Updating 'name' to 'Peter P'\nINFO:root:Updating 'age' to 10\n>>> kate = Person('Catherine C', 20)\nINFO:root:Updating 'name' to 'Catherine C'\nINFO:root:Updating 'age' to 20\nThe two Person instances contain only the private names:\n>>> vars(pete)\n{'_name': 'Peter P', '_age': 10}\n>>> vars(kate)\n{'_name': 'Catherine C', '_age': 20}\nClosing thoughts\u00b6\nA descriptor is what we call any object that defines __get__()\n,\n__set__()\n, or __delete__()\n.\nOptionally, descriptors can have a __set_name__()\nmethod. 
This is only\nused in cases where a descriptor needs to know either the class where it was\ncreated or the name of class variable it was assigned to. (This method, if\npresent, is called even if the class is not a descriptor.)\nDescriptors get invoked by the dot operator during attribute lookup. If a\ndescriptor is accessed indirectly with vars(some_class)[descriptor_name]\n,\nthe descriptor instance is returned without invoking it.\nDescriptors only work when used as class variables. When put in instances, they have no effect.\nThe main motivation for descriptors is to provide a hook allowing objects stored in class variables to control what happens during attribute lookup.\nTraditionally, the calling class controls what happens during lookup. Descriptors invert that relationship and allow the data being looked-up to have a say in the matter.\nDescriptors are used throughout the language. It is how functions turn into\nbound methods. Common tools like classmethod()\n, staticmethod()\n,\nproperty()\n, and functools.cached_property()\nare all implemented as\ndescriptors.\nComplete Practical Example\u00b6\nIn this example, we create a practical and powerful tool for locating notoriously hard to find data corruption bugs.\nValidator class\u00b6\nA validator is a descriptor for managed attribute access. Prior to storing any data, it verifies that the new value meets various type and range restrictions. 
If those restrictions aren\u2019t met, it raises an exception to prevent data corruption at its source.\nThis Validator\nclass is both an abstract base class and a\nmanaged attribute descriptor:\nfrom abc import ABC, abstractmethod\nclass Validator(ABC):\n    def __set_name__(self, owner, name):\n        self.private_name = '_' + name\n    def __get__(self, obj, objtype=None):\n        return getattr(obj, self.private_name)\n    def __set__(self, obj, value):\n        self.validate(value)\n        setattr(obj, self.private_name, value)\n    @abstractmethod\n    def validate(self, value):\n        pass\nCustom validators need to inherit from Validator\nand must supply a\nvalidate()\nmethod to test various restrictions as needed.\nCustom validators\u00b6\nHere are three practical data validation utilities:\nOneOf\nverifies that a value is one of a restricted set of options.\nNumber\nverifies that a value is either an int\nor float\n. Optionally, it verifies that a value is between a given minimum or maximum.\nString\nverifies that a value is a str\n. Optionally, it validates a given minimum or maximum length. 
It can validate a user-defined predicate as well.\nclass OneOf(Validator):\n    def __init__(self, *options):\n        self.options = set(options)\n    def validate(self, value):\n        if value not in self.options:\n            raise ValueError(\n                f'Expected {value!r} to be one of {self.options!r}'\n            )\nclass Number(Validator):\n    def __init__(self, minvalue=None, maxvalue=None):\n        self.minvalue = minvalue\n        self.maxvalue = maxvalue\n    def validate(self, value):\n        if not isinstance(value, (int, float)):\n            raise TypeError(f'Expected {value!r} to be an int or float')\n        if self.minvalue is not None and value < self.minvalue:\n            raise ValueError(\n                f'Expected {value!r} to be at least {self.minvalue!r}'\n            )\n        if self.maxvalue is not None and value > self.maxvalue:\n            raise ValueError(\n                f'Expected {value!r} to be no more than {self.maxvalue!r}'\n            )\nclass String(Validator):\n    def __init__(self, minsize=None, maxsize=None, predicate=None):\n        self.minsize = minsize\n        self.maxsize = maxsize\n        self.predicate = predicate\n    def validate(self, value):\n        if not isinstance(value, str):\n            raise TypeError(f'Expected {value!r} to be a str')\n        if self.minsize is not None and len(value) < self.minsize:\n            raise ValueError(\n                f'Expected {value!r} to be no smaller than {self.minsize!r}'\n            )\n        if self.maxsize is not None and len(value) > self.maxsize:\n            raise ValueError(\n                f'Expected {value!r} to be no bigger than {self.maxsize!r}'\n            )\n        if self.predicate is not None and not self.predicate(value):\n            raise ValueError(\n                f'Expected {self.predicate} to be true for {value!r}'\n            )\nPractical application\u00b6\nHere\u2019s how the data validators can be used in a real class:\nclass Component:\n    name = String(minsize=3, maxsize=10, predicate=str.isupper)\n    kind = OneOf('wood', 'metal', 'plastic')\n    quantity = Number(minvalue=0)\n    def __init__(self, name, kind, quantity):\n        self.name = name\n        self.kind = kind\n        self.quantity = quantity\nThe descriptors prevent invalid instances from being created:\n>>> Component('Widget', 'metal', 5) # Blocked: 'Widget' is not all 
uppercase\nTraceback (most recent call last):\n...\nValueError: Expected <method 'isupper' of 'str' objects> to be true for 'Widget'\n>>> Component('WIDGET', 'metle', 5) # Blocked: 'metle' is misspelled\nTraceback (most recent call last):\n...\nValueError: Expected 'metle' to be one of {'metal', 'plastic', 'wood'}\n>>> Component('WIDGET', 'metal', -5) # Blocked: -5 is negative\nTraceback (most recent call last):\n...\nValueError: Expected -5 to be at least 0\n>>> Component('WIDGET', 'metal', 'V') # Blocked: 'V' isn't a number\nTraceback (most recent call last):\n...\nTypeError: Expected 'V' to be an int or float\n>>> c = Component('WIDGET', 'metal', 5) # Allowed: The inputs are valid\nTechnical Tutorial\u00b6\nWhat follows is a more technical tutorial for the mechanics and details of how descriptors work.\nAbstract\u00b6\nDefines descriptors, summarizes the protocol, and shows how descriptors are called. Provides an example showing how object relational mappings work.\nLearning about descriptors not only provides access to a larger toolset, it creates a deeper understanding of how Python works.\nDefinition and introduction\u00b6\nIn general, a descriptor is an attribute value that has one of the methods in\nthe descriptor protocol. Those methods are __get__()\n, __set__()\n,\nand __delete__()\n. If any of those methods are defined for an\nattribute, it is said to be a descriptor.\nThe default behavior for attribute access is to get, set, or delete the\nattribute from an object\u2019s dictionary. For instance, a.x\nhas a lookup chain\nstarting with a.__dict__['x']\n, then type(a).__dict__['x']\n, and\ncontinuing through the method resolution order of type(a)\n. If the\nlooked-up value is an object defining one of the descriptor methods, then Python\nmay override the default behavior and invoke the descriptor method instead.\nWhere this occurs in the precedence chain depends on which descriptor methods\nwere defined.\nDescriptors are a powerful, general purpose protocol. 
They are the mechanism\nbehind properties, methods, static methods, class methods, and\nsuper()\n. They are used throughout Python itself. Descriptors\nsimplify the underlying C code and offer a flexible set of new tools for\neveryday Python programs.\nDescriptor protocol\u00b6\ndescr.__get__(self, obj, type=None)\ndescr.__set__(self, obj, value)\ndescr.__delete__(self, obj)\nThat is all there is to it. Define any of these methods and an object is considered a descriptor and can override default behavior upon being looked up as an attribute.\nIf an object defines __set__()\nor __delete__()\n, it is considered\na data descriptor. Descriptors that only define __get__()\nare called\nnon-data descriptors (they are often used for methods but other uses are\npossible).\nData and non-data descriptors differ in how overrides are calculated with respect to entries in an instance\u2019s dictionary. If an instance\u2019s dictionary has an entry with the same name as a data descriptor, the data descriptor takes precedence. If an instance\u2019s dictionary has an entry with the same name as a non-data descriptor, the dictionary entry takes precedence.\nTo make a read-only data descriptor, define both __get__()\nand\n__set__()\nwith the __set__()\nraising an AttributeError\nwhen\ncalled. Defining the __set__()\nmethod with an exception raising\nplaceholder is enough to make it a data descriptor.\nOverview of descriptor invocation\u00b6\nA descriptor can be called directly with desc.__get__(obj)\nor\ndesc.__get__(None, cls)\n.\nBut it is more common for a descriptor to be invoked automatically from attribute access.\nThe expression obj.x\nlooks up the attribute x\nin the chain of\nnamespaces for obj\n. 
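The read-only pattern described above, a __set__() that unconditionally raises AttributeError, can be sketched as follows (the ReadOnly and Config names are invented for illustration):

```python
class ReadOnly:
    "Data descriptor with a fixed value; __set__ always raises."
    def __init__(self, value):
        self.value = value
    def __get__(self, obj, objtype=None):
        return self.value
    def __set__(self, obj, value):
        # Defining __set__ is what makes this a data descriptor,
        # so it also shadows any same-named instance dictionary entry.
        raise AttributeError('read-only attribute')

class Config:
    version = ReadOnly('1.0')

c = Config()
print(c.version)  # 1.0
try:
    c.version = '2.0'
except AttributeError as exc:
    print(exc)  # read-only attribute
```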
If the search finds a descriptor outside of the\ninstance __dict__\n, its __get__()\nmethod is\ninvoked according to the precedence rules listed below.\nThe details of invocation depend on whether obj\nis an object, class, or\ninstance of super.\nInvocation from an instance\u00b6\nInstance lookup scans through a chain of namespaces giving data descriptors\nthe highest priority, followed by instance variables, then non-data\ndescriptors, then class variables, and lastly __getattr__()\nif it is\nprovided.\nIf a descriptor is found for a.x\n, then it is invoked with:\ndesc.__get__(a, type(a))\n.\nThe logic for a dotted lookup is in object.__getattribute__()\n. Here is\na pure Python equivalent:\ndef find_name_in_mro(cls, name, default):\n\"Emulate _PyType_Lookup() in Objects/typeobject.c\"\nfor base in cls.__mro__:\nif name in vars(base):\nreturn vars(base)[name]\nreturn default\ndef object_getattribute(obj, name):\n\"Emulate PyObject_GenericGetAttr() in Objects/object.c\"\nnull = object()\nobjtype = type(obj)\ncls_var = find_name_in_mro(objtype, name, null)\ndescr_get = getattr(type(cls_var), '__get__', null)\nif descr_get is not null:\nif (hasattr(type(cls_var), '__set__')\nor hasattr(type(cls_var), '__delete__')):\nreturn descr_get(cls_var, obj, objtype) # data descriptor\nif hasattr(obj, '__dict__') and name in vars(obj):\nreturn vars(obj)[name] # instance variable\nif descr_get is not null:\nreturn descr_get(cls_var, obj, objtype) # non-data descriptor\nif cls_var is not null:\nreturn cls_var # class variable\nraise AttributeError(name)\nNote, there is no __getattr__()\nhook in the __getattribute__()\ncode. That is why calling __getattribute__()\ndirectly or with\nsuper().__getattribute__\nwill bypass __getattr__()\nentirely.\nInstead, it is the dot operator and the getattr()\nfunction that are\nresponsible for invoking __getattr__()\nwhenever __getattribute__()\nraises an AttributeError\n. 
Their logic is encapsulated in a helper\nfunction:\ndef getattr_hook(obj, name):\n\"Emulate slot_tp_getattr_hook() in Objects/typeobject.c\"\ntry:\nreturn obj.__getattribute__(name)\nexcept AttributeError:\nif not hasattr(type(obj), '__getattr__'):\nraise\nreturn type(obj).__getattr__(obj, name) # __getattr__\nInvocation from a class\u00b6\nThe logic for a dotted lookup such as A.x\nis in\ntype.__getattribute__()\n. The steps are similar to those for\nobject.__getattribute__()\nbut the instance dictionary lookup is replaced\nby a search through the class\u2019s method resolution order.\nIf a descriptor is found, it is invoked with desc.__get__(None, A)\n.\nThe full C implementation can be found in type_getattro()\nand\n_PyType_Lookup()\nin Objects/typeobject.c.\nInvocation from super\u00b6\nThe logic for super\u2019s dotted lookup is in the __getattribute__()\nmethod for\nobject returned by super()\n.\nA dotted lookup such as super(A, obj).m\nsearches obj.__class__.__mro__\nfor the base class B\nimmediately following A\nand then returns\nB.__dict__['m'].__get__(obj, A)\n. If not a descriptor, m\nis returned\nunchanged.\nThe full C implementation can be found in super_getattro()\nin\nObjects/typeobject.c. A pure Python equivalent can be found in\nGuido\u2019s Tutorial.\nSummary of invocation logic\u00b6\nThe mechanism for descriptors is embedded in the __getattribute__()\nmethods for object\n, type\n, and super()\n.\nThe important points to remember are:\nDescriptors are invoked by the\n__getattribute__()\nmethod.Classes inherit this machinery from\nobject\n,type\n, orsuper()\n.Overriding\n__getattribute__()\nprevents automatic descriptor calls because all the descriptor logic is in that method.object.__getattribute__()\nandtype.__getattribute__()\nmake different calls to__get__()\n. The first includes the instance and may include the class. 
The second puts inNone\nfor the instance and always includes the class.Data descriptors always override instance dictionaries.\nNon-data descriptors may be overridden by instance dictionaries.\nAutomatic name notification\u00b6\nSometimes it is desirable for a descriptor to know what class variable name it\nwas assigned to. When a new class is created, the type\nmetaclass\nscans the dictionary of the new class. If any of the entries are descriptors\nand if they define __set_name__()\n, that method is called with two\narguments. The owner is the class where the descriptor is used, and the\nname is the class variable the descriptor was assigned to.\nThe implementation details are in type_new()\nand\nset_names()\nin Objects/typeobject.c.\nSince the update logic is in type.__new__()\n, notifications only take\nplace at the time of class creation. If descriptors are added to the class\nafterwards, __set_name__()\nwill need to be called manually.\nORM example\u00b6\nThe following code is a simplified skeleton showing how data descriptors could be used to implement an object relational mapping.\nThe essential idea is that the data is stored in an external database. The Python instances only hold keys to the database\u2019s tables. Descriptors take care of lookups or updates:\nclass Field:\ndef __set_name__(self, owner, name):\nself.fetch = f'SELECT {name} FROM {owner.table} WHERE {owner.key}=?;'\nself.store = f'UPDATE {owner.table} SET {name}=? 
WHERE {owner.key}=?;'\ndef __get__(self, obj, objtype=None):\nreturn conn.execute(self.fetch, [obj.key]).fetchone()[0]\ndef __set__(self, obj, value):\nconn.execute(self.store, [value, obj.key])\nconn.commit()\nWe can use the Field\nclass to define models that describe the schema for\neach table in a database:\nclass Movie:\ntable = 'Movies' # Table name\nkey = 'title' # Primary key\ndirector = Field()\nyear = Field()\ndef __init__(self, key):\nself.key = key\nclass Song:\ntable = 'Music'\nkey = 'title'\nartist = Field()\nyear = Field()\ngenre = Field()\ndef __init__(self, key):\nself.key = key\nTo use the models, first connect to the database:\n>>> import sqlite3\n>>> conn = sqlite3.connect('entertainment.db')\nAn interactive session shows how data is retrieved from the database and how it can be updated:\n>>> Movie('Star Wars').director\n'George Lucas'\n>>> jaws = Movie('Jaws')\n>>> f'Released in {jaws.year} by {jaws.director}'\n'Released in 1975 by Steven Spielberg'\n>>> Song('Country Roads').artist\n'John Denver'\n>>> Movie('Star Wars').director = 'J.J. Abrams'\n>>> Movie('Star Wars').director\n'J.J. Abrams'\nPure Python Equivalents\u00b6\nThe descriptor protocol is simple and offers exciting possibilities. Several use cases are so common that they have been prepackaged into built-in tools. Properties, bound methods, static methods, class methods, and __slots__ are all based on the descriptor protocol.\nProperties\u00b6\nCalling property()\nis a succinct way of building a data descriptor that\ntriggers a function call upon access to an attribute. 
Its signature is:\nproperty(fget=None, fset=None, fdel=None, doc=None) -> property\nThe documentation shows a typical use to define a managed attribute x\n:\nclass C:\ndef getx(self): return self.__x\ndef setx(self, value): self.__x = value\ndef delx(self): del self.__x\nx = property(getx, setx, delx, \"I'm the 'x' property.\")\nTo see how property()\nis implemented in terms of the descriptor protocol,\nhere is a pure Python equivalent that implements most of the core functionality:\nclass Property:\n\"Emulate PyProperty_Type() in Objects/descrobject.c\"\ndef __init__(self, fget=None, fset=None, fdel=None, doc=None):\nself.fget = fget\nself.fset = fset\nself.fdel = fdel\nif doc is None and fget is not None:\ndoc = fget.__doc__\nself.__doc__ = doc\ndef __set_name__(self, owner, name):\nself.__name__ = name\ndef __get__(self, obj, objtype=None):\nif obj is None:\nreturn self\nif self.fget is None:\nraise AttributeError\nreturn self.fget(obj)\ndef __set__(self, obj, value):\nif self.fset is None:\nraise AttributeError\nself.fset(obj, value)\ndef __delete__(self, obj):\nif self.fdel is None:\nraise AttributeError\nself.fdel(obj)\ndef getter(self, fget):\nreturn type(self)(fget, self.fset, self.fdel, self.__doc__)\ndef setter(self, fset):\nreturn type(self)(self.fget, fset, self.fdel, self.__doc__)\ndef deleter(self, fdel):\nreturn type(self)(self.fget, self.fset, fdel, self.__doc__)\nThe property()\nbuiltin helps whenever a user interface has granted\nattribute access and then subsequent changes require the intervention of a\nmethod.\nFor instance, a spreadsheet class may grant access to a cell value through\nCell('b10').value\n. Subsequent improvements to the program require the cell\nto be recalculated on every access; however, the programmer does not want to\naffect existing client code accessing the attribute directly. 
The solution is\nto wrap access to the value attribute in a property data descriptor:\nclass Cell:\n...\n@property\ndef value(self):\n\"Recalculate the cell before returning value\"\nself.recalc()\nreturn self._value\nEither the built-in property()\nor our Property()\nequivalent would\nwork in this example.\nFunctions and methods\u00b6\nPython\u2019s object oriented features are built upon a function based environment. Using non-data descriptors, the two are merged seamlessly.\nFunctions stored in class dictionaries get turned into methods when invoked. Methods only differ from regular functions in that the object instance is prepended to the other arguments. By convention, the instance is called self but could be called this or any other variable name.\nMethods can be created manually with types.MethodType\nwhich is\nroughly equivalent to:\nclass MethodType:\n\"Emulate PyMethod_Type in Objects/classobject.c\"\ndef __init__(self, func, obj):\nself.__func__ = func\nself.__self__ = obj\ndef __call__(self, *args, **kwargs):\nfunc = self.__func__\nobj = self.__self__\nreturn func(obj, *args, **kwargs)\ndef __getattribute__(self, name):\n\"Emulate method_getset() in Objects/classobject.c\"\nif name == '__doc__':\nreturn self.__func__.__doc__\nreturn object.__getattribute__(self, name)\ndef __getattr__(self, name):\n\"Emulate method_getattro() in Objects/classobject.c\"\nreturn getattr(self.__func__, name)\ndef __get__(self, obj, objtype=None):\n\"Emulate method_descr_get() in Objects/classobject.c\"\nreturn self\nTo support automatic creation of methods, functions include the\n__get__()\nmethod for binding methods during attribute access. This\nmeans that functions are non-data descriptors that return bound methods\nduring dotted lookup from an instance. 
Here\u2019s how it works:\nclass Function:\n...\ndef __get__(self, obj, objtype=None):\n\"Simulate func_descr_get() in Objects/funcobject.c\"\nif obj is None:\nreturn self\nreturn MethodType(self, obj)\nRunning the following class in the interpreter shows how the function descriptor works in practice:\nclass D:\ndef f(self):\nreturn self\nclass D2:\npass\nThe function has a qualified name attribute to support introspection:\n>>> D.f.__qualname__\n'D.f'\nAccessing the function through the class dictionary does not invoke\n__get__()\n. Instead, it just returns the underlying function object:\n>>> D.__dict__['f']\n<function D.f at 0x00C45070>\nDotted access from a class calls __get__()\nwhich just returns the\nunderlying function unchanged:\n>>> D.f\n<function D.f at 0x00C45070>\nThe interesting behavior occurs during dotted access from an instance. The\ndotted lookup calls __get__()\nwhich returns a bound method object:\n>>> d = D()\n>>> d.f\n<bound method D.f of <__main__.D object at 0x00B18C90>>\nInternally, the bound method stores the underlying function and the bound instance:\n>>> d.f.__func__\n<function D.f at 0x00C45070>\n>>> d.f.__self__\n<__main__.D object at 0x00B18C90>\nIf you have ever wondered where self comes from in regular methods or where cls comes from in class methods, this is it!\nKinds of methods\u00b6\nNon-data descriptors provide a simple mechanism for variations on the usual patterns of binding functions into methods.\nTo recap, functions have a __get__()\nmethod so that they can be converted\nto a method when accessed as attributes. The non-data descriptor transforms an\nobj.f(*args)\ncall into f(obj, *args)\n. Calling cls.f(*args)\nbecomes f(*args)\n.\nThis chart summarizes the binding and its two most useful variants:\nTransformation\nCalled from an object\nCalled from a class\nfunction\nf(obj, *args)\nf(*args)\nstaticmethod\nf(*args)\nf(*args)\nclassmethod\nf(type(obj), *args)\nf(cls, *args)\nStatic methods\u00b6\nStatic methods return the underlying function without changes. 
Calling either\nc.f\nor C.f\nis the equivalent of a direct lookup into\nobject.__getattribute__(c, \"f\")\nor object.__getattribute__(C, \"f\")\n. As a\nresult, the function becomes identically accessible from either an object or a\nclass.\nGood candidates for static methods are methods that do not reference the\nself\nvariable.\nFor instance, a statistics package may include a container class for\nexperimental data. The class provides normal methods for computing the average,\nmean, median, and other descriptive statistics that depend on the data. However,\nthere may be useful functions which are conceptually related but do not depend\non the data. For instance, erf(x)\nis handy conversion routine that comes up\nin statistical work but does not directly depend on a particular dataset.\nIt can be called either from an object or the class: s.erf(1.5) --> 0.9332\nor Sample.erf(1.5) --> 0.9332\n.\nSince static methods return the underlying function with no changes, the example calls are unexciting:\nclass E:\n@staticmethod\ndef f(x):\nreturn x * 10\n>>> E.f(3)\n30\n>>> E().f(3)\n30\nUsing the non-data descriptor protocol, a pure Python version of\nstaticmethod()\nwould look like this:\nimport functools\nclass StaticMethod:\n\"Emulate PyStaticMethod_Type() in Objects/funcobject.c\"\ndef __init__(self, f):\nself.f = f\nfunctools.update_wrapper(self, f)\ndef __get__(self, obj, objtype=None):\nreturn self.f\ndef __call__(self, *args, **kwds):\nreturn self.f(*args, **kwds)\n@property\ndef __annotations__(self):\nreturn self.f.__annotations__\nThe functools.update_wrapper()\ncall adds a __wrapped__\nattribute\nthat refers to the underlying function. Also it carries forward\nthe attributes necessary to make the wrapper look like the wrapped\nfunction, including __name__\n, __qualname__\n,\nand __doc__\n.\nClass methods\u00b6\nUnlike static methods, class methods prepend the class reference to the argument list before calling the function. 
This format is the same for whether the caller is an object or a class:\nclass F:\n@classmethod\ndef f(cls, x):\nreturn cls.__name__, x\n>>> F.f(3)\n('F', 3)\n>>> F().f(3)\n('F', 3)\nThis behavior is useful whenever the method only needs to have a class\nreference and does not rely on data stored in a specific instance. One use for\nclass methods is to create alternate class constructors. For example, the\nclassmethod dict.fromkeys()\ncreates a new dictionary from a list of\nkeys. The pure Python equivalent is:\nclass Dict(dict):\n@classmethod\ndef fromkeys(cls, iterable, value=None):\n\"Emulate dict_fromkeys() in Objects/dictobject.c\"\nd = cls()\nfor key in iterable:\nd[key] = value\nreturn d\nNow a new dictionary of unique keys can be constructed like this:\n>>> d = Dict.fromkeys('abracadabra')\n>>> type(d) is Dict\nTrue\n>>> d\n{'a': None, 'b': None, 'r': None, 'c': None, 'd': None}\nUsing the non-data descriptor protocol, a pure Python version of\nclassmethod()\nwould look like this:\nimport functools\nclass ClassMethod:\n\"Emulate PyClassMethod_Type() in Objects/funcobject.c\"\ndef __init__(self, f):\nself.f = f\nfunctools.update_wrapper(self, f)\ndef __get__(self, obj, cls=None):\nif cls is None:\ncls = type(obj)\nreturn MethodType(self.f, cls)\nThe functools.update_wrapper()\ncall in ClassMethod\nadds a\n__wrapped__\nattribute that refers to the underlying function. Also\nit carries forward the attributes necessary to make the wrapper look\nlike the wrapped function: __name__\n,\n__qualname__\n, __doc__\n,\nand __annotations__\n.\nMember objects and __slots__\u00b6\nWhen a class defines __slots__\n, it replaces instance dictionaries with a\nfixed-length array of slot values. From a user point of view that has\nseveral effects:\n1. Provides immediate detection of bugs due to misspelled attribute\nassignments. 
Only attribute names specified in __slots__\nare allowed:\nclass Vehicle:\n__slots__ = ('id_number', 'make', 'model')\n>>> auto = Vehicle()\n>>> auto.id_nubmer = 'VYE483814LQEX'\nTraceback (most recent call last):\n...\nAttributeError: 'Vehicle' object has no attribute 'id_nubmer'\n2. Helps create immutable objects where descriptors manage access to private\nattributes stored in __slots__\n:\nclass Immutable:\n__slots__ = ('_dept', '_name') # Replace the instance dictionary\ndef __init__(self, dept, name):\nself._dept = dept # Store to private attribute\nself._name = name # Store to private attribute\n@property # Read-only descriptor\ndef dept(self):\nreturn self._dept\n@property\ndef name(self): # Read-only descriptor\nreturn self._name\n>>> mark = Immutable('Botany', 'Mark Watney')\n>>> mark.dept\n'Botany'\n>>> mark.dept = 'Space Pirate'\nTraceback (most recent call last):\n...\nAttributeError: property 'dept' of 'Immutable' object has no setter\n>>> mark.location = 'Mars'\nTraceback (most recent call last):\n...\nAttributeError: 'Immutable' object has no attribute 'location'\n3. Saves memory. On a 64-bit Linux build, an instance with two attributes\ntakes 48 bytes with __slots__\nand 152 bytes without. This flyweight\ndesign pattern likely only\nmatters when a large number of instances are going to be created.\n4. Improves speed. Reading instance variables is 35% faster with\n__slots__\n(as measured with Python 3.10 on an Apple M1 processor).\n5. 
Blocks tools like functools.cached_property()\nwhich require an\ninstance dictionary to function correctly:\nfrom functools import cached_property\nclass CP:\n__slots__ = () # Eliminates the instance dict\n@cached_property # Requires an instance dict\ndef pi(self):\nreturn 4 * sum((-1.0)**n / (2.0*n + 1.0)\nfor n in reversed(range(100_000)))\n>>> CP().pi\nTraceback (most recent call last):\n...\nTypeError: No '__dict__' attribute on 'CP' instance to cache 'pi' property.\nIt is not possible to create an exact drop-in pure Python version of\n__slots__\nbecause it requires direct access to C structures and control\nover object memory allocation. However, we can build a mostly faithful\nsimulation where the actual C structure for slots is emulated by a private\n_slotvalues\nlist. Reads and writes to that private structure are managed\nby member descriptors:\nnull = object()\nclass Member:\ndef __init__(self, name, clsname, offset):\n'Emulate PyMemberDef in Include/structmember.h'\n# Also see descr_new() in Objects/descrobject.c\nself.name = name\nself.clsname = clsname\nself.offset = offset\ndef __get__(self, obj, objtype=None):\n'Emulate member_get() in Objects/descrobject.c'\n# Also see PyMember_GetOne() in Python/structmember.c\nif obj is None:\nreturn self\nvalue = obj._slotvalues[self.offset]\nif value is null:\nraise AttributeError(self.name)\nreturn value\ndef __set__(self, obj, value):\n'Emulate member_set() in Objects/descrobject.c'\nobj._slotvalues[self.offset] = value\ndef __delete__(self, obj):\n'Emulate member_delete() in Objects/descrobject.c'\nvalue = obj._slotvalues[self.offset]\nif value is null:\nraise AttributeError(self.name)\nobj._slotvalues[self.offset] = null\ndef __repr__(self):\n'Emulate member_repr() in Objects/descrobject.c'\nreturn f'<Member {self.name!r} of {self.clsname!r}>'\nThe type.__new__()\nmethod takes care of adding member objects to class\nvariables:\nclass Type(type):\n'Simulate how the type metaclass adds member objects for slots'\ndef __new__(mcls, clsname, bases, 
mapping, **kwargs):\n'Emulate type_new() in Objects/typeobject.c'\n# type_new() calls PyTypeReady() which calls add_methods()\nslot_names = mapping.get('slot_names', [])\nfor offset, name in enumerate(slot_names):\nmapping[name] = Member(name, clsname, offset)\nreturn type.__new__(mcls, clsname, bases, mapping, **kwargs)\nThe object.__new__()\nmethod takes care of creating instances that have\nslots instead of an instance dictionary. Here is a rough simulation in pure\nPython:\nclass Object:\n'Simulate how object.__new__() allocates memory for __slots__'\ndef __new__(cls, *args, **kwargs):\n'Emulate object_new() in Objects/typeobject.c'\ninst = super().__new__(cls)\nif hasattr(cls, 'slot_names'):\nempty_slots = [null] * len(cls.slot_names)\nobject.__setattr__(inst, '_slotvalues', empty_slots)\nreturn inst\ndef __setattr__(self, name, value):\n'Emulate _PyObject_GenericSetAttrWithDict() Objects/object.c'\ncls = type(self)\nif hasattr(cls, 'slot_names') and name not in cls.slot_names:\nraise AttributeError(\nf'{cls.__name__!r} object has no attribute {name!r}'\n)\nsuper().__setattr__(name, value)\ndef __delattr__(self, name):\n'Emulate _PyObject_GenericSetAttrWithDict() Objects/object.c'\ncls = type(self)\nif hasattr(cls, 'slot_names') and name not in cls.slot_names:\nraise AttributeError(\nf'{cls.__name__!r} object has no attribute {name!r}'\n)\nsuper().__delattr__(name)\nTo use the simulation in a real class, just inherit from Object\nand\nset the metaclass to Type\n:\nclass H(Object, metaclass=Type):\n'Instance variables stored in slots'\nslot_names = ['x', 'y']\ndef __init__(self, x, y):\nself.x = x\nself.y = y\nAt this point, the metaclass has loaded member objects for x and y:\n>>> from pprint import pp\n>>> pp(dict(vars(H)))\n{'__module__': '__main__',\n'__doc__': 'Instance variables stored in slots',\n'slot_names': ['x', 'y'],\n'__init__': <function H.__init__ at 0x...>,\n'x': <Member 'x' of 'H'>,\n'y': <Member 'y' of 'H'>}\nWhen instances are created, they have a _slotvalues\nlist where the\nattributes are stored:\n>>> h = 
H(10, 20)\n>>> vars(h)\n{'_slotvalues': [10, 20]}\n>>> h.x = 55\n>>> vars(h)\n{'_slotvalues': [55, 20]}\nMisspelled or unassigned attributes will raise an exception:\n>>> h.xz\nTraceback (most recent call last):\n...\nAttributeError: 'H' object has no attribute 'xz'", "code_snippets": [" ", " ", "\n", " ", " ", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", " ", "\n", "\n", "\n", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n\n", "\n", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 9176} +{"url": "https://docs.python.org/3/library/ast.html", "title": " \u2014 Abstract syntax trees", "content": "ast\n\u2014 Abstract syntax trees\u00b6\nSource code: Lib/ast.py\nThe ast\nmodule helps Python applications to process trees of the Python\nabstract syntax grammar. The abstract syntax itself might change with each\nPython release; this module helps to find out programmatically what the current\ngrammar looks like.\nAn abstract syntax tree can be generated by passing ast.PyCF_ONLY_AST\nas\na flag to the compile()\nbuilt-in function, or using the parse()\nhelper provided in this module. The result will be a tree of objects whose\nclasses all inherit from ast.AST\n. An abstract syntax tree can be\ncompiled into a Python code object using the built-in compile()\nfunction.\nAbstract grammar\u00b6\nThe abstract grammar is currently defined as follows:\n-- ASDL's 4 builtin types are:\n-- identifier, int, string, constant\nmodule Python\n{\nmod = Module(stmt* body, type_ignore* type_ignores)\n| Interactive(stmt* body)\n| Expression(expr body)\n| FunctionType(expr* argtypes, expr returns)\nstmt = FunctionDef(identifier name, arguments args,\nstmt* body, expr* decorator_list, expr? returns,\nstring? type_comment, type_param* type_params)\n| AsyncFunctionDef(identifier name, arguments args,\nstmt* body, expr* decorator_list, expr? 
returns,\nstring? type_comment, type_param* type_params)\n| ClassDef(identifier name,\nexpr* bases,\nkeyword* keywords,\nstmt* body,\nexpr* decorator_list,\ntype_param* type_params)\n| Return(expr? value)\n| Delete(expr* targets)\n| Assign(expr* targets, expr value, string? type_comment)\n| TypeAlias(expr name, type_param* type_params, expr value)\n| AugAssign(expr target, operator op, expr value)\n-- 'simple' indicates that we annotate simple name without parens\n| AnnAssign(expr target, expr annotation, expr? value, int simple)\n-- use 'orelse' because else is a keyword in target languages\n| For(expr target, expr iter, stmt* body, stmt* orelse, string? type_comment)\n| AsyncFor(expr target, expr iter, stmt* body, stmt* orelse, string? type_comment)\n| While(expr test, stmt* body, stmt* orelse)\n| If(expr test, stmt* body, stmt* orelse)\n| With(withitem* items, stmt* body, string? type_comment)\n| AsyncWith(withitem* items, stmt* body, string? type_comment)\n| Match(expr subject, match_case* cases)\n| Raise(expr? exc, expr? cause)\n| Try(stmt* body, excepthandler* handlers, stmt* orelse, stmt* finalbody)\n| TryStar(stmt* body, excepthandler* handlers, stmt* orelse, stmt* finalbody)\n| Assert(expr test, expr? msg)\n| Import(alias* names)\n| ImportFrom(identifier? module, alias* names, int? level)\n| Global(identifier* names)\n| Nonlocal(identifier* names)\n| Expr(expr value)\n| Pass | Break | Continue\n-- col_offset is the byte offset in the utf8 string the parser uses\nattributes (int lineno, int col_offset, int? end_lineno, int? 
end_col_offset)\n-- BoolOp() can use left & right?\nexpr = BoolOp(boolop op, expr* values)\n| NamedExpr(expr target, expr value)\n| BinOp(expr left, operator op, expr right)\n| UnaryOp(unaryop op, expr operand)\n| Lambda(arguments args, expr body)\n| IfExp(expr test, expr body, expr orelse)\n| Dict(expr?* keys, expr* values)\n| Set(expr* elts)\n| ListComp(expr elt, comprehension* generators)\n| SetComp(expr elt, comprehension* generators)\n| DictComp(expr key, expr value, comprehension* generators)\n| GeneratorExp(expr elt, comprehension* generators)\n-- the grammar constrains where yield expressions can occur\n| Await(expr value)\n| Yield(expr? value)\n| YieldFrom(expr value)\n-- need sequences for compare to distinguish between\n-- x < 4 < 3 and (x < 4) < 3\n| Compare(expr left, cmpop* ops, expr* comparators)\n| Call(expr func, expr* args, keyword* keywords)\n| FormattedValue(expr value, int conversion, expr? format_spec)\n| Interpolation(expr value, constant str, int conversion, expr? format_spec)\n| JoinedStr(expr* values)\n| TemplateStr(expr* values)\n| Constant(constant value, string? kind)\n-- the following expression can appear in assignment context\n| Attribute(expr value, identifier attr, expr_context ctx)\n| Subscript(expr value, expr slice, expr_context ctx)\n| Starred(expr value, expr_context ctx)\n| Name(identifier id, expr_context ctx)\n| List(expr* elts, expr_context ctx)\n| Tuple(expr* elts, expr_context ctx)\n-- can appear only in Subscript\n| Slice(expr? lower, expr? upper, expr? step)\n-- col_offset is the byte offset in the utf8 string the parser uses\nattributes (int lineno, int col_offset, int? end_lineno, int? 
end_col_offset)\nexpr_context = Load | Store | Del\nboolop = And | Or\noperator = Add | Sub | Mult | MatMult | Div | Mod | Pow | LShift\n| RShift | BitOr | BitXor | BitAnd | FloorDiv\nunaryop = Invert | Not | UAdd | USub\ncmpop = Eq | NotEq | Lt | LtE | Gt | GtE | Is | IsNot | In | NotIn\ncomprehension = (expr target, expr iter, expr* ifs, int is_async)\nexcepthandler = ExceptHandler(expr? type, identifier? name, stmt* body)\nattributes (int lineno, int col_offset, int? end_lineno, int? end_col_offset)\narguments = (arg* posonlyargs, arg* args, arg? vararg, arg* kwonlyargs,\nexpr?* kw_defaults, arg? kwarg, expr* defaults)\narg = (identifier arg, expr? annotation, string? type_comment)\nattributes (int lineno, int col_offset, int? end_lineno, int? end_col_offset)\n-- keyword arguments supplied to call (NULL identifier for **kwargs)\nkeyword = (identifier? arg, expr value)\nattributes (int lineno, int col_offset, int? end_lineno, int? end_col_offset)\n-- import name with optional 'as' alias.\nalias = (identifier name, identifier? asname)\nattributes (int lineno, int col_offset, int? end_lineno, int? end_col_offset)\nwithitem = (expr context_expr, expr? optional_vars)\nmatch_case = (pattern pattern, expr? guard, stmt* body)\npattern = MatchValue(expr value)\n| MatchSingleton(constant value)\n| MatchSequence(pattern* patterns)\n| MatchMapping(expr* keys, pattern* patterns, identifier? rest)\n| MatchClass(expr cls, pattern* patterns, identifier* kwd_attrs, pattern* kwd_patterns)\n| MatchStar(identifier? name)\n-- The optional \"rest\" MatchMapping parameter handles capturing extra mapping keys\n| MatchAs(pattern? pattern, identifier? name)\n| MatchOr(pattern* patterns)\nattributes (int lineno, int col_offset, int end_lineno, int end_col_offset)\ntype_ignore = TypeIgnore(int lineno, string tag)\ntype_param = TypeVar(identifier name, expr? bound, expr? default_value)\n| ParamSpec(identifier name, expr? default_value)\n| TypeVarTuple(identifier name, expr? 
default_value)\nattributes (int lineno, int col_offset, int end_lineno, int end_col_offset)\n}\nNode classes\u00b6\n- class ast.AST\u00b6\nThis is the base of all AST node classes. The actual node classes are derived from the\nParser/Python.asdl\nfile, which is reproduced above. They are defined in the_ast\nC module and re-exported inast\n.There is one class defined for each left-hand side symbol in the abstract grammar (for example,\nast.stmt\norast.expr\n). In addition, there is one class defined for each constructor on the right-hand side; these classes inherit from the classes for the left-hand side trees. For example,ast.BinOp\ninherits fromast.expr\n. For production rules with alternatives (aka \u201csums\u201d), the left-hand side class is abstract: only instances of specific constructor nodes are ever created.- _fields\u00b6\nEach concrete class has an attribute\n_fields\nwhich gives the names of all child nodes.Each instance of a concrete class has one attribute for each child node, of the type as defined in the grammar. For example,\nast.BinOp\ninstances have an attributeleft\nof typeast.expr\n.If these attributes are marked as optional in the grammar (using a question mark), the value might be\nNone\n. If the attributes can have zero-or-more values (marked with an asterisk), the values are represented as Python lists. All possible attributes must be present and have valid values when compiling an AST withcompile()\n.\n- _field_types\u00b6\nThe\n_field_types\nattribute on each concrete class is a dictionary mapping field names (as also listed in_fields\n) to their types.>>> ast.TypeVar._field_types {'name': <class 'str'>, 'bound': ast.expr | None, 'default_value': ast.expr | None}\nAdded in version 3.13.\n- lineno\u00b6\n- col_offset\u00b6\n- end_lineno\u00b6\n- end_col_offset\u00b6\nInstances of\nast.expr\nandast.stmt\nsubclasses havelineno\n,col_offset\n,end_lineno\n, andend_col_offset\nattributes. 
Thelineno\nandend_lineno\nare the first and last line numbers of source text span (1-indexed so the first line is line 1) and thecol_offset\nandend_col_offset\nare the corresponding UTF-8 byte offsets of the first and last tokens that generated the node. The UTF-8 offset is recorded because the parser uses UTF-8 internally.Note that the end positions are not required by the compiler and are therefore optional. The end offset is after the last symbol, for example one can get the source segment of a one-line expression node using\nsource_line[node.col_offset : node.end_col_offset]\n.\nThe constructor of a class\nast.T\nparses its arguments as follows:If there are positional arguments, there must be as many as there are items in\nT._fields\n; they will be assigned as attributes of these names.If there are keyword arguments, they will set the attributes of the same names to the given values.\nFor example, to create and populate an\nast.UnaryOp\nnode, you could usenode = ast.UnaryOp(ast.USub(), ast.Constant(5, lineno=0, col_offset=0), lineno=0, col_offset=0)\nIf a field that is optional in the grammar is omitted from the constructor, it defaults to\nNone\n. If a list field is omitted, it defaults to the empty list. If a field of typeast.expr_context\nis omitted, it defaults toLoad()\n. If any other field is omitted, aDeprecationWarning\nis raised and the AST node will not have this field. In Python 3.15, this condition will raise an error.\nChanged in version 3.8: Class ast.Constant\nis now used for all constants.\nChanged in version 3.9: Simple indices are represented by their value, extended slices are represented as tuples.\nChanged in version 3.14: The __repr__()\noutput of AST\nnodes includes\nthe values of the node fields.\nDeprecated since version 3.8, removed in version 3.14: Previous versions of Python provided the AST classes ast.Num\n,\nast.Str\n, ast.Bytes\n, ast.NameConstant\nand\nast.Ellipsis\n, which were deprecated in Python 3.8. 
These classes\nwere removed in Python 3.14, and their functionality has been replaced with\nast.Constant\n.\nDeprecated since version 3.9: Old classes ast.Index\nand ast.ExtSlice\nare still\navailable, but they will be removed in future Python releases.\nIn the meantime, instantiating them will return an instance of\na different class.\nDeprecated since version 3.13, will be removed in version 3.15: Previous versions of Python allowed the creation of AST nodes that were missing required fields. Similarly, AST node constructors allowed arbitrary keyword arguments that were set as attributes of the AST node, even if they did not match any of the fields of the AST node. This behavior is deprecated and will be removed in Python 3.15.\nNote\nThe descriptions of the specific node classes displayed here were initially adapted from the fantastic Green Tree Snakes project and all its contributors.\nRoot nodes\u00b6\n- class ast.Module(body, type_ignores)\u00b6\nA Python module, as with file input. Node type generated by\nast.parse()\nin the default\"exec\"\nmode.body\nis alist\nof the module\u2019s Statements.type_ignores\nis alist\nof the module\u2019s type ignore comments; seeast.parse()\nfor more details.>>> print(ast.dump(ast.parse('x = 1'), indent=4)) Module( body=[ Assign( targets=[ Name(id='x', ctx=Store())], value=Constant(value=1))])\n- class ast.Expression(body)\u00b6\nA single Python expression input. Node type generated by\nast.parse()\nwhen mode is\"eval\"\n.body\nis a single node, one of the expression types.>>> print(ast.dump(ast.parse('123', mode='eval'), indent=4)) Expression( body=Constant(value=123))\n- class ast.Interactive(body)\u00b6\nA single interactive input, like in Interactive Mode. 
Node type generated by ast.parse() when mode is \"single\".\nbody is a list of statement nodes.\n>>> print(ast.dump(ast.parse('x = 1; y = 2', mode='single'), indent=4)) Interactive( body=[ Assign( targets=[ Name(id='x', ctx=Store())], value=Constant(value=1)), Assign( targets=[ Name(id='y', ctx=Store())], value=Constant(value=2))])\n- class ast.FunctionType(argtypes, returns)\u00b6\nA representation of old-style type comments for functions, as Python versions prior to 3.5 didn\u2019t support PEP 484 annotations. Node type generated by ast.parse() when mode is \"func_type\".\nSuch type comments would look like this:\ndef sum_two_numbers(a, b): # type: (int, int) -> int return a + b\nargtypes is a list of expression nodes.\nreturns is a single expression node.\n>>> print(ast.dump(ast.parse('(int, str) -> List[int]', mode='func_type'), indent=4)) FunctionType( argtypes=[ Name(id='int', ctx=Load()), Name(id='str', ctx=Load())], returns=Subscript( value=Name(id='List', ctx=Load()), slice=Name(id='int', ctx=Load()), ctx=Load()))\nAdded in version 3.8.\nLiterals\u00b6\n- class ast.Constant(value)\u00b6\nA constant value. The value attribute of the Constant literal contains the Python object it represents. The values represented can be instances of str, bytes, int, float, complex, and bool, and the constants None and Ellipsis.\n>>> print(ast.dump(ast.parse('123', mode='eval'), indent=4)) Expression( body=Constant(value=123))\n- class ast.FormattedValue(value, conversion, format_spec)\u00b6\nNode representing a single formatting field in an f-string. If the string contains a single formatting field and nothing else, the node can be isolated; otherwise, it appears in JoinedStr.\nvalue is any expression node (such as a literal, a variable, or a function call).\nconversion is an integer:\n-1: no formatting\n115 (ord('s')): !s string formatting\n114 (ord('r')): !r repr() formatting\n97 (ord('a')): !a ASCII formatting\nformat_spec is a JoinedStr node representing the formatting of the value, or None if no format was specified.
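For example (a small sketch; the f-string here is arbitrary), parsing an f-string with an explicit !r conversion shows the conversion recorded as the integer 114, i.e. ord('r'):

```python
import ast

# The Expression body is a JoinedStr; its first value is the FormattedValue
# for {x!r}, whose conversion field holds ord('r') == 114.
node = ast.parse('f"{x!r}"', mode="eval").body.values[0]
print(type(node).__name__, node.conversion)  # FormattedValue 114
```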
Both conversion and format_spec can be set at the same time.\n- class ast.JoinedStr(values)\u00b6\nAn f-string, comprising a series of FormattedValue and Constant nodes.\n>>> print(ast.dump(ast.parse('f\"sin({a}) is {sin(a):.3}\"', mode='eval'), indent=4)) Expression( body=JoinedStr( values=[ Constant(value='sin('), FormattedValue( value=Name(id='a', ctx=Load()), conversion=-1), Constant(value=') is '), FormattedValue( value=Call( func=Name(id='sin', ctx=Load()), args=[ Name(id='a', ctx=Load())]), conversion=-1, format_spec=JoinedStr( values=[ Constant(value='.3')]))]))\n- class ast.TemplateStr(values, /)\u00b6\nAdded in version 3.14.\nNode representing a template string literal, comprising a series of Interpolation and Constant nodes. These nodes may appear in any order, and do not need to be interleaved.\n>>> expr = ast.parse('t\"{name} finished {place:ordinal}\"', mode='eval') >>> print(ast.dump(expr, indent=4)) Expression( body=TemplateStr( values=[ Interpolation( value=Name(id='name', ctx=Load()), str='name', conversion=-1), Constant(value=' finished '), Interpolation( value=Name(id='place', ctx=Load()), str='place', conversion=-1, format_spec=JoinedStr( values=[ Constant(value='ordinal')]))]))\n- class ast.Interpolation(value, str, conversion, format_spec=None)\u00b6\nAdded in version 3.14.\nNode representing a single interpolation field in a template string literal.\nvalue is any expression node (such as a literal, a variable, or a function call). This has the same meaning as FormattedValue.value.\nstr is a constant containing the text of the interpolation expression.\nIf str is set to None, then value is used to generate code when calling ast.unparse().
This no longer guarantees that the generated code is identical to the original and is intended for code generation.\nconversion is an integer:\n-1: no conversion\n97 (ord('a')): !a ASCII conversion\n114 (ord('r')): !r repr() conversion\n115 (ord('s')): !s string conversion\nThis has the same meaning as FormattedValue.conversion.\nformat_spec is a JoinedStr node representing the formatting of the value, or None if no format was specified. Both conversion and format_spec can be set at the same time. This has the same meaning as FormattedValue.format_spec.\n- class ast.List(elts, ctx)\u00b6\n- class ast.Tuple(elts, ctx)\u00b6\nA list or tuple. elts holds a list of nodes representing the elements. ctx is Store if the container is an assignment target (i.e. (x,y) = something), and Load otherwise.\n>>> print(ast.dump(ast.parse('[1, 2, 3]', mode='eval'), indent=4)) Expression( body=List( elts=[ Constant(value=1), Constant(value=2), Constant(value=3)], ctx=Load())) >>> print(ast.dump(ast.parse('(1, 2, 3)', mode='eval'), indent=4)) Expression( body=Tuple( elts=[ Constant(value=1), Constant(value=2), Constant(value=3)], ctx=Load()))\n- class ast.Set(elts)\u00b6\nA set. elts holds a list of nodes representing the set\u2019s elements.\n>>> print(ast.dump(ast.parse('{1, 2, 3}', mode='eval'), indent=4)) Expression( body=Set( elts=[ Constant(value=1), Constant(value=2), Constant(value=3)]))\n- class ast.Dict(keys, values)\u00b6\nA dictionary. keys and values hold lists of nodes representing the keys and the values respectively, in matching order (what would be returned when calling dictionary.keys() and dictionary.values()).\nWhen doing dictionary unpacking using dictionary literals, the expression to be expanded goes in the values list, with a None at the corresponding position in keys.\n>>> print(ast.dump(ast.parse('{\"a\":1, **d}', mode='eval'), indent=4)) Expression( body=Dict( keys=[ Constant(value='a'), None], values=[ Constant(value=1), Name(id='d',
ctx=Load())]))\nVariables\u00b6\n- class ast.Name(id, ctx)\u00b6\nA variable name.\nid\nholds the name as a string, andctx\nis one of the following types.\n- class ast.Load\u00b6\n- class ast.Store\u00b6\n- class ast.Del\u00b6\nVariable references can be used to load the value of a variable, to assign a new value to it, or to delete it. Variable references are given a context to distinguish these cases.\n>>> print(ast.dump(ast.parse('a'), indent=4)) Module( body=[ Expr( value=Name(id='a', ctx=Load()))]) >>> print(ast.dump(ast.parse('a = 1'), indent=4)) Module( body=[ Assign( targets=[ Name(id='a', ctx=Store())], value=Constant(value=1))]) >>> print(ast.dump(ast.parse('del a'), indent=4)) Module( body=[ Delete( targets=[ Name(id='a', ctx=Del())])])\n- class ast.Starred(value, ctx)\u00b6\nA\n*var\nvariable reference.value\nholds the variable, typically aName\nnode. This type must be used when building aCall\nnode with*args\n.>>> print(ast.dump(ast.parse('a, *b = it'), indent=4)) Module( body=[ Assign( targets=[ Tuple( elts=[ Name(id='a', ctx=Store()), Starred( value=Name(id='b', ctx=Store()), ctx=Store())], ctx=Store())], value=Name(id='it', ctx=Load()))])\nExpressions\u00b6\n- class ast.Expr(value)\u00b6\nWhen an expression, such as a function call, appears as a statement by itself with its return value not used or stored, it is wrapped in this container.\nvalue\nholds one of the other nodes in this section, aConstant\n, aName\n, aLambda\n, aYield\norYieldFrom\nnode.>>> print(ast.dump(ast.parse('-a'), indent=4)) Module( body=[ Expr( value=UnaryOp( op=USub(), operand=Name(id='a', ctx=Load())))])\n- class ast.UnaryOp(op, operand)\u00b6\nA unary operation.\nop\nis the operator, andoperand\nany expression node.\n- class ast.UAdd\u00b6\n- class ast.USub\u00b6\n- class ast.Not\u00b6\n- class ast.Invert\u00b6\nUnary operator tokens.\nNot\nis thenot\nkeyword,Invert\nis the~\noperator.>>> print(ast.dump(ast.parse('not x', mode='eval'), indent=4)) Expression( body=UnaryOp( 
op=Not(), operand=Name(id='x', ctx=Load())))\n- class ast.BinOp(left, op, right)\u00b6\nA binary operation (like addition or division).\nop\nis the operator, andleft\nandright\nare any expression nodes.>>> print(ast.dump(ast.parse('x + y', mode='eval'), indent=4)) Expression( body=BinOp( left=Name(id='x', ctx=Load()), op=Add(), right=Name(id='y', ctx=Load())))\n- class ast.Add\u00b6\n- class ast.Sub\u00b6\n- class ast.Mult\u00b6\n- class ast.Div\u00b6\n- class ast.FloorDiv\u00b6\n- class ast.Mod\u00b6\n- class ast.Pow\u00b6\n- class ast.LShift\u00b6\n- class ast.RShift\u00b6\n- class ast.BitOr\u00b6\n- class ast.BitXor\u00b6\n- class ast.BitAnd\u00b6\n- class ast.MatMult\u00b6\nBinary operator tokens.\n- class ast.BoolOp(op, values)\u00b6\nA boolean operation, \u2018or\u2019 or \u2018and\u2019.\nop\nisOr\norAnd\n.values\nare the values involved. Consecutive operations with the same operator, such asa or b or c\n, are collapsed into one node with several values.This doesn\u2019t include\nnot\n, which is aUnaryOp\n.>>> print(ast.dump(ast.parse('x or y', mode='eval'), indent=4)) Expression( body=BoolOp( op=Or(), values=[ Name(id='x', ctx=Load()), Name(id='y', ctx=Load())]))\n- class ast.Compare(left, ops, comparators)\u00b6\nA comparison of two or more values.\nleft\nis the first value in the comparison,ops\nthe list of operators, andcomparators\nthe list of values after the first element in the comparison.>>> print(ast.dump(ast.parse('1 <= a < 10', mode='eval'), indent=4)) Expression( body=Compare( left=Constant(value=1), ops=[ LtE(), Lt()], comparators=[ Name(id='a', ctx=Load()), Constant(value=10)]))\n- class ast.Eq\u00b6\n- class ast.NotEq\u00b6\n- class ast.Lt\u00b6\n- class ast.LtE\u00b6\n- class ast.Gt\u00b6\n- class ast.GtE\u00b6\n- class ast.Is\u00b6\n- class ast.IsNot\u00b6\n- class ast.In\u00b6\n- class ast.NotIn\u00b6\nComparison operator tokens.\n- class ast.Call(func, args, keywords)\u00b6\nA function call.\nfunc\nis the function, which will often be 
aName\norAttribute\nobject. Of the arguments:args\nholds a list of the arguments passed by position.keywords\nholds a list ofkeyword\nobjects representing arguments passed by keyword.\nThe\nargs\nandkeywords\narguments are optional and default to empty lists.>>> print(ast.dump(ast.parse('func(a, b=c, *d, **e)', mode='eval'), indent=4)) Expression( body=Call( func=Name(id='func', ctx=Load()), args=[ Name(id='a', ctx=Load()), Starred( value=Name(id='d', ctx=Load()), ctx=Load())], keywords=[ keyword( arg='b', value=Name(id='c', ctx=Load())), keyword( value=Name(id='e', ctx=Load()))]))\n- class ast.keyword(arg, value)\u00b6\nA keyword argument to a function call or class definition.\narg\nis a raw string of the parameter name,value\nis a node to pass in.\n- class ast.IfExp(test, body, orelse)\u00b6\nAn expression such as\na if b else c\n. Each field holds a single node, so in the following example, all three areName\nnodes.>>> print(ast.dump(ast.parse('a if b else c', mode='eval'), indent=4)) Expression( body=IfExp( test=Name(id='b', ctx=Load()), body=Name(id='a', ctx=Load()), orelse=Name(id='c', ctx=Load())))\n- class ast.Attribute(value, attr, ctx)\u00b6\nAttribute access, e.g.\nd.keys\n.value\nis a node, typically aName\n.attr\nis a bare string giving the name of the attribute, andctx\nisLoad\n,Store\norDel\naccording to how the attribute is acted on.>>> print(ast.dump(ast.parse('snake.colour', mode='eval'), indent=4)) Expression( body=Attribute( value=Name(id='snake', ctx=Load()), attr='colour', ctx=Load()))\n- class ast.NamedExpr(target, value)\u00b6\nA named expression. This AST node is produced by the assignment expressions operator (also known as the walrus operator). 
As opposed to the Assign node, in which the first argument can be multiple nodes, in this case both target and value must be single nodes.\n>>> print(ast.dump(ast.parse('(x := 4)', mode='eval'), indent=4)) Expression( body=NamedExpr( target=Name(id='x', ctx=Store()), value=Constant(value=4)))\nAdded in version 3.8.\nSubscripting\u00b6\n- class ast.Subscript(value, slice, ctx)\u00b6\nA subscript, such as l[1]. value is the subscripted object (usually a sequence or mapping). slice is an index, slice or key. It can be a Tuple and contain a Slice. ctx is Load, Store or Del according to the action performed with the subscript.\n>>> print(ast.dump(ast.parse('l[1:2, 3]', mode='eval'), indent=4)) Expression( body=Subscript( value=Name(id='l', ctx=Load()), slice=Tuple( elts=[ Slice( lower=Constant(value=1), upper=Constant(value=2)), Constant(value=3)], ctx=Load()), ctx=Load()))\n- class ast.Slice(lower, upper, step)\u00b6\nRegular slicing (of the form lower:upper or lower:upper:step). Can occur only inside the slice field of Subscript, either directly or as an element of Tuple.\n>>> print(ast.dump(ast.parse('l[1:2]', mode='eval'), indent=4)) Expression( body=Subscript( value=Name(id='l', ctx=Load()), slice=Slice( lower=Constant(value=1), upper=Constant(value=2)), ctx=Load()))\nComprehensions\u00b6\n- class ast.ListComp(elt, generators)\u00b6\n- class ast.SetComp(elt, generators)\u00b6\n- class ast.GeneratorExp(elt, generators)\u00b6\n- class ast.DictComp(key, value, generators)\u00b6\nList and set comprehensions, generator expressions, and dictionary comprehensions. elt (or key and value) is a single node representing the part that will be evaluated for each item.\ngenerators is a list of comprehension nodes.\n>>> print(ast.dump( ... ast.parse('[x for x in numbers]', mode='eval'), ... indent=4, ...
)) Expression( body=ListComp( elt=Name(id='x', ctx=Load()), generators=[ comprehension( target=Name(id='x', ctx=Store()), iter=Name(id='numbers', ctx=Load()), is_async=0)])) >>> print(ast.dump( ... ast.parse('{x: x**2 for x in numbers}', mode='eval'), ... indent=4, ... )) Expression( body=DictComp( key=Name(id='x', ctx=Load()), value=BinOp( left=Name(id='x', ctx=Load()), op=Pow(), right=Constant(value=2)), generators=[ comprehension( target=Name(id='x', ctx=Store()), iter=Name(id='numbers', ctx=Load()), is_async=0)])) >>> print(ast.dump( ... ast.parse('{x for x in numbers}', mode='eval'), ... indent=4, ... )) Expression( body=SetComp( elt=Name(id='x', ctx=Load()), generators=[ comprehension( target=Name(id='x', ctx=Store()), iter=Name(id='numbers', ctx=Load()), is_async=0)]))\n- class ast.comprehension(target, iter, ifs, is_async)\u00b6\nOne\nfor\nclause in a comprehension.target\nis the reference to use for each element - typically aName\norTuple\nnode.iter\nis the object to iterate over.ifs\nis a list of test expressions: eachfor\nclause can have multipleifs\n.is_async\nindicates a comprehension is asynchronous (using anasync for\ninstead offor\n). The value is an integer (0 or 1).>>> print(ast.dump(ast.parse('[ord(c) for line in file for c in line]', mode='eval'), ... indent=4)) # Multiple comprehensions in one. Expression( body=ListComp( elt=Call( func=Name(id='ord', ctx=Load()), args=[ Name(id='c', ctx=Load())]), generators=[ comprehension( target=Name(id='line', ctx=Store()), iter=Name(id='file', ctx=Load()), is_async=0), comprehension( target=Name(id='c', ctx=Store()), iter=Name(id='line', ctx=Load()), is_async=0)])) >>> print(ast.dump(ast.parse('(n**2 for n in it if n>5 if n<10)', mode='eval'), ... 
indent=4)) # generator comprehension Expression( body=GeneratorExp( elt=BinOp( left=Name(id='n', ctx=Load()), op=Pow(), right=Constant(value=2)), generators=[ comprehension( target=Name(id='n', ctx=Store()), iter=Name(id='it', ctx=Load()), ifs=[ Compare( left=Name(id='n', ctx=Load()), ops=[ Gt()], comparators=[ Constant(value=5)]), Compare( left=Name(id='n', ctx=Load()), ops=[ Lt()], comparators=[ Constant(value=10)])], is_async=0)])) >>> print(ast.dump(ast.parse('[i async for i in soc]', mode='eval'), ... indent=4)) # Async comprehension Expression( body=ListComp( elt=Name(id='i', ctx=Load()), generators=[ comprehension( target=Name(id='i', ctx=Store()), iter=Name(id='soc', ctx=Load()), is_async=1)]))\nStatements\u00b6\n- class ast.Assign(targets, value, type_comment)\u00b6\nAn assignment.\ntargets\nis a list of nodes, andvalue\nis a single node.Multiple nodes in\ntargets\nrepresents assigning the same value to each. Unpacking is represented by putting aTuple\norList\nwithintargets\n.- type_comment\u00b6\ntype_comment\nis an optional string with the type annotation as a comment.\n>>> print(ast.dump(ast.parse('a = b = 1'), indent=4)) # Multiple assignment Module( body=[ Assign( targets=[ Name(id='a', ctx=Store()), Name(id='b', ctx=Store())], value=Constant(value=1))]) >>> print(ast.dump(ast.parse('a,b = c'), indent=4)) # Unpacking Module( body=[ Assign( targets=[ Tuple( elts=[ Name(id='a', ctx=Store()), Name(id='b', ctx=Store())], ctx=Store())], value=Name(id='c', ctx=Load()))])\n- class ast.AnnAssign(target, annotation, value, simple)\u00b6\nAn assignment with a type annotation.\ntarget\nis a single node and can be aName\n, anAttribute\nor aSubscript\n.annotation\nis the annotation, such as aConstant\norName\nnode.value\nis a single optional node.simple\nis always either 0 (indicating a \u201ccomplex\u201d target) or 1 (indicating a \u201csimple\u201d target). 
A \u201csimple\u201d target consists solely of aName\nnode that does not appear between parentheses; all other targets are considered complex. Only simple targets appear in the__annotations__\ndictionary of modules and classes.>>> print(ast.dump(ast.parse('c: int'), indent=4)) Module( body=[ AnnAssign( target=Name(id='c', ctx=Store()), annotation=Name(id='int', ctx=Load()), simple=1)]) >>> print(ast.dump(ast.parse('(a): int = 1'), indent=4)) # Annotation with parenthesis Module( body=[ AnnAssign( target=Name(id='a', ctx=Store()), annotation=Name(id='int', ctx=Load()), value=Constant(value=1), simple=0)]) >>> print(ast.dump(ast.parse('a.b: int'), indent=4)) # Attribute annotation Module( body=[ AnnAssign( target=Attribute( value=Name(id='a', ctx=Load()), attr='b', ctx=Store()), annotation=Name(id='int', ctx=Load()), simple=0)]) >>> print(ast.dump(ast.parse('a[1]: int'), indent=4)) # Subscript annotation Module( body=[ AnnAssign( target=Subscript( value=Name(id='a', ctx=Load()), slice=Constant(value=1), ctx=Store()), annotation=Name(id='int', ctx=Load()), simple=0)])\n- class ast.AugAssign(target, op, value)\u00b6\nAugmented assignment, such as\na += 1\n. 
In the following example, target is a Name node for x (with the Store context), op is Add, and value is a Constant with value 2.\nThe target attribute cannot be of class Tuple or List, unlike the targets of Assign.\n>>> print(ast.dump(ast.parse('x += 2'), indent=4)) Module( body=[ AugAssign( target=Name(id='x', ctx=Store()), op=Add(), value=Constant(value=2))])\n- class ast.Raise(exc, cause)\u00b6\nA raise statement. exc is the exception object to be raised, normally a Call or Name, or None for a standalone raise.\ncause is the optional part for y in raise x from y.\n>>> print(ast.dump(ast.parse('raise x from y'), indent=4)) Module( body=[ Raise( exc=Name(id='x', ctx=Load()), cause=Name(id='y', ctx=Load()))])\n- class ast.Assert(test, msg)\u00b6\nAn assertion. test holds the condition, such as a Compare node. msg holds the failure message.\n>>> print(ast.dump(ast.parse('assert x,y'), indent=4)) Module( body=[ Assert( test=Name(id='x', ctx=Load()), msg=Name(id='y', ctx=Load()))])\n- class ast.Delete(targets)\u00b6\nRepresents a del statement. targets is a list of nodes, such as Name, Attribute or Subscript nodes.\n>>> print(ast.dump(ast.parse('del x,y,z'), indent=4)) Module( body=[ Delete( targets=[ Name(id='x', ctx=Del()), Name(id='y', ctx=Del()), Name(id='z', ctx=Del())])])\n- class ast.Pass\u00b6\nA pass statement.\n>>> print(ast.dump(ast.parse('pass'), indent=4)) Module( body=[ Pass()])\n- class ast.TypeAlias(name, type_params, value)\u00b6\nA type alias created through the type statement. name is the name of the alias, type_params is a list of type parameters, and value is the value of the type alias.\n>>> print(ast.dump(ast.parse('type Alias = int'), indent=4)) Module( body=[ TypeAlias( name=Name(id='Alias', ctx=Store()), value=Name(id='int', ctx=Load()))])\nAdded in version 3.12.\nOther statements which are only applicable inside functions or loops are described in other sections.\nImports\u00b6\n- class ast.Import(names)\u00b6\nAn import
statement.\nnames\nis a list ofalias\nnodes.>>> print(ast.dump(ast.parse('import x,y,z'), indent=4)) Module( body=[ Import( names=[ alias(name='x'), alias(name='y'), alias(name='z')])])\n- class ast.ImportFrom(module, names, level)\u00b6\nRepresents\nfrom x import y\n.module\nis a raw string of the \u2018from\u2019 name, without any leading dots, orNone\nfor statements such asfrom . import foo\n.level\nis an integer holding the level of the relative import (0 means absolute import).>>> print(ast.dump(ast.parse('from y import x,y,z'), indent=4)) Module( body=[ ImportFrom( module='y', names=[ alias(name='x'), alias(name='y'), alias(name='z')], level=0)])\n- class ast.alias(name, asname)\u00b6\nBoth parameters are raw strings of the names.\nasname\ncan beNone\nif the regular name is to be used.>>> print(ast.dump(ast.parse('from ..foo.bar import a as b, c'), indent=4)) Module( body=[ ImportFrom( module='foo.bar', names=[ alias(name='a', asname='b'), alias(name='c')], level=2)])\nControl flow\u00b6\nNote\nOptional clauses such as else\nare stored as an empty list if they\u2019re\nnot present.\n- class ast.If(test, body, orelse)\u00b6\nAn\nif\nstatement.test\nholds a single node, such as aCompare\nnode.body\nandorelse\neach hold a list of nodes.elif\nclauses don\u2019t have a special representation in the AST, but rather appear as extraIf\nnodes within theorelse\nsection of the previous one.>>> print(ast.dump(ast.parse(\"\"\" ... if x: ... ... ... elif y: ... ... ... else: ... ... ... 
\"\"\"), indent=4)) Module( body=[ If( test=Name(id='x', ctx=Load()), body=[ Expr( value=Constant(value=Ellipsis))], orelse=[ If( test=Name(id='y', ctx=Load()), body=[ Expr( value=Constant(value=Ellipsis))], orelse=[ Expr( value=Constant(value=Ellipsis))])])])\n- class ast.For(target, iter, body, orelse, type_comment)\u00b6\nA\nfor\nloop.target\nholds the variable(s) the loop assigns to, as a singleName\n,Tuple\n,List\n,Attribute\norSubscript\nnode.iter\nholds the item to be looped over, again as a single node.body\nandorelse\ncontain lists of nodes to execute. Those inorelse\nare executed if the loop finishes normally, rather than via abreak\nstatement.- type_comment\u00b6\ntype_comment\nis an optional string with the type annotation as a comment.\n>>> print(ast.dump(ast.parse(\"\"\" ... for x in y: ... ... ... else: ... ... ... \"\"\"), indent=4)) Module( body=[ For( target=Name(id='x', ctx=Store()), iter=Name(id='y', ctx=Load()), body=[ Expr( value=Constant(value=Ellipsis))], orelse=[ Expr( value=Constant(value=Ellipsis))])])\n- class ast.While(test, body, orelse)\u00b6\nA\nwhile\nloop.test\nholds the condition, such as aCompare\nnode.>>> print(ast.dump(ast.parse(\"\"\" ... while x: ... ... ... else: ... ... ... \"\"\"), indent=4)) Module( body=[ While( test=Name(id='x', ctx=Load()), body=[ Expr( value=Constant(value=Ellipsis))], orelse=[ Expr( value=Constant(value=Ellipsis))])])\n- class ast.Break\u00b6\n- class ast.Continue\u00b6\nThe\nbreak\nandcontinue\nstatements.>>> print(ast.dump(ast.parse(\"\"\"\\ ... for a in b: ... if a > 5: ... break ... else: ... continue ... ... \"\"\"), indent=4)) Module( body=[ For( target=Name(id='a', ctx=Store()), iter=Name(id='b', ctx=Load()), body=[ If( test=Compare( left=Name(id='a', ctx=Load()), ops=[ Gt()], comparators=[ Constant(value=5)]), body=[ Break()], orelse=[ Continue()])])])\n- class ast.Try(body, handlers, orelse, finalbody)\u00b6\ntry\nblocks. 
All attributes are lists of nodes to execute, except for handlers, which is a list of ExceptHandler nodes.\n>>> print(ast.dump(ast.parse(\"\"\" ... try: ... ... ... except Exception: ... ... ... except OtherException as e: ... ... ... else: ... ... ... finally: ... ... ... \"\"\"), indent=4)) Module( body=[ Try( body=[ Expr( value=Constant(value=Ellipsis))], handlers=[ ExceptHandler( type=Name(id='Exception', ctx=Load()), body=[ Expr( value=Constant(value=Ellipsis))]), ExceptHandler( type=Name(id='OtherException', ctx=Load()), name='e', body=[ Expr( value=Constant(value=Ellipsis))])], orelse=[ Expr( value=Constant(value=Ellipsis))], finalbody=[ Expr( value=Constant(value=Ellipsis))])])\n- class ast.TryStar(body, handlers, orelse, finalbody)\u00b6\ntry blocks which are followed by except* clauses. The attributes are the same as for Try, but the ExceptHandler nodes in handlers are interpreted as except* blocks rather than except.\n>>> print(ast.dump(ast.parse(\"\"\" ... try: ... ... ... except* Exception: ... ... ... \"\"\"), indent=4)) Module( body=[ TryStar( body=[ Expr( value=Constant(value=Ellipsis))], handlers=[ ExceptHandler( type=Name(id='Exception', ctx=Load()), body=[ Expr( value=Constant(value=Ellipsis))])])])\nAdded in version 3.11.\n- class ast.ExceptHandler(type, name, body)\u00b6\nA single except clause. type is the exception type it will match, typically a Name node (or None for a catch-all except: clause). name is a raw string for the name to hold the exception, or None if the clause doesn\u2019t have as foo. body is a list of nodes.\n>>> print(ast.dump(ast.parse(\"\"\"\\ ... try: ... a + 1 ... except TypeError: ... pass ...
\"\"\"), indent=4)) Module( body=[ Try( body=[ Expr( value=BinOp( left=Name(id='a', ctx=Load()), op=Add(), right=Constant(value=1)))], handlers=[ ExceptHandler( type=Name(id='TypeError', ctx=Load()), body=[ Pass()])])])\n- class ast.With(items, body, type_comment)\u00b6\nA\nwith\nblock.items\nis a list ofwithitem\nnodes representing the context managers, andbody\nis the indented block inside the context.- type_comment\u00b6\ntype_comment\nis an optional string with the type annotation as a comment.\n- class ast.withitem(context_expr, optional_vars)\u00b6\nA single context manager in a\nwith\nblock.context_expr\nis the context manager, often aCall\nnode.optional_vars\nis aName\n,Tuple\norList\nfor theas foo\npart, orNone\nif that isn\u2019t used.>>> print(ast.dump(ast.parse(\"\"\"\\ ... with a as b, c as d: ... something(b, d) ... \"\"\"), indent=4)) Module( body=[ With( items=[ withitem( context_expr=Name(id='a', ctx=Load()), optional_vars=Name(id='b', ctx=Store())), withitem( context_expr=Name(id='c', ctx=Load()), optional_vars=Name(id='d', ctx=Store()))], body=[ Expr( value=Call( func=Name(id='something', ctx=Load()), args=[ Name(id='b', ctx=Load()), Name(id='d', ctx=Load())]))])])\nPattern matching\u00b6\n- class ast.Match(subject, cases)\u00b6\nA\nmatch\nstatement.subject\nholds the subject of the match (the object that is being matched against the cases) andcases\ncontains an iterable ofmatch_case\nnodes with the different cases.Added in version 3.10.\n- class ast.match_case(pattern, guard, body)\u00b6\nA single case pattern in a\nmatch\nstatement.pattern\ncontains the match pattern that the subject will be matched against. 
Note that theAST\nnodes produced for patterns differ from those produced for expressions, even when they share the same syntax.The\nguard\nattribute contains an expression that will be evaluated if the pattern matches the subject.body\ncontains a list of nodes to execute if the pattern matches and the result of evaluating the guard expression is true.>>> print(ast.dump(ast.parse(\"\"\" ... match x: ... case [x] if x>0: ... ... ... case tuple(): ... ... ... \"\"\"), indent=4)) Module( body=[ Match( subject=Name(id='x', ctx=Load()), cases=[ match_case( pattern=MatchSequence( patterns=[ MatchAs(name='x')]), guard=Compare( left=Name(id='x', ctx=Load()), ops=[ Gt()], comparators=[ Constant(value=0)]), body=[ Expr( value=Constant(value=Ellipsis))]), match_case( pattern=MatchClass( cls=Name(id='tuple', ctx=Load())), body=[ Expr( value=Constant(value=Ellipsis))])])])\nAdded in version 3.10.\n- class ast.MatchValue(value)\u00b6\nA match literal or value pattern that compares by equality.\nvalue\nis an expression node. Permitted value nodes are restricted as described in the match statement documentation. This pattern succeeds if the match subject is equal to the evaluated value.>>> print(ast.dump(ast.parse(\"\"\" ... match x: ... case \"Relevant\": ... ... ... \"\"\"), indent=4)) Module( body=[ Match( subject=Name(id='x', ctx=Load()), cases=[ match_case( pattern=MatchValue( value=Constant(value='Relevant')), body=[ Expr( value=Constant(value=Ellipsis))])])])\nAdded in version 3.10.\n- class ast.MatchSingleton(value)\u00b6\nA match literal pattern that compares by identity.\nvalue\nis the singleton to be compared against:None\n,True\n, orFalse\n. This pattern succeeds if the match subject is the given constant.>>> print(ast.dump(ast.parse(\"\"\" ... match x: ... case None: ... ... ... 
\"\"\"), indent=4)) Module( body=[ Match( subject=Name(id='x', ctx=Load()), cases=[ match_case( pattern=MatchSingleton(value=None), body=[ Expr( value=Constant(value=Ellipsis))])])])\nAdded in version 3.10.\n- class ast.MatchSequence(patterns)\u00b6\nA match sequence pattern.\npatterns\ncontains the patterns to be matched against the subject elements if the subject is a sequence. Matches a variable length sequence if one of the subpatterns is aMatchStar\nnode, otherwise matches a fixed length sequence.>>> print(ast.dump(ast.parse(\"\"\" ... match x: ... case [1, 2]: ... ... ... \"\"\"), indent=4)) Module( body=[ Match( subject=Name(id='x', ctx=Load()), cases=[ match_case( pattern=MatchSequence( patterns=[ MatchValue( value=Constant(value=1)), MatchValue( value=Constant(value=2))]), body=[ Expr( value=Constant(value=Ellipsis))])])])\nAdded in version 3.10.\n- class ast.MatchStar(name)\u00b6\nMatches the rest of the sequence in a variable length match sequence pattern. If\nname\nis notNone\n, a list containing the remaining sequence elements is bound to that name if the overall sequence pattern is successful.>>> print(ast.dump(ast.parse(\"\"\" ... match x: ... case [1, 2, *rest]: ... ... ... case [*_]: ... ... ... \"\"\"), indent=4)) Module( body=[ Match( subject=Name(id='x', ctx=Load()), cases=[ match_case( pattern=MatchSequence( patterns=[ MatchValue( value=Constant(value=1)), MatchValue( value=Constant(value=2)), MatchStar(name='rest')]), body=[ Expr( value=Constant(value=Ellipsis))]), match_case( pattern=MatchSequence( patterns=[ MatchStar()]), body=[ Expr( value=Constant(value=Ellipsis))])])])\nAdded in version 3.10.\n- class ast.MatchMapping(keys, patterns, rest)\u00b6\nA match mapping pattern.\nkeys\nis a sequence of expression nodes.patterns\nis a corresponding sequence of pattern nodes.rest\nis an optional name that can be specified to capture the remaining mapping elements. 
Permitted key expressions are restricted as described in the match statement documentation.This pattern succeeds if the subject is a mapping, all evaluated key expressions are present in the mapping, and the value corresponding to each key matches the corresponding subpattern. If\nrest\nis notNone\n, a dict containing the remaining mapping elements is bound to that name if the overall mapping pattern is successful.>>> print(ast.dump(ast.parse(\"\"\" ... match x: ... case {1: _, 2: _}: ... ... ... case {**rest}: ... ... ... \"\"\"), indent=4)) Module( body=[ Match( subject=Name(id='x', ctx=Load()), cases=[ match_case( pattern=MatchMapping( keys=[ Constant(value=1), Constant(value=2)], patterns=[ MatchAs(), MatchAs()]), body=[ Expr( value=Constant(value=Ellipsis))]), match_case( pattern=MatchMapping(rest='rest'), body=[ Expr( value=Constant(value=Ellipsis))])])])\nAdded in version 3.10.\n- class ast.MatchClass(cls, patterns, kwd_attrs, kwd_patterns)\u00b6\nA match class pattern.\ncls\nis an expression giving the nominal class to be matched.patterns\nis a sequence of pattern nodes to be matched against the class defined sequence of pattern matching attributes.kwd_attrs\nis a sequence of additional attributes to be matched (specified as keyword arguments in the class pattern),kwd_patterns\nare the corresponding patterns (specified as keyword values in the class pattern).This pattern succeeds if the subject is an instance of the nominated class, all positional patterns match the corresponding class-defined attributes, and any specified keyword attributes match their corresponding pattern.\nNote: classes may define a property that returns self in order to match a pattern node against the instance being matched. Several builtin types are also matched that way, as described in the match statement documentation.\n>>> print(ast.dump(ast.parse(\"\"\" ... match x: ... case Point2D(0, 0): ... ... ... case Point3D(x=0, y=0, z=0): ... ... ... 
\"\"\"), indent=4)) Module( body=[ Match( subject=Name(id='x', ctx=Load()), cases=[ match_case( pattern=MatchClass( cls=Name(id='Point2D', ctx=Load()), patterns=[ MatchValue( value=Constant(value=0)), MatchValue( value=Constant(value=0))]), body=[ Expr( value=Constant(value=Ellipsis))]), match_case( pattern=MatchClass( cls=Name(id='Point3D', ctx=Load()), kwd_attrs=[ 'x', 'y', 'z'], kwd_patterns=[ MatchValue( value=Constant(value=0)), MatchValue( value=Constant(value=0)), MatchValue( value=Constant(value=0))]), body=[ Expr( value=Constant(value=Ellipsis))])])])\nAdded in version 3.10.\n- class ast.MatchAs(pattern, name)\u00b6\nA match \u201cas-pattern\u201d, capture pattern or wildcard pattern.\npattern\ncontains the match pattern that the subject will be matched against. If the pattern isNone\n, the node represents a capture pattern (i.e a bare name) and will always succeed.The\nname\nattribute contains the name that will be bound if the pattern is successful. Ifname\nisNone\n,pattern\nmust also beNone\nand the node represents the wildcard pattern.>>> print(ast.dump(ast.parse(\"\"\" ... match x: ... case [x] as y: ... ... ... case _: ... ... ... \"\"\"), indent=4)) Module( body=[ Match( subject=Name(id='x', ctx=Load()), cases=[ match_case( pattern=MatchAs( pattern=MatchSequence( patterns=[ MatchAs(name='x')]), name='y'), body=[ Expr( value=Constant(value=Ellipsis))]), match_case( pattern=MatchAs(), body=[ Expr( value=Constant(value=Ellipsis))])])])\nAdded in version 3.10.\n- class ast.MatchOr(patterns)\u00b6\nA match \u201cor-pattern\u201d. An or-pattern matches each of its subpatterns in turn to the subject, until one succeeds. The or-pattern is then deemed to succeed. If none of the subpatterns succeed the or-pattern fails. The\npatterns\nattribute contains a list of match pattern nodes that will be matched against the subject.>>> print(ast.dump(ast.parse(\"\"\" ... match x: ... case [x] | (y): ... ... ... 
\"\"\"), indent=4)) Module( body=[ Match( subject=Name(id='x', ctx=Load()), cases=[ match_case( pattern=MatchOr( patterns=[ MatchSequence( patterns=[ MatchAs(name='x')]), MatchAs(name='y')]), body=[ Expr( value=Constant(value=Ellipsis))])])])\nAdded in version 3.10.\nType annotations\u00b6\n- class ast.TypeIgnore(lineno, tag)\u00b6\nA\n# type: ignore\ncomment located at lineno. tag is the optional tag specified by the form# type: ignore \n.>>> print(ast.dump(ast.parse('x = 1 # type: ignore', type_comments=True), indent=4)) Module( body=[ Assign( targets=[ Name(id='x', ctx=Store())], value=Constant(value=1))], type_ignores=[ TypeIgnore(lineno=1, tag='')]) >>> print(ast.dump(ast.parse('x: bool = 1 # type: ignore[assignment]', type_comments=True), indent=4)) Module( body=[ AnnAssign( target=Name(id='x', ctx=Store()), annotation=Name(id='bool', ctx=Load()), value=Constant(value=1), simple=1)], type_ignores=[ TypeIgnore(lineno=1, tag='[assignment]')])\nNote\nTypeIgnore\nnodes are not generated when the type_comments parameter is set toFalse\n(default). Seeast.parse()\nfor more details.Added in version 3.8.\nType parameters\u00b6\nType parameters can exist on classes, functions, and type aliases.\n- class ast.TypeVar(name, bound, default_value)\u00b6\nA\ntyping.TypeVar\n.name\nis the name of the type variable.bound\nis the bound or constraints, if any. 
If bound is a Tuple, it represents constraints; otherwise it represents the bound. default_value is the default value; if the TypeVar has no default, this attribute will be set to None.\n>>> print(ast.dump(ast.parse(\"type Alias[T: int = bool] = list[T]\"), indent=4)) Module( body=[ TypeAlias( name=Name(id='Alias', ctx=Store()), type_params=[ TypeVar( name='T', bound=Name(id='int', ctx=Load()), default_value=Name(id='bool', ctx=Load()))], value=Subscript( value=Name(id='list', ctx=Load()), slice=Name(id='T', ctx=Load()), ctx=Load()))])\nAdded in version 3.12.\nChanged in version 3.13: Added the default_value parameter.\n- class ast.ParamSpec(name, default_value)\u00b6\nA typing.ParamSpec. name is the name of the parameter specification. default_value is the default value; if the ParamSpec has no default, this attribute will be set to None.\n>>> print(ast.dump(ast.parse(\"type Alias[**P = [int, str]] = Callable[P, int]\"), indent=4)) Module( body=[ TypeAlias( name=Name(id='Alias', ctx=Store()), type_params=[ ParamSpec( name='P', default_value=List( elts=[ Name(id='int', ctx=Load()), Name(id='str', ctx=Load())], ctx=Load()))], value=Subscript( value=Name(id='Callable', ctx=Load()), slice=Tuple( elts=[ Name(id='P', ctx=Load()), Name(id='int', ctx=Load())], ctx=Load()), ctx=Load()))])\nAdded in version 3.12.\nChanged in version 3.13: Added the default_value parameter.\n- class ast.TypeVarTuple(name, default_value)\u00b6\nA typing.TypeVarTuple. name is the name of the type variable tuple. default_value is the default value; if the TypeVarTuple has no default, this attribute will be set to None.\n>>> print(ast.dump(ast.parse(\"type Alias[*Ts = ()] = tuple[*Ts]\"), indent=4)) Module( body=[ TypeAlias( name=Name(id='Alias', ctx=Store()), type_params=[ TypeVarTuple( name='Ts', default_value=Tuple(ctx=Load()))], value=Subscript( value=Name(id='tuple', ctx=Load()), slice=Tuple( elts=[ Starred( value=Name(id='Ts', ctx=Load()), ctx=Load())], ctx=Load()), 
ctx=Load()))])\nAdded in version 3.12.\nChanged in version 3.13: Added the default_value parameter.\nFunction and class definitions\u00b6\n- class ast.FunctionDef(name, args, body, decorator_list, returns, type_comment, type_params)\u00b6\nA function definition.\nname\nis a raw string of the function name.args\nis anarguments\nnode.body\nis the list of nodes inside the function.decorator_list\nis the list of decorators to be applied, stored outermost first (i.e. the first in the list will be applied last).returns\nis the return annotation.type_params\nis a list of type parameters.\n- type_comment\u00b6\ntype_comment\nis an optional string with the type annotation as a comment.\nChanged in version 3.12: Added\ntype_params\n.\n- class ast.Lambda(args, body)\u00b6\nlambda\nis a minimal function definition that can be used inside an expression. UnlikeFunctionDef\n,body\nholds a single node.>>> print(ast.dump(ast.parse('lambda x,y: ...'), indent=4)) Module( body=[ Expr( value=Lambda( args=arguments( args=[ arg(arg='x'), arg(arg='y')]), body=Constant(value=Ellipsis)))])\n- class ast.arguments(posonlyargs, args, vararg, kwonlyargs, kw_defaults, kwarg, defaults)\u00b6\nThe arguments for a function.\nposonlyargs\n,args\nandkwonlyargs\nare lists ofarg\nnodes.vararg\nandkwarg\nare singlearg\nnodes, referring to the*args, **kwargs\nparameters.kw_defaults\nis a list of default values for keyword-only arguments. If one isNone\n, the corresponding argument is required.defaults\nis a list of default values for arguments that can be passed positionally. If there are fewer defaults, they correspond to the last n arguments.\n- class ast.arg(arg, annotation, type_comment)\u00b6\nA single argument in a list.\narg\nis a raw string of the argument name;annotation\nis its annotation, such as aName\nnode.- type_comment\u00b6\ntype_comment\nis an optional string with the type annotation as a comment\n>>> print(ast.dump(ast.parse(\"\"\"\\ ... @decorator1 ... @decorator2 ... 
def f(a: 'annotation', b=1, c=2, *d, e, f=3, **g) -> 'return annotation': ... pass ... \"\"\"), indent=4)) Module( body=[ FunctionDef( name='f', args=arguments( args=[ arg( arg='a', annotation=Constant(value='annotation')), arg(arg='b'), arg(arg='c')], vararg=arg(arg='d'), kwonlyargs=[ arg(arg='e'), arg(arg='f')], kw_defaults=[ None, Constant(value=3)], kwarg=arg(arg='g'), defaults=[ Constant(value=1), Constant(value=2)]), body=[ Pass()], decorator_list=[ Name(id='decorator1', ctx=Load()), Name(id='decorator2', ctx=Load())], returns=Constant(value='return annotation'))])\n- class ast.Return(value)\u00b6\nA\nreturn\nstatement.>>> print(ast.dump(ast.parse('return 4'), indent=4)) Module( body=[ Return( value=Constant(value=4))])\n- class ast.Yield(value)\u00b6\n- class ast.YieldFrom(value)\u00b6\nA\nyield\noryield from\nexpression. Because these are expressions, they must be wrapped in anExpr\nnode if the value sent back is not used.>>> print(ast.dump(ast.parse('yield x'), indent=4)) Module( body=[ Expr( value=Yield( value=Name(id='x', ctx=Load())))]) >>> print(ast.dump(ast.parse('yield from x'), indent=4)) Module( body=[ Expr( value=YieldFrom( value=Name(id='x', ctx=Load())))])\n- class ast.Global(names)\u00b6\n- class ast.Nonlocal(names)\u00b6\nglobal\nandnonlocal\nstatements.names\nis a list of raw strings.>>> print(ast.dump(ast.parse('global x,y,z'), indent=4)) Module( body=[ Global( names=[ 'x', 'y', 'z'])]) >>> print(ast.dump(ast.parse('nonlocal x,y,z'), indent=4)) Module( body=[ Nonlocal( names=[ 'x', 'y', 'z'])])\n- class ast.ClassDef(name, bases, keywords, body, decorator_list, type_params)\u00b6\nA class definition.\nname\nis a raw string for the class namebases\nis a list of nodes for explicitly specified base classes.keywords\nis a list ofkeyword\nnodes, principally for \u2018metaclass\u2019. 
Other keywords will be passed to the metaclass, as per PEP 3115.body\nis a list of nodes representing the code within the class definition.decorator_list\nis a list of nodes, as inFunctionDef\n.type_params\nis a list of type parameters.\n>>> print(ast.dump(ast.parse(\"\"\"\\ ... @decorator1 ... @decorator2 ... class Foo(base1, base2, metaclass=meta): ... pass ... \"\"\"), indent=4)) Module( body=[ ClassDef( name='Foo', bases=[ Name(id='base1', ctx=Load()), Name(id='base2', ctx=Load())], keywords=[ keyword( arg='metaclass', value=Name(id='meta', ctx=Load()))], body=[ Pass()], decorator_list=[ Name(id='decorator1', ctx=Load()), Name(id='decorator2', ctx=Load())])])\nChanged in version 3.12: Added\ntype_params\n.\nAsync and await\u00b6\n- class ast.AsyncFunctionDef(name, args, body, decorator_list, returns, type_comment, type_params)\u00b6\nAn\nasync def\nfunction definition. Has the same fields asFunctionDef\n.Changed in version 3.12: Added\ntype_params\n.\n- class ast.Await(value)\u00b6\nAn\nawait\nexpression.value\nis what it waits for. Only valid in the body of anAsyncFunctionDef\n.\n>>> print(ast.dump(ast.parse(\"\"\"\\\n... async def f():\n... await other_func()\n... \"\"\"), indent=4))\nModule(\nbody=[\nAsyncFunctionDef(\nname='f',\nargs=arguments(),\nbody=[\nExpr(\nvalue=Await(\nvalue=Call(\nfunc=Name(id='other_func', ctx=Load()))))])])\n- class ast.AsyncFor(target, iter, body, orelse, type_comment)\u00b6\n- class ast.AsyncWith(items, body, type_comment)\u00b6\nasync for\nloops andasync with\ncontext managers. They have the same fields asFor\nandWith\n, respectively. Only valid in the body of anAsyncFunctionDef\n.\nNote\nWhen a string is parsed by ast.parse()\n, operator nodes (subclasses\nof ast.operator\n, ast.unaryop\n, ast.cmpop\n,\nast.boolop\nand ast.expr_context\n) on the returned tree\nwill be singletons. 
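The singleton behavior can be observed directly. This is a small sketch that parses two unrelated expressions and compares their operator nodes by identity:

```python
import ast

# Parse two unrelated expressions; the Add operator node in each tree
# is the same cached singleton object, as the note above describes.
op_a = ast.parse("1 + 2").body[0].value.op
op_b = ast.parse("x + y").body[0].value.op

assert op_a is op_b                # identical object, not merely equal
assert isinstance(op_a, ast.Add)
```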
Changes to one will be reflected in all other\noccurrences of the same value (for example, ast.Add\n).\nast\nhelpers\u00b6\nApart from the node classes, the ast\nmodule defines these utility functions\nand classes for traversing abstract syntax trees:\n- ast.parse(source, filename='', mode='exec', *, type_comments=False, feature_version=None, optimize=-1)\u00b6\nParse the source into an AST node. Equivalent to\ncompile(source, filename, mode, flags=FLAGS_VALUE, optimize=optimize)\n, whereFLAGS_VALUE\nisast.PyCF_ONLY_AST\nifoptimize <= 0\nandast.PyCF_OPTIMIZED_AST\notherwise.If\ntype_comments=True\nis given, the parser is modified to check and return type comments as specified by PEP 484 and PEP 526. This is equivalent to addingast.PyCF_TYPE_COMMENTS\nto the flags passed tocompile()\n. This will report syntax errors for misplaced type comments. Without this flag, type comments will be ignored, and thetype_comment\nfield on selected AST nodes will always beNone\n. In addition, the locations of# type: ignore\ncomments will be returned as thetype_ignores\nattribute ofModule\n(otherwise it is always an empty list).In addition, if\nmode\nis'func_type'\n, the input syntax is modified to correspond to PEP 484 \u201csignature type comments\u201d, e.g.(str, int) -> List[str]\n.Setting\nfeature_version\nto a tuple(major, minor)\nwill result in a \u201cbest-effort\u201d attempt to parse using that Python version\u2019s grammar. For example, settingfeature_version=(3, 9)\nwill attempt to disallow parsing ofmatch\nstatements. Currentlymajor\nmust equal to3\n. The lowest supported version is(3, 7)\n(and this may increase in future Python versions); the highest issys.version_info[0:2]\n. 
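As a small illustration of the type_comments flag described above (assuming CPython 3.8 or later):

```python
import ast

src = "x = 1  # type: int"

# Without the flag, type comments are ignored entirely.
plain = ast.parse(src)
assert plain.body[0].type_comment is None

# With type_comments=True the comment is attached to the Assign node.
typed = ast.parse(src, type_comments=True)
assert typed.body[0].type_comment == "int"
```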
\u201cBest-effort\u201d attempt means there is no guarantee that the parse (or success of the parse) is the same as when run on the Python version corresponding tofeature_version\n.If source contains a null character (\n\\0\n),ValueError\nis raised.Warning\nNote that successfully parsing source code into an AST object doesn\u2019t guarantee that the source code provided is valid Python code that can be executed as the compilation step can raise further\nSyntaxError\nexceptions. For instance, the sourcereturn 42\ngenerates a valid AST node for a return statement, but it cannot be compiled alone (it needs to be inside a function node).In particular,\nast.parse()\nwon\u2019t do any scoping checks, which the compilation step does.Warning\nIt is possible to crash the Python interpreter with a sufficiently large/complex string due to stack depth limitations in Python\u2019s AST compiler.\nChanged in version 3.8: Added\ntype_comments\n,mode='func_type'\nandfeature_version\n.Changed in version 3.13: The minimum supported version for\nfeature_version\nis now(3, 7)\n. Theoptimize\nargument was added.\n- ast.unparse(ast_obj)\u00b6\nUnparse an\nast.AST\nobject and generate a string with code that would produce an equivalentast.AST\nobject if parsed back withast.parse()\n.Warning\nThe produced code string will not necessarily be equal to the original code that generated the\nast.AST\nobject (without any compiler optimizations, such as constant tuples/frozensets).Warning\nTrying to unparse a highly complex expression would result with\nRecursionError\n.Added in version 3.9.\n- ast.literal_eval(node_or_string)\u00b6\nEvaluate an expression node or a string containing only a Python literal or container display. 
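For example, literal_eval can rebuild simple data from its repr without executing any code (the dictionary contents here are purely illustrative):

```python
import ast

text = "{'name': 'hub', 'ports': [80, 443], 'active': True}"
config = ast.literal_eval(text)
assert config["ports"] == [80, 443]

# Anything beyond plain literals and container displays is rejected
# rather than evaluated.
rejected = False
try:
    ast.literal_eval("__import__('os')")
except ValueError:
    rejected = True
assert rejected
```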
The string or node provided may only consist of the following Python literal structures: strings, bytes, numbers, tuples, lists, dicts, sets, booleans,\nNone\nandEllipsis\n.This can be used for evaluating strings containing Python values without the need to parse the values oneself. It is not capable of evaluating arbitrarily complex expressions, for example involving operators or indexing.\nThis function had been documented as \u201csafe\u201d in the past without defining what that meant. That was misleading. This is specifically designed not to execute Python code, unlike the more general\neval()\n. There is no namespace, no name lookups, or ability to call out. But it is not free from attack: A relatively small input can lead to memory exhaustion or to C stack exhaustion, crashing the process. There is also the possibility for excessive CPU consumption denial of service on some inputs. Calling it on untrusted data is thus not recommended.Warning\nIt is possible to crash the Python interpreter due to stack depth limitations in Python\u2019s AST compiler.\nIt can raise\nValueError\n,TypeError\n,SyntaxError\n,MemoryError\nandRecursionError\ndepending on the malformed input.Changed in version 3.2: Now allows bytes and set literals.\nChanged in version 3.9: Now supports creating empty sets with\n'set()'\n.Changed in version 3.10: For string inputs, leading spaces and tabs are now stripped.\n- ast.get_docstring(node, clean=True)\u00b6\nReturn the docstring of the given node (which must be a\nFunctionDef\n,AsyncFunctionDef\n,ClassDef\n, orModule\nnode), orNone\nif it has no docstring. If clean is true, clean up the docstring\u2019s indentation withinspect.cleandoc()\n.Changed in version 3.5:\nAsyncFunctionDef\nis now supported.\n- ast.get_source_segment(source, node, *, padded=False)\u00b6\nGet source code segment of the source that generated node. 
If some location information (\nlineno\n,end_lineno\n,col_offset\n, orend_col_offset\n) is missing, returnNone\n.If padded is\nTrue\n, the first line of a multi-line statement will be padded with spaces to match its original position.Added in version 3.8.\n- ast.fix_missing_locations(node)\u00b6\nWhen you compile a node tree with\ncompile()\n, the compiler expectslineno\nandcol_offset\nattributes for every node that supports them. This is rather tedious to fill in for generated nodes, so this helper adds these attributes recursively where not already set, by setting them to the values of the parent node. It works recursively starting at node.\n- ast.increment_lineno(node, n=1)\u00b6\nIncrement the line number and end line number of each node in the tree starting at node by n. This is useful to \u201cmove code\u201d to a different location in a file.\n- ast.copy_location(new_node, old_node)\u00b6\nCopy source location (\nlineno\n,col_offset\n,end_lineno\n, andend_col_offset\n) from old_node to new_node if possible, and return new_node.\n- ast.iter_fields(node)\u00b6\nYield a tuple of\n(fieldname, value)\nfor each field innode._fields\nthat is present on node.\n- ast.iter_child_nodes(node)\u00b6\nYield all direct child nodes of node, that is, all fields that are nodes and all items of fields that are lists of nodes.\n- ast.walk(node)\u00b6\nRecursively yield all descendant nodes in the tree starting at node (including node itself), in no specified order. This is useful if you only want to modify nodes in place and don\u2019t care about the context.\n- class ast.NodeVisitor\u00b6\nA node visitor base class that walks the abstract syntax tree and calls a visitor function for every node found. This function may return a value which is forwarded by the\nvisit()\nmethod.This class is meant to be subclassed, with the subclass adding visitor methods.\n- visit(node)\u00b6\nVisit a node. 
The default implementation calls the method called\nself.visit_classname\nwhere classname is the name of the node class, orgeneric_visit()\nif that method doesn\u2019t exist.\n- generic_visit(node)\u00b6\nThis visitor calls\nvisit()\non all children of the node.Note that child nodes of nodes that have a custom visitor method won\u2019t be visited unless the visitor calls\ngeneric_visit()\nor visits them itself.\n- visit_Constant(node)\u00b6\nHandles all constant nodes.\nDon\u2019t use the\nNodeVisitor\nif you want to apply changes to nodes during traversal. For this a special visitor exists (NodeTransformer\n) that allows modifications.Deprecated since version 3.8, removed in version 3.14: Methods\nvisit_Num()\n,visit_Str()\n,visit_Bytes()\n,visit_NameConstant()\nandvisit_Ellipsis()\nwill not be called in Python 3.14+. Add thevisit_Constant()\nmethod instead to handle all constant nodes.\n- class ast.NodeTransformer\u00b6\nA\nNodeVisitor\nsubclass that walks the abstract syntax tree and allows modification of nodes.The\nNodeTransformer\nwill walk the AST and use the return value of the visitor methods to replace or remove the old node. If the return value of the visitor method isNone\n, the node will be removed from its location, otherwise it is replaced with the return value. 
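A minimal NodeVisitor subclass along the lines described above might look like this (a sketch; NameCollector is an illustrative name, not part of the module):

```python
import ast

class NameCollector(ast.NodeVisitor):
    """Collect every identifier read or written in a tree."""

    def __init__(self):
        self.names = []

    def visit_Name(self, node):
        self.names.append(node.id)
        # Keep walking into child nodes; Name has none, but calling
        # generic_visit() is the habit that keeps traversal complete.
        self.generic_visit(node)

collector = NameCollector()
collector.visit(ast.parse("total = price * quantity"))
assert sorted(collector.names) == ["price", "quantity", "total"]
```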
The return value may be the original node in which case no replacement takes place.Here is an example transformer that rewrites all occurrences of name lookups (\nfoo\n) todata['foo']\n:class RewriteName(NodeTransformer): def visit_Name(self, node): return Subscript( value=Name(id='data', ctx=Load()), slice=Constant(value=node.id), ctx=node.ctx )\nKeep in mind that if the node you\u2019re operating on has child nodes you must either transform the child nodes yourself or call the\ngeneric_visit()\nmethod for the node first.For nodes that were part of a collection of statements (that applies to all statement nodes), the visitor may also return a list of nodes rather than just a single node.\nIf\nNodeTransformer\nintroduces new nodes (that weren\u2019t part of original tree) without giving them location information (such aslineno\n),fix_missing_locations()\nshould be called with the new sub-tree to recalculate the location information:tree = ast.parse('foo', mode='eval') new_tree = fix_missing_locations(RewriteName().visit(tree))\nUsually you use the transformer like this:\nnode = YourTransformer().visit(node)\n- ast.dump(node, annotate_fields=True, include_attributes=False, *, indent=None, show_empty=False)\u00b6\nReturn a formatted dump of the tree in node. This is mainly useful for debugging purposes. If annotate_fields is true (by default), the returned string will show the names and the values for fields. If annotate_fields is false, the result string will be more compact by omitting unambiguous field names. Attributes such as line numbers and column offsets are not dumped by default. If this is wanted, include_attributes can be set to true.\nIf indent is a non-negative integer or string, then the tree will be pretty-printed with that indent level. An indent level of 0, negative, or\n\"\"\nwill only insert newlines.None\n(the default) selects the single line representation. Using a positive integer indent indents that many spaces per level. 
If indent is a string (such as\"\\t\"\n), that string is used to indent each level.If show_empty is false (the default), optional empty lists will be omitted from the output. Optional\nNone\nvalues are always omitted.Changed in version 3.9: Added the indent option.\nChanged in version 3.13: Added the show_empty option.\n>>> print(ast.dump(ast.parse(\"\"\"\\ ... async def f(): ... await other_func() ... \"\"\"), indent=4, show_empty=True)) Module( body=[ AsyncFunctionDef( name='f', args=arguments( posonlyargs=[], args=[], kwonlyargs=[], kw_defaults=[], defaults=[]), body=[ Expr( value=Await( value=Call( func=Name(id='other_func', ctx=Load()), args=[], keywords=[])))], decorator_list=[], type_params=[])], type_ignores=[])\nCompiler flags\u00b6\nThe following flags may be passed to compile()\nin order to change\neffects on the compilation of a program:\n- ast.PyCF_ALLOW_TOP_LEVEL_AWAIT\u00b6\nEnables support for top-level\nawait\n,async for\n,async with\nand async comprehensions.Added in version 3.8.\n- ast.PyCF_ONLY_AST\u00b6\nGenerates and returns an abstract syntax tree instead of returning a compiled code object.\n- ast.PyCF_OPTIMIZED_AST\u00b6\nThe returned AST is optimized according to the optimize argument in\ncompile()\norast.parse()\n.Added in version 3.13.\n- ast.PyCF_TYPE_COMMENTS\u00b6\nEnables support for PEP 484 and PEP 526 style type comments (\n# type: \n,# type: ignore \n).Added in version 3.8.\n- ast.compare(a, b, /, *, compare_attributes=False)\u00b6\nRecursively compares two ASTs.\ncompare_attributes affects whether AST attributes are considered in the comparison. If compare_attributes is\nFalse\n(default), then attributes are ignored. Otherwise they must all be equal. This option is useful to check whether the ASTs are structurally equal but differ in whitespace or similar details. 
Attributes include line numbers and column offsets.Added in version 3.14.\nCommand-line usage\u00b6\nAdded in version 3.9.\nThe ast\nmodule can be executed as a script from the command line.\nIt is as simple as:\npython -m ast [-m ] [-a] [infile]\nThe following options are accepted:\n- -h, --help\u00b6\nShow the help message and exit.\n- -m \u00b6\n- --mode \u00b6\nSpecify what kind of code must be compiled, like the mode argument in\nparse()\n.\n- --no-type-comments\u00b6\nDon\u2019t parse type comments.\n- -a, --include-attributes\u00b6\nInclude attributes such as line numbers and column offsets.\n- --feature-version \u00b6\nPython version in the format 3.x (for example, 3.10). Defaults to the current version of the interpreter.\nAdded in version 3.14.\n- -O \u00b6\n- --optimize \u00b6\nOptimization level for parser. Defaults to no optimization.\nAdded in version 3.14.\n- --show-empty\u00b6\nShow empty lists and fields that are\nNone\n. Defaults to not showing empty objects.Added in version 3.14.\nIf infile\nis specified its contents are parsed to AST and dumped\nto stdout. Otherwise, the content is read from stdin.\nSee also\nGreen Tree Snakes, an external documentation resource, has good details on working with Python ASTs.\nASTTokens annotates Python ASTs with the positions of tokens and text in the source code that generated them. This is helpful for tools that make source code transformations.\nleoAst.py unifies the token-based and parse-tree-based views of python programs by inserting two-way links between tokens and ast nodes.\nLibCST parses code as a Concrete Syntax Tree that looks like an ast tree and keeps all formatting details. It\u2019s useful for building automated refactoring (codemod) applications and linters.\nParso is a Python parser that supports error recovery and round-trip parsing for different Python versions (in multiple Python versions). 
Parso is also able to list multiple syntax errors in your Python file.", "code_snippets": [" ", " ", " ", " ", " ", "\n ", " ", "\n", " ", "\n ", "\n ", " ", " ", " ", "\n", "\n\n ", " ", "\n ", " ", "\n ", " ", "\n ", "\n ", "\n ", "\n", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 16800} +{"url": "https://docs.python.org/3/whatsnew/3.10.html", "title": "What\u2019s New In Python 3.10", "content": "What\u2019s New In Python 3.10\u00b6\n- Editor:\nPablo Galindo Salgado\nThis article explains the new features in Python 3.10, compared to 3.9. Python 3.10 was released on October 4, 2021. For full details, see the changelog.\nSummary \u2013 Release highlights\u00b6\nNew syntax features:\nPEP 634, Structural Pattern Matching: Specification\nPEP 635, Structural Pattern Matching: Motivation and Rationale\nPEP 636, Structural Pattern Matching: Tutorial\nbpo-12782, Parenthesized context managers are now officially allowed.\nNew features in the standard library:\nPEP 618, Add Optional Length-Checking To zip.\nInterpreter improvements:\nPEP 626, Precise line numbers for debugging and other tools.\nNew typing features:\nPEP 604, Allow writing union types as X | Y\nPEP 612, Parameter Specification Variables\nPEP 613, Explicit Type Aliases\nPEP 647, User-Defined Type Guards\nImportant deprecations, removals or restrictions:\nNew Features\u00b6\nParenthesized context managers\u00b6\nUsing enclosing parentheses for continuation across multiple lines in context managers is now supported. This allows formatting a long collection of context managers in multiple lines in a similar way as it was previously possible with import statements. 
For instance, all these examples are now valid:\nwith (CtxManager() as example):\n...\nwith (\nCtxManager1(),\nCtxManager2()\n):\n...\nwith (CtxManager1() as example,\nCtxManager2()):\n...\nwith (CtxManager1(),\nCtxManager2() as example):\n...\nwith (\nCtxManager1() as example1,\nCtxManager2() as example2\n):\n...\nit is also possible to use a trailing comma at the end of the enclosed group:\nwith (\nCtxManager1() as example1,\nCtxManager2() as example2,\nCtxManager3() as example3,\n):\n...\nThis new syntax uses the non LL(1) capacities of the new parser. Check PEP 617 for more details.\n(Contributed by Guido van Rossum, Pablo Galindo and Lysandros Nikolaou in bpo-12782 and bpo-40334.)\nBetter error messages\u00b6\nSyntaxErrors\u00b6\nWhen parsing code that contains unclosed parentheses or brackets the interpreter now includes the location of the unclosed bracket of parentheses instead of displaying SyntaxError: unexpected EOF while parsing or pointing to some incorrect location. For instance, consider the following code (notice the unclosed \u2018{\u2018):\nexpected = {9: 1, 18: 2, 19: 2, 27: 3, 28: 3, 29: 3, 36: 4, 37: 4,\n38: 4, 39: 4, 45: 5, 46: 5, 47: 5, 48: 5, 49: 5, 54: 6,\nsome_other_code = foo()\nPrevious versions of the interpreter reported confusing places as the location of the syntax error:\nFile \"example.py\", line 3\nsome_other_code = foo()\n^\nSyntaxError: invalid syntax\nbut in Python 3.10 a more informative error is emitted:\nFile \"example.py\", line 1\nexpected = {9: 1, 18: 2, 19: 2, 27: 3, 28: 3, 29: 3, 36: 4, 37: 4,\n^\nSyntaxError: '{' was never closed\nIn a similar way, errors involving unclosed string literals (single and triple quoted) now point to the start of the string instead of reporting EOF/EOL.\nThese improvements are inspired by previous work in the PyPy interpreter.\n(Contributed by Pablo Galindo in bpo-42864 and Batuhan Taskaya in bpo-40176.)\nSyntaxError\nexceptions raised by the interpreter will now highlight the\nfull error 
range of the expression that constitutes the syntax error itself,\ninstead of just where the problem is detected. In this way, instead of displaying\n(before Python 3.10):\n>>> foo(x, z for z in range(10), t, w)\nFile \"\", line 1\nfoo(x, z for z in range(10), t, w)\n^\nSyntaxError: Generator expression must be parenthesized\nnow Python 3.10 will display the exception as:\n>>> foo(x, z for z in range(10), t, w)\nFile \"\", line 1\nfoo(x, z for z in range(10), t, w)\n^^^^^^^^^^^^^^^^^^^^\nSyntaxError: Generator expression must be parenthesized\nThis improvement was contributed by Pablo Galindo in bpo-43914.\nA considerable amount of new specialized messages for SyntaxError\nexceptions\nhave been incorporated. Some of the most notable ones are as follows:\nMissing\n:\nbefore blocks:>>> if rocket.position > event_horizon File \"\", line 1 if rocket.position > event_horizon ^ SyntaxError: expected ':'\n(Contributed by Pablo Galindo in bpo-42997.)\nUnparenthesised tuples in comprehensions targets:\n>>> {x,y for x,y in zip('abcd', '1234')} File \"\", line 1 {x,y for x,y in zip('abcd', '1234')} ^ SyntaxError: did you forget parentheses around the comprehension target?\n(Contributed by Pablo Galindo in bpo-43017.)\nMissing commas in collection literals and between expressions:\n>>> items = { ... x: 1, ... y: 2 ... z: 3, File \"\", line 3 y: 2 ^ SyntaxError: invalid syntax. Perhaps you forgot a comma?\n(Contributed by Pablo Galindo in bpo-43822.)\nMultiple Exception types without parentheses:\n>>> try: ... build_dyson_sphere() ... except NotEnoughScienceError, NotEnoughResourcesError: File \"\", line 3 except NotEnoughScienceError, NotEnoughResourcesError: ^ SyntaxError: multiple exception types must be parenthesized\n(Contributed by Pablo Galindo in bpo-43149.)\nMissing\n:\nand values in dictionary literals:>>> values = { ... x: 1, ... y: 2, ... z: ... 
} File \"\", line 4 z: ^ SyntaxError: expression expected after dictionary key and ':' >>> values = {x:1, y:2, z w:3} File \"\", line 1 values = {x:1, y:2, z w:3} ^ SyntaxError: ':' expected after dictionary key\n(Contributed by Pablo Galindo in bpo-43823.)\ntry\nblocks withoutexcept\norfinally\nblocks:>>> try: ... x = 2 ... something = 3 File \"\", line 3 something = 3 ^^^^^^^^^ SyntaxError: expected 'except' or 'finally' block\n(Contributed by Pablo Galindo in bpo-44305.)\nUsage of\n=\ninstead of==\nin comparisons:>>> if rocket.position = event_horizon: File \"\", line 1 if rocket.position = event_horizon: ^ SyntaxError: cannot assign to attribute here. Maybe you meant '==' instead of '='?\n(Contributed by Pablo Galindo in bpo-43797.)\nUsage of\n*\nin f-strings:>>> f\"Black holes {*all_black_holes} and revelations\" File \"\", line 1 (*all_black_holes) ^ SyntaxError: f-string: cannot use starred expression here\n(Contributed by Pablo Galindo in bpo-41064.)\nIndentationErrors\u00b6\nMany IndentationError\nexceptions now have more context regarding what kind of block\nwas expecting an indentation, including the location of the statement:\n>>> def foo():\n... if lel:\n... x = 2\nFile \"\", line 3\nx = 2\n^\nIndentationError: expected an indented block after 'if' statement in line 2\nAttributeErrors\u00b6\nWhen printing AttributeError\n, PyErr_Display()\nwill offer\nsuggestions of similar attribute names in the object that the exception was\nraised from:\n>>> collections.namedtoplo\nTraceback (most recent call last):\nFile \"\", line 1, in \nAttributeError: module 'collections' has no attribute 'namedtoplo'. Did you mean: namedtuple?\n(Contributed by Pablo Galindo in bpo-38530.)\nWarning\nNotice this won\u2019t work if PyErr_Display()\nis not called to display the error\nwhich can happen if some other custom error display function is used. 
This is a common scenario in some REPLs like IPython.\nNameErrors\u00b6\nWhen printing NameError raised by the interpreter, PyErr_Display() will offer suggestions of similar variable names in the function that the exception was raised from:\n>>> schwarzschild_black_hole = None\n>>> schwarschild_black_hole\nTraceback (most recent call last):\n  File \"<stdin>\", line 1, in <module>\nNameError: name 'schwarschild_black_hole' is not defined. Did you mean: schwarzschild_black_hole?\n(Contributed by Pablo Galindo in bpo-38530.)\nWarning\nNotice this won't work if PyErr_Display() is not called to display the error, which can happen if some other custom error display function is used. This is a common scenario in some REPLs like IPython.\nPEP 626: Precise line numbers for debugging and other tools\u00b6\nPEP 626 brings more precise and reliable line numbers for debugging, profiling and coverage tools. Tracing events, with the correct line number, are generated for all lines of code executed and only for lines of code that are executed.\nThe f_lineno attribute of frame objects will always contain the expected line number.\nThe co_lnotab attribute of code objects is deprecated and will be removed in 3.12. Code that needs to convert from offset to line number should use the new co_lines() method instead.\nPEP 634: Structural Pattern Matching\u00b6\nStructural pattern matching has been added in the form of a match statement and case statements of patterns with associated actions. Patterns consist of sequences, mappings, primitive data types as well as class instances. 
Pattern matching enables programs to extract information from complex data types, branch on the structure of data, and apply specific actions based on different forms of data.\nSyntax and operations\u00b6\nThe generic syntax of pattern matching is:\nmatch subject:\n    case <pattern_1>:\n        <action_1>\n    case <pattern_2>:\n        <action_2>\n    case <pattern_3>:\n        <action_3>\n    case _:\n        <action_wildcard>\nA match statement takes an expression and compares its value to successive patterns given as one or more case blocks. Specifically, pattern matching operates by:\nusing data with type and shape (the subject)\nevaluating the subject in the match statement\ncomparing the subject with each pattern in a case statement from top to bottom until a match is confirmed\nexecuting the action associated with the pattern of the confirmed match\nIf an exact match is not confirmed, the last case, a wildcard _, if provided, will be used as the matching case. If an exact match is not confirmed and a wildcard case does not exist, the entire match block is a no-op.\nDeclarative approach\u00b6\nReaders may be aware of pattern matching through the simple example of matching a subject (data object) to a literal (pattern) with the switch statement found in C, Java or JavaScript (and many other languages). Often the switch statement is used for comparison of an object/expression with case statements containing literals.\nMore powerful examples of pattern matching can be found in languages such as Scala and Elixir. With structural pattern matching, the approach is \"declarative\" and explicitly states the conditions (the patterns) for data to match.\nWhile an \"imperative\" series of instructions using nested \"if\" statements could be used to accomplish something similar to structural pattern matching, it is less clear than the \"declarative\" approach. Instead the \"declarative\" approach states the conditions to meet for a match and is more readable through its explicit patterns. 
While structural pattern matching can be used in its simplest form comparing a variable to a literal in a case statement, its true value for Python lies in its handling of the subject's type and shape.\nSimple pattern: match to a literal\u00b6\nLet's look at this example as pattern matching in its simplest form: a value, the subject, being matched to several literals, the patterns. In the example below, status is the subject of the match statement. The patterns are each of the case statements, where literals represent request status codes. The associated action to the case is executed after a match:\ndef http_error(status):\n    match status:\n        case 400:\n            return \"Bad request\"\n        case 404:\n            return \"Not found\"\n        case 418:\n            return \"I'm a teapot\"\n        case _:\n            return \"Something's wrong with the internet\"\nIf the above function is passed a status of 418, \"I'm a teapot\" is returned. If the above function is passed a status of 500, the case statement with _ will match as a wildcard, and \"Something's wrong with the internet\" is returned.\nNote the last block: the variable name, _, acts as a wildcard and ensures the subject will always match. The use of _ is optional.\nYou can combine several literals in a single pattern using | (\"or\"):\ncase 401 | 403 | 404:\n    return \"Not allowed\"\nBehavior without the wildcard\u00b6\nIf we modify the above example by removing the last case block, the example becomes:\ndef http_error(status):\n    match status:\n        case 400:\n            return \"Bad request\"\n        case 404:\n            return \"Not found\"\n        case 418:\n            return \"I'm a teapot\"\nWithout the use of _ in a case statement, a match may not exist. If no match exists, the behavior is a no-op. For example, if a status of 500 is passed, a no-op occurs.\nPatterns with a literal and variable\u00b6\nPatterns can look like unpacking assignments, and a pattern may be used to bind variables. 
In this example, a data point can be unpacked to its x-coordinate and y-coordinate:\n# point is an (x, y) tuple\nmatch point:\n    case (0, 0):\n        print(\"Origin\")\n    case (0, y):\n        print(f\"Y={y}\")\n    case (x, 0):\n        print(f\"X={x}\")\n    case (x, y):\n        print(f\"X={x}, Y={y}\")\n    case _:\n        raise ValueError(\"Not a point\")\nThe first pattern has two literals, (0, 0), and may be thought of as an extension of the literal pattern shown above. The next two patterns combine a literal and a variable, and the variable binds a value from the subject (point). The fourth pattern captures two values, which makes it conceptually similar to the unpacking assignment (x, y) = point.\nPatterns and classes\u00b6\nIf you are using classes to structure your data, you can use as a pattern the class name followed by an argument list resembling a constructor. This pattern has the ability to capture instance attributes into variables:\nclass Point:\n    def __init__(self, x, y):\n        self.x = x\n        self.y = y\ndef location(point):\n    match point:\n        case Point(x=0, y=0):\n            print(\"Origin is the point's location.\")\n        case Point(x=0, y=y):\n            print(f\"Y={y} and the point is on the y-axis.\")\n        case Point(x=x, y=0):\n            print(f\"X={x} and the point is on the x-axis.\")\n        case Point():\n            print(\"The point is located somewhere else on the plane.\")\n        case _:\n            print(\"Not a point\")\nPatterns with positional parameters\u00b6\nYou can use positional parameters with some builtin classes that provide an ordering for their attributes (e.g. dataclasses). You can also define a specific position for attributes in patterns by setting the __match_args__ special attribute in your classes. If it's set to (\"x\", \"y\"), the following patterns are all equivalent (and all bind the y attribute to the var variable):\nPoint(1, var)\nPoint(1, y=var)\nPoint(x=1, y=var)\nPoint(y=var, x=1)\nNested patterns\u00b6\nPatterns can be arbitrarily nested. 
For example, if our data is a short list of points, it could be matched like this:\nmatch points:\n    case []:\n        print(\"No points in the list.\")\n    case [Point(0, 0)]:\n        print(\"The origin is the only point in the list.\")\n    case [Point(x, y)]:\n        print(f\"A single point {x}, {y} is in the list.\")\n    case [Point(0, y1), Point(0, y2)]:\n        print(f\"Two points on the Y axis at {y1}, {y2} are in the list.\")\n    case _:\n        print(\"Something else is found in the list.\")\nComplex patterns and the wildcard\u00b6\nTo this point, the examples have used _ alone in the last case statement. A wildcard can be used in more complex patterns, such as ('error', code, _). For example:\nmatch test_variable:\n    case ('warning', code, 40):\n        print(\"A warning has been received.\")\n    case ('error', code, _):\n        print(f\"An error {code} occurred.\")\nIn the above case, test_variable will match for ('error', code, 100) and ('error', code, 800).\nGuard\u00b6\nWe can add an if clause to a pattern, known as a \"guard\". If the guard is false, match goes on to try the next case block. Note that value capture happens before the guard is evaluated:\nmatch point:\n    case Point(x, y) if x == y:\n        print(f\"The point is located on the diagonal Y=X at {x}.\")\n    case Point(x, y):\n        print(\"Point is not on the diagonal.\")\nOther Key Features\u00b6\nSeveral other key features:\nLike unpacking assignments, tuple and list patterns have exactly the same meaning and actually match arbitrary sequences. Technically, the subject must be a sequence. Therefore, an important exception is that patterns don't match iterators. Also, to prevent a common mistake, sequence patterns don't match strings.\nSequence patterns support wildcards: [x, y, *rest] and (x, y, *rest) work similar to wildcards in unpacking assignments. 
The name after * may also be _, so (x, y, *_) matches a sequence of at least two items without binding the remaining items.\nMapping patterns: {\"bandwidth\": b, \"latency\": l} captures the \"bandwidth\" and \"latency\" values from a dict. Unlike sequence patterns, extra keys are ignored. A wildcard **rest is also supported. (But **_ would be redundant, so is not allowed.)\nSubpatterns may be captured using the as keyword:\ncase (Point(x1, y1), Point(x2, y2) as p2): ...\nThis binds x1, y1, x2, y2 like you would expect without the as clause, and p2 to the entire second item of the subject.\nMost literals are compared by equality. However, the singletons True, False and None are compared by identity.\nNamed constants may be used in patterns. These named constants must be dotted names to prevent the constant from being interpreted as a capture variable:\nfrom enum import Enum\nclass Color(Enum):\n    RED = 0\n    GREEN = 1\n    BLUE = 2\ncolor = Color.GREEN\nmatch color:\n    case Color.RED:\n        print(\"I see red!\")\n    case Color.GREEN:\n        print(\"Grass is green\")\n    case Color.BLUE:\n        print(\"I'm feeling the blues :(\")\nFor the full specification see PEP 634. Motivation and rationale are in PEP 635, and a longer tutorial is in PEP 636.\nOptional EncodingWarning and encoding=\"locale\" option\u00b6\nThe default encoding of TextIOWrapper and open() is platform and locale dependent. Since UTF-8 is used on most Unix platforms, omitting the encoding option when opening UTF-8 files (e.g. JSON, YAML, TOML, Markdown) is a very common bug. 
For example:\n# BUG: \"rb\" mode or encoding=\"utf-8\" should be used.\nwith open(\"data.json\") as f:\n    data = json.load(f)\nTo find this type of bug, an optional EncodingWarning is added. It is emitted when sys.flags.warn_default_encoding is true and a locale-specific default encoding is used.\nThe -X warn_default_encoding option and PYTHONWARNDEFAULTENCODING are added to enable the warning.\nSee Text Encoding for more information.\nOther Language Changes\u00b6\nThe int type has a new method int.bit_count(), returning the number of ones in the binary expansion of a given integer, also known as the population count. (Contributed by Niklas Fiekas in bpo-29882.)\nThe views returned by dict.keys(), dict.values() and dict.items() now all have a mapping attribute that gives a types.MappingProxyType object wrapping the original dictionary. (Contributed by Dennis Sweeney in bpo-40890.)\nPEP 618: The zip() function now has an optional strict flag, used to require that all the iterables have an equal length.\nBuiltin and extension functions that take integer arguments no longer accept Decimals, Fractions and other objects that can be converted to integers only with a loss (e.g. that have the __int__() method but do not have the __index__() method). (Contributed by Serhiy Storchaka in bpo-37999.)\nIf object.__ipow__() returns NotImplemented, the operator will correctly fall back to object.__pow__() and object.__rpow__() as expected. (Contributed by Alex Shkop in bpo-38302.)\nAssignment expressions can now be used unparenthesized within set literals and set comprehensions, as well as in sequence indexes (but not slices).\nFunctions have a new __builtins__ attribute which is used to look for builtin symbols when a function is executed, instead of looking into __globals__['__builtins__']. The attribute is initialized from __globals__[\"__builtins__\"] if it exists, else from the current builtins. 
(Contributed by Mark Shannon in bpo-42990.)\nTwo new builtin functions, aiter() and anext(), have been added to provide asynchronous counterparts to iter() and next(), respectively. (Contributed by Joshua Bronson, Daniel Pope, and Justin Wang in bpo-31861.)\nStatic methods (@staticmethod) and class methods (@classmethod) now inherit the method attributes (__module__, __name__, __qualname__, __doc__, __annotations__) and have a new __wrapped__ attribute. Moreover, static methods are now callable as regular functions. (Contributed by Victor Stinner in bpo-43682.)\nAnnotations for complex targets (everything beside simple name targets defined by PEP 526) no longer cause any runtime effects with from __future__ import annotations. (Contributed by Batuhan Taskaya in bpo-42737.)\nClass and module objects now lazy-create empty annotations dicts on demand. The annotations dicts are stored in the object's __dict__ for backwards compatibility. This improves the best practices for working with __annotations__; for more information, please see Annotations Best Practices. (Contributed by Larry Hastings in bpo-43901.)\nAnnotations consisting of yield, yield from, await or named expressions are now forbidden under from __future__ import annotations due to their side effects. (Contributed by Batuhan Taskaya in bpo-42725.)\nUsage of unbound variables, super() and other expressions that might alter the processing of the symbol table are now rendered effectless as annotations under from __future__ import annotations. (Contributed by Batuhan Taskaya in bpo-42725.)\nHashes of NaN values of both float type and decimal.Decimal type now depend on object identity. Formerly, they always hashed to 0 even though NaN values are not equal to one another. This caused potentially quadratic runtime behavior due to excessive hash collisions when creating dictionaries and sets containing multiple NaNs. 
(Contributed by Raymond Hettinger in bpo-43475.)A\nSyntaxError\n(instead of aNameError\n) will be raised when deleting the__debug__\nconstant. (Contributed by Donghee Na in bpo-45000.)SyntaxError\nexceptions now haveend_lineno\nandend_offset\nattributes. They will beNone\nif not determined. (Contributed by Pablo Galindo in bpo-43914.)\nNew Modules\u00b6\nNone.\nImproved Modules\u00b6\nasyncio\u00b6\nAdd missing connect_accepted_socket()\nmethod.\n(Contributed by Alex Gr\u00f6nholm in bpo-41332.)\nargparse\u00b6\nMisleading phrase \u201coptional arguments\u201d was replaced with \u201coptions\u201d in argparse help. Some tests might require adaptation if they rely on exact output match. (Contributed by Raymond Hettinger in bpo-9694.)\narray\u00b6\nThe index()\nmethod of array.array\nnow has\noptional start and stop parameters.\n(Contributed by Anders Lorentsen and Zackery Spytz in bpo-31956.)\nasynchat, asyncore, smtpd\u00b6\nThese modules have been marked as deprecated in their module documentation\nsince Python 3.6. An import-time DeprecationWarning\nhas now been\nadded to all three of these modules.\nbase64\u00b6\nAdd base64.b32hexencode()\nand base64.b32hexdecode()\nto support the\nBase32 Encoding with Extended Hex Alphabet.\nbdb\u00b6\nAdd clearBreakpoints()\nto reset all set breakpoints.\n(Contributed by Irit Katriel in bpo-24160.)\nbisect\u00b6\nAdded the possibility of providing a key function to the APIs in the bisect\nmodule. (Contributed by Raymond Hettinger in bpo-4356.)\ncodecs\u00b6\nAdd a codecs.unregister()\nfunction to unregister a codec search function.\n(Contributed by Hai Shi in bpo-41842.)\ncollections.abc\u00b6\nThe __args__\nof the parameterized generic for\ncollections.abc.Callable\nare now consistent with typing.Callable\n.\ncollections.abc.Callable\ngeneric now flattens type parameters, similar\nto what typing.Callable\ncurrently does. 
This means that\ncollections.abc.Callable[[int, str], str]\nwill have __args__\nof\n(int, str, str)\n; previously this was ([int, str], str)\n. To allow this\nchange, types.GenericAlias\ncan now be subclassed, and a subclass will\nbe returned when subscripting the collections.abc.Callable\ntype. Note\nthat a TypeError\nmay be raised for invalid forms of parameterizing\ncollections.abc.Callable\nwhich may have passed silently in Python 3.9.\n(Contributed by Ken Jin in bpo-42195.)\ncontextlib\u00b6\nAdd a contextlib.aclosing()\ncontext manager to safely close async generators\nand objects representing asynchronously released resources.\n(Contributed by Joongi Kim and John Belmonte in bpo-41229.)\nAdd asynchronous context manager support to contextlib.nullcontext()\n.\n(Contributed by Tom Gringauz in bpo-41543.)\nAdd AsyncContextDecorator\n, for supporting usage of async\ncontext managers as decorators.\ncurses\u00b6\nThe extended color functions added in ncurses 6.1 will be used transparently\nby curses.color_content()\n, curses.init_color()\n,\ncurses.init_pair()\n, and curses.pair_content()\n. A new function,\ncurses.has_extended_color_support()\n, indicates whether extended color\nsupport is provided by the underlying ncurses library.\n(Contributed by Jeffrey Kintscher and Hans Petter Jansson in bpo-36982.)\nThe BUTTON5_*\nconstants are now exposed in the curses\nmodule if\nthey are provided by the underlying curses library.\n(Contributed by Zackery Spytz in bpo-39273.)\ndataclasses\u00b6\n__slots__\u00b6\nAdded slots\nparameter in dataclasses.dataclass()\ndecorator.\n(Contributed by Yurii Karabas in bpo-42269)\nKeyword-only fields\u00b6\ndataclasses now supports fields that are keyword-only in the generated __init__ method. 
There are a number of ways of specifying keyword-only fields.\nYou can say that every field is keyword-only:\nfrom dataclasses import dataclass\n@dataclass(kw_only=True)\nclass Birthday:\n    name: str\n    birthday: datetime.date\nBoth name and birthday are keyword-only parameters to the generated __init__ method.\nYou can specify keyword-only on a per-field basis:\nfrom dataclasses import dataclass, field\n@dataclass\nclass Birthday:\n    name: str\n    birthday: datetime.date = field(kw_only=True)\nHere only birthday is keyword-only. If you set kw_only on individual fields, be aware that there are rules about re-ordering fields due to keyword-only fields needing to follow non-keyword-only fields. See the full dataclasses documentation for details.\nYou can also specify that all fields following a KW_ONLY marker are keyword-only. This will probably be the most common usage:\nfrom dataclasses import dataclass, KW_ONLY\n@dataclass\nclass Point:\n    x: float\n    y: float\n    _: KW_ONLY\n    z: float = 0.0\n    t: float = 0.0\nHere, z and t are keyword-only parameters, while x and y are not.\n(Contributed by Eric V. Smith in bpo-43532.)\ndistutils\u00b6\nThe entire distutils package is deprecated, to be removed in Python 3.12. Its functionality for specifying package builds has already been completely replaced by third-party packages setuptools and packaging, and most other commonly used APIs are available elsewhere in the standard library (such as platform, shutil, subprocess or sysconfig). 
There are no plans to migrate\nany other functionality from distutils\n, and applications that are\nusing other functions should plan to make private copies of the code.\nRefer to PEP 632 for discussion.\nThe bdist_wininst\ncommand deprecated in Python 3.8 has been removed.\nThe bdist_wheel\ncommand is now recommended to distribute binary packages\non Windows.\n(Contributed by Victor Stinner in bpo-42802.)\ndoctest\u00b6\nWhen a module does not define __loader__\n, fall back to __spec__.loader\n.\n(Contributed by Brett Cannon in bpo-42133.)\nencodings\u00b6\nencodings.normalize_encoding()\nnow ignores non-ASCII characters.\n(Contributed by Hai Shi in bpo-39337.)\nenum\u00b6\nEnum\n__repr__()\nnow returns enum_name.member_name\nand\n__str__()\nnow returns member_name\n. Stdlib enums available as\nmodule constants have a repr()\nof module_name.member_name\n.\n(Contributed by Ethan Furman in bpo-40066.)\nAdd enum.StrEnum\nfor enums where all members are strings.\n(Contributed by Ethan Furman in bpo-41816.)\nfileinput\u00b6\nAdd encoding and errors parameters in fileinput.input()\nand\nfileinput.FileInput\n.\n(Contributed by Inada Naoki in bpo-43712.)\nfileinput.hook_compressed()\nnow returns TextIOWrapper\nobject\nwhen mode is \u201cr\u201d and file is compressed, like uncompressed files.\n(Contributed by Inada Naoki in bpo-5758.)\nfaulthandler\u00b6\nThe faulthandler\nmodule now detects if a fatal error occurs during a\ngarbage collector collection.\n(Contributed by Victor Stinner in bpo-44466.)\ngc\u00b6\nAdd audit hooks for gc.get_objects()\n, gc.get_referrers()\nand\ngc.get_referents()\n. (Contributed by Pablo Galindo in bpo-43439.)\nglob\u00b6\nAdd the root_dir and dir_fd parameters in glob()\nand\niglob()\nwhich allow to specify the root directory for searching.\n(Contributed by Serhiy Storchaka in bpo-38144.)\nhashlib\u00b6\nThe hashlib module requires OpenSSL 1.1.1 or newer. 
(Contributed by Christian Heimes in PEP 644 and bpo-43669.)\nThe hashlib module has preliminary support for OpenSSL 3.0.0. (Contributed by Christian Heimes in bpo-38820 and other issues.)\nThe pure-Python fallback of pbkdf2_hmac()\nis deprecated. In\nthe future PBKDF2-HMAC will only be available when Python has been built with\nOpenSSL support.\n(Contributed by Christian Heimes in bpo-43880.)\nhmac\u00b6\nThe hmac module now uses OpenSSL\u2019s HMAC implementation internally. (Contributed by Christian Heimes in bpo-40645.)\nIDLE and idlelib\u00b6\nMake IDLE invoke sys.excepthook()\n(when started without \u2018-n\u2019).\nUser hooks were previously ignored. (Contributed by Ken Hilton in\nbpo-43008.)\nRearrange the settings dialog. Split the General tab into Windows and Shell/Ed tabs. Move help sources, which extend the Help menu, to the Extensions tab. Make space for new options and shorten the dialog. The latter makes the dialog better fit small screens. (Contributed by Terry Jan Reedy in bpo-40468.) Move the indent space setting from the Font tab to the new Windows tab. (Contributed by Mark Roseman and Terry Jan Reedy in bpo-33962.)\nThe changes above were backported to a 3.9 maintenance release.\nAdd a Shell sidebar. Move the primary prompt (\u2018>>>\u2019) to the sidebar. Add secondary prompts (\u2019\u2026\u2019) to the sidebar. Left click and optional drag selects one or more lines of text, as with the editor line number sidebar. Right click after selecting text lines displays a context menu with \u2018copy with prompts\u2019. This zips together prompts from the sidebar with lines from the selected text. This option also appears on the context menu for the text. (Contributed by Tal Einat in bpo-37903.)\nUse spaces instead of tabs to indent interactive code. This makes interactive code entries \u2018look right\u2019. Making this feasible was a major motivation for adding the shell sidebar. 
(Contributed by Terry Jan Reedy in bpo-37892.)\nHighlight the new soft keywords match\n,\ncase\n, and _\nin\npattern-matching statements. However, this highlighting is not perfect\nand will be incorrect in some rare cases, including some _\n-s in\ncase\npatterns. (Contributed by Tal Einat in bpo-44010.)\nNew in 3.10 maintenance releases.\nApply syntax highlighting to .pyi\nfiles. (Contributed by Alex\nWaygood and Terry Jan Reedy in bpo-45447.)\nInclude prompts when saving Shell with inputs and outputs. (Contributed by Terry Jan Reedy in gh-95191.)\nimportlib.metadata\u00b6\nFeature parity with importlib_metadata\n4.6\n(history).\nimportlib.metadata entry points now provide a nicer experience for selecting entry points by group and name through a new importlib.metadata.EntryPoints class. See the Compatibility Note in the docs for more info on the deprecation and usage.\nAdded importlib.metadata.packages_distributions() for resolving top-level Python modules and packages to their importlib.metadata.Distribution.\ninspect\u00b6\nWhen a module does not define __loader__\n, fall back to __spec__.loader\n.\n(Contributed by Brett Cannon in bpo-42133.)\nAdd inspect.get_annotations()\n, which safely computes the annotations\ndefined on an object. It works around the quirks of accessing the annotations\non various types of objects, and makes very few assumptions about the object\nit examines. inspect.get_annotations()\ncan also correctly un-stringize\nstringized annotations. inspect.get_annotations()\nis now considered\nbest practice for accessing the annotations dict defined on any Python object;\nfor more information on best practices for working with annotations, please see\nAnnotations Best Practices.\nRelatedly, inspect.signature()\n,\ninspect.Signature.from_callable()\n, and inspect.Signature.from_function()\nnow call inspect.get_annotations()\nto retrieve annotations. 
This means\ninspect.signature()\nand inspect.Signature.from_callable()\ncan\nalso now un-stringize stringized annotations.\n(Contributed by Larry Hastings in bpo-43817.)\nitertools\u00b6\nAdd itertools.pairwise()\n.\n(Contributed by Raymond Hettinger in bpo-38200.)\nlinecache\u00b6\nWhen a module does not define __loader__\n, fall back to __spec__.loader\n.\n(Contributed by Brett Cannon in bpo-42133.)\nos\u00b6\nAdd os.cpu_count()\nsupport for VxWorks RTOS.\n(Contributed by Peixing Xin in bpo-41440.)\nAdd a new function os.eventfd()\nand related helpers to wrap the\neventfd2\nsyscall on Linux.\n(Contributed by Christian Heimes in bpo-41001.)\nAdd os.splice()\nthat allows to move data between two file\ndescriptors without copying between kernel address space and user\naddress space, where one of the file descriptors must refer to a\npipe. (Contributed by Pablo Galindo in bpo-41625.)\nAdd O_EVTONLY\n, O_FSYNC\n, O_SYMLINK\nand O_NOFOLLOW_ANY\nfor macOS.\n(Contributed by Donghee Na in bpo-43106.)\nos.path\u00b6\nos.path.realpath()\nnow accepts a strict keyword-only argument. When set\nto True\n, OSError\nis raised if a path doesn\u2019t exist or a symlink loop\nis encountered.\n(Contributed by Barney Gale in bpo-43757.)\npathlib\u00b6\nAdd slice support to PurePath.parents\n.\n(Contributed by Joshua Cannon in bpo-35498.)\nAdd negative indexing support to PurePath.parents\n.\n(Contributed by Yaroslav Pankovych in bpo-21041.)\nAdd Path.hardlink_to\nmethod that\nsupersedes link_to()\n. 
The new method has the same argument order as symlink_to().\n(Contributed by Barney Gale in bpo-39950.)\npathlib.Path.stat() and chmod() now accept a follow_symlinks keyword-only argument for consistency with corresponding functions in the os module.\n(Contributed by Barney Gale in bpo-39906.)\nplatform\u00b6\nAdd platform.freedesktop_os_release() to retrieve operating system identification from the freedesktop.org os-release standard file.\n(Contributed by Christian Heimes in bpo-28468.)\npprint\u00b6\npprint.pprint() now accepts a new underscore_numbers keyword argument.\n(Contributed by sblondon in bpo-42914.)\npprint can now pretty-print dataclasses.dataclass instances.\n(Contributed by Lewis Gaul in bpo-43080.)\npy_compile\u00b6\nAdd a --quiet option to the command-line interface of py_compile.\n(Contributed by Gregory Schevchenko in bpo-38731.)\npyclbr\u00b6\nAdd an end_lineno attribute to the Function and Class objects in the tree returned by pyclbr.readmodule() and pyclbr.readmodule_ex(). 
It matches the existing (start) lineno\n.\n(Contributed by Aviral Srivastava in bpo-38307.)\nshelve\u00b6\nThe shelve\nmodule now uses pickle.DEFAULT_PROTOCOL\nby default\ninstead of pickle\nprotocol 3\nwhen creating shelves.\n(Contributed by Zackery Spytz in bpo-34204.)\nstatistics\u00b6\nAdd covariance()\n, Pearson\u2019s\ncorrelation()\n, and simple\nlinear_regression()\nfunctions.\n(Contributed by Tymoteusz Wo\u0142od\u017ako in bpo-38490.)\nsite\u00b6\nWhen a module does not define __loader__\n, fall back to __spec__.loader\n.\n(Contributed by Brett Cannon in bpo-42133.)\nsocket\u00b6\nThe exception socket.timeout\nis now an alias of TimeoutError\n.\n(Contributed by Christian Heimes in bpo-42413.)\nAdd option to create MPTCP sockets with IPPROTO_MPTCP\n(Contributed by Rui Cunha in bpo-43571.)\nAdd IP_RECVTOS\noption to receive the type of service (ToS) or DSCP/ECN fields\n(Contributed by Georg Sauthoff in bpo-44077.)\nssl\u00b6\nThe ssl module requires OpenSSL 1.1.1 or newer. (Contributed by Christian Heimes in PEP 644 and bpo-43669.)\nThe ssl module has preliminary support for OpenSSL 3.0.0 and new option\nOP_IGNORE_UNEXPECTED_EOF\n.\n(Contributed by Christian Heimes in bpo-38820, bpo-43794,\nbpo-43788, bpo-43791, bpo-43799, bpo-43920,\nbpo-43789, and bpo-43811.)\nDeprecated function and use of deprecated constants now result in\na DeprecationWarning\n. ssl.SSLContext.options\nhas\nOP_NO_SSLv2\nand OP_NO_SSLv3\nset by default and\ntherefore cannot warn about setting the flag again. The\ndeprecation section has a list of deprecated\nfeatures.\n(Contributed by Christian Heimes in bpo-43880.)\nThe ssl module now has more secure default settings. Ciphers without forward\nsecrecy or SHA-1 MAC are disabled by default. 
Security level 2 prohibits\nweak RSA, DH, and ECC keys with less than 112 bits of security.\nSSLContext\ndefaults to minimum protocol version TLS 1.2.\nSettings are based on Hynek Schlawack\u2019s research.\n(Contributed by Christian Heimes in bpo-43998.)\nThe deprecated protocols SSL 3.0, TLS 1.0, and TLS 1.1 are no longer officially supported. Python does not block them actively. However OpenSSL build options, distro configurations, vendor patches, and cipher suites may prevent a successful handshake.\nAdd a timeout parameter to the ssl.get_server_certificate()\nfunction.\n(Contributed by Zackery Spytz in bpo-31870.)\nThe ssl module uses heap-types and multi-phase initialization. (Contributed by Christian Heimes in bpo-42333.)\nA new verify flag VERIFY_X509_PARTIAL_CHAIN\nhas been added.\n(Contributed by l0x in bpo-40849.)\nsqlite3\u00b6\nAdd audit events for connect()\n,\nenable_load_extension()\n, and\nload_extension()\n.\n(Contributed by Erlend E. Aasland in bpo-43762.)\nsys\u00b6\nAdd sys.orig_argv\nattribute: the list of the original command line\narguments passed to the Python executable.\n(Contributed by Victor Stinner in bpo-23427.)\nAdd sys.stdlib_module_names\n, containing the list of the standard library\nmodule names.\n(Contributed by Victor Stinner in bpo-42955.)\n_thread\u00b6\n_thread.interrupt_main()\nnow takes an optional signal number to\nsimulate (the default is still signal.SIGINT\n).\n(Contributed by Antoine Pitrou in bpo-43356.)\nthreading\u00b6\nAdd threading.gettrace()\nand threading.getprofile()\nto\nretrieve the functions set by threading.settrace()\nand\nthreading.setprofile()\nrespectively.\n(Contributed by Mario Corchero in bpo-42251.)\nAdd threading.__excepthook__\nto allow retrieving the original value\nof threading.excepthook()\nin case it is set to a broken or a different\nvalue.\n(Contributed by Mario Corchero in bpo-42308.)\ntraceback\u00b6\nThe format_exception()\n,\nformat_exception_only()\n, and\nprint_exception()\nfunctions 
can now take an exception object\nas a positional-only argument.\n(Contributed by Zackery Spytz and Matthias Bussonnier in bpo-26389.)\ntypes\u00b6\nReintroduce the types.EllipsisType\n, types.NoneType\nand types.NotImplementedType\nclasses, providing a new set\nof types readily interpretable by type checkers.\n(Contributed by Bas van Beek in bpo-41810.)\ntyping\u00b6\nFor major changes, see New Features Related to Type Hints.\nThe behavior of typing.Literal\nwas changed to conform with PEP 586\nand to match the behavior of static type checkers specified in the PEP.\nLiteral\nnow de-duplicates parameters.Equality comparisons between\nLiteral\nobjects are now order independent.Literal\ncomparisons now respect types. For example,Literal[0] == Literal[False]\npreviously evaluated toTrue\n. It is nowFalse\n. To support this change, the internally used type cache now supports differentiating types.Literal\nobjects will now raise aTypeError\nexception during equality comparisons if any of their parameters are not hashable. Note that declaringLiteral\nwith unhashable parameters will not throw an error:>>> from typing import Literal >>> Literal[{0}] >>> Literal[{0}] == Literal[{False}] Traceback (most recent call last): File \"\", line 1, in TypeError: unhashable type: 'set'\n(Contributed by Yurii Karabas in bpo-42345.)\nAdd new function typing.is_typeddict()\nto introspect if an annotation\nis a typing.TypedDict\n.\n(Contributed by Patrick Reader in bpo-41792.)\nSubclasses of typing.Protocol\nwhich only have data variables declared\nwill now raise a TypeError\nwhen checked with isinstance\nunless they\nare decorated with runtime_checkable()\n. Previously, these checks\npassed silently. Users should decorate their\nsubclasses with the runtime_checkable()\ndecorator\nif they want runtime protocols.\n(Contributed by Yurii Karabas in bpo-38908.)\nImporting from the typing.io\nand typing.re\nsubmodules will now emit\nDeprecationWarning\n. 
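The typing.Literal changes above can be sketched as follows (a minimal illustration of the Python 3.10+ behavior; the names a and b are illustrative only):

```python
from typing import Literal

# Parameters are now de-duplicated, and equality is order independent.
a = Literal[1, 2, 2]
b = Literal[2, 1]
print(a == b)  # True

# Comparisons now respect types: 0 and False are no longer conflated.
print(Literal[0] == Literal[False])  # False
```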
These submodules have been deprecated since\nPython 3.8 and will be removed in a future version of Python. Anything\nbelonging to those submodules should be imported directly from\ntyping\ninstead.\n(Contributed by Sebastian Rittau in bpo-38291.)\nunittest\u00b6\nAdd new method assertNoLogs()\nto complement the\nexisting assertLogs()\n. (Contributed by Kit Yan Choi\nin bpo-39385.)\nurllib.parse\u00b6\nPython versions earlier than Python 3.10 allowed using both ;\nand &\nas\nquery parameter separators in urllib.parse.parse_qs()\nand\nurllib.parse.parse_qsl()\n. Due to security concerns, and to conform with\nnewer W3C recommendations, this has been changed to allow only a single\nseparator key, with &\nas the default. This change also affects\ncgi.parse()\nand cgi.parse_multipart()\nas they use the affected\nfunctions internally. For more details, please see their respective\ndocumentation.\n(Contributed by Adam Goldschmidt, Senthil Kumaran and Ken Jin in bpo-42967.)\nThe presence of newline or tab characters in parts of a URL allows for some\nforms of attacks. Following the WHATWG specification that updates RFC 3986,\nASCII newline \\n\n, \\r\nand tab \\t\ncharacters are stripped from the\nURL by the parser in urllib.parse\npreventing such attacks. The removal\ncharacters are controlled by a new module level variable\nurllib.parse._UNSAFE_URL_BYTES_TO_REMOVE\n. (See gh-88048)\nxml\u00b6\nAdd a LexicalHandler\nclass to the\nxml.sax.handler\nmodule.\n(Contributed by Jonathan Gossage and Zackery Spytz in bpo-35018.)\nzipimport\u00b6\nAdd methods related to PEP 451: find_spec()\n,\nzipimport.zipimporter.create_module()\n, and\nzipimport.zipimporter.exec_module()\n.\n(Contributed by Brett Cannon in bpo-42131.)\nAdd invalidate_caches()\nmethod.\n(Contributed by Desmond Cheong in bpo-14678.)\nOptimizations\u00b6\nConstructors\nstr()\n,bytes()\nandbytearray()\nare now faster (around 30\u201340% for small objects). 
(Contributed by Serhiy Storchaka in bpo-41334.)The\nrunpy\nmodule now imports fewer modules. Thepython3 -m module-name\ncommand startup time is 1.4x faster in average. On Linux,python3 -I -m module-name\nimports 69 modules on Python 3.9, whereas it only imports 51 modules (-18) on Python 3.10. (Contributed by Victor Stinner in bpo-41006 and bpo-41718.)The\nLOAD_ATTR\ninstruction now uses new \u201cper opcode cache\u201d mechanism. It is about 36% faster now for regular attributes and 44% faster for slots. (Contributed by Pablo Galindo and Yury Selivanov in bpo-42093 and Guido van Rossum in bpo-42927, based on ideas implemented originally in PyPy and MicroPython.)When building Python with\n--enable-optimizations\nnow-fno-semantic-interposition\nis added to both the compile and link line. This speeds builds of the Python interpreter created with--enable-shared\nwithgcc\nby up to 30%. See this article for more details. (Contributed by Victor Stinner and Pablo Galindo in bpo-38980.)Use a new output buffer management code for\nbz2\n/lzma\n/zlib\nmodules, and add.readall()\nfunction to_compression.DecompressReader\nclass. bz2 decompression is now 1.09x ~ 1.17x faster, lzma decompression 1.20x ~ 1.32x faster,GzipFile.read(-1)\n1.11x ~ 1.18x faster. (Contributed by Ma Lin, reviewed by Gregory P. Smith, in bpo-41486)When using stringized annotations, annotations dicts for functions are no longer created when the function is created. Instead, they are stored as a tuple of strings, and the function object lazily converts this into the annotations dict on demand. This optimization cuts the CPU time needed to define an annotated function by half. (Contributed by Yurii Karabas and Inada Naoki in bpo-42202.)\nSubstring search functions such as\nstr1 in str2\nandstr2.find(str1)\nnow sometimes use Crochemore & Perrin\u2019s \u201cTwo-Way\u201d string searching algorithm to avoid quadratic behavior on long strings. 
(Contributed by Dennis Sweeney in bpo-41972)Add micro-optimizations to\n_PyType_Lookup()\nto improve type attribute cache lookup performance in the common case of cache hits. This makes the interpreter 1.04 times faster on average. (Contributed by Dino Viehland in bpo-43452.)The following built-in functions now support the faster PEP 590 vectorcall calling convention:\nmap()\n,filter()\n,reversed()\n,bool()\nandfloat()\n. (Contributed by Donghee Na and Jeroen Demeyer in bpo-43575, bpo-43287, bpo-41922, bpo-41873 and bpo-41870.)BZ2File\nperformance is improved by removing internalRLock\n. This makesBZ2File\nthread unsafe in the face of multiple simultaneous readers or writers, just like its equivalent classes ingzip\nandlzma\nhave always been. (Contributed by Inada Naoki in bpo-43785.)\nDeprecated\u00b6\nCurrently Python accepts numeric literals immediately followed by keywords, for example\n0in x\n,1or x\n,0if 1else 2\n. It allows confusing and ambiguous expressions like[0x1for x in y]\n(which can be interpreted as[0x1 for x in y]\nor[0x1f or x in y]\n). Starting in this release, a deprecation warning is raised if the numeric literal is immediately followed by one of keywordsand\n,else\n,for\n,if\n,in\n,is\nandor\n. In future releases it will be changed to syntax warning, and finally to syntax error. (Contributed by Serhiy Storchaka in bpo-43833.)Starting in this release, there will be a concerted effort to begin cleaning up old import semantics that were kept for Python 2.7 compatibility. 
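To see why the numeric-literal deprecation above matters, here is a small sketch of the ambiguity (the deprecated spelling is compiled from a string so it stays isolated from the surrounding code):

```python
# `[0x1for x in y]` could mean `[0x1 for x in y]` (a comprehension)
# or `[0x1f or x in y]`; the tokenizer picks the latter, which is why
# a literal immediately followed by a keyword now warns.
code = compile("[0x1for x in y]", "<demo>", "eval")
result = eval(code, {"x": 0, "y": []})
print(result)  # [31]: parsed as [0x1f or (x in y)]
```

Adding a space between the literal and the keyword removes the warning and the ambiguity.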
Specifically,\nfind_loader()\n/find_module()\n(superseded byfind_spec()\n),load_module()\n(superseded byexec_module()\n),module_repr()\n(which the import system takes care of for you), the__package__\nattribute (superseded by__spec__.parent\n), the__loader__\nattribute (superseded by__spec__.loader\n), and the__cached__\nattribute (superseded by__spec__.cached\n) will slowly be removed (as well as other classes and methods inimportlib\n).ImportWarning\nand/orDeprecationWarning\nwill be raised as appropriate to help identify code which needs updating during this transition.The entire\ndistutils\nnamespace is deprecated, to be removed in Python 3.12. Refer to the module changes section for more information.Non-integer arguments to\nrandom.randrange()\nare deprecated. TheValueError\nis deprecated in favor of aTypeError\n. (Contributed by Serhiy Storchaka and Raymond Hettinger in bpo-37319.)The various\nload_module()\nmethods ofimportlib\nhave been documented as deprecated since Python 3.6, but will now also trigger aDeprecationWarning\n. Useexec_module()\ninstead. (Contributed by Brett Cannon in bpo-26131.)zipimport.zipimporter.load_module()\nhas been deprecated in preference forexec_module()\n. (Contributed by Brett Cannon in bpo-26131.)The use of\nload_module()\nby the import system now triggers anImportWarning\nasexec_module()\nis preferred. (Contributed by Brett Cannon in bpo-26131.)The use of\nimportlib.abc.MetaPathFinder.find_module()\nandimportlib.abc.PathEntryFinder.find_module()\nby the import system now trigger anImportWarning\nasimportlib.abc.MetaPathFinder.find_spec()\nandimportlib.abc.PathEntryFinder.find_spec()\nare preferred, respectively. You can useimportlib.util.spec_from_loader()\nto help in porting. (Contributed by Brett Cannon in bpo-42134.)The use of\nimportlib.abc.PathEntryFinder.find_loader()\nby the import system now triggers anImportWarning\nasimportlib.abc.PathEntryFinder.find_spec()\nis preferred. 
You can useimportlib.util.spec_from_loader()\nto help in porting. (Contributed by Brett Cannon in bpo-43672.)The various implementations of\nimportlib.abc.MetaPathFinder.find_module()\n(importlib.machinery.BuiltinImporter.find_module()\n,importlib.machinery.FrozenImporter.find_module()\n,importlib.machinery.WindowsRegistryFinder.find_module()\n,importlib.machinery.PathFinder.find_module()\n,importlib.abc.MetaPathFinder.find_module()\n),importlib.abc.PathEntryFinder.find_module()\n(importlib.machinery.FileFinder.find_module()\n), andimportlib.abc.PathEntryFinder.find_loader()\n(importlib.machinery.FileFinder.find_loader()\n) now raiseDeprecationWarning\nand are slated for removal in Python 3.12 (previously they were documented as deprecated in Python 3.4). (Contributed by Brett Cannon in bpo-42135.)importlib.abc.Finder\nis deprecated (including its sole method,find_module()\n). Bothimportlib.abc.MetaPathFinder\nandimportlib.abc.PathEntryFinder\nno longer inherit from the class. Users should inherit from one of these two classes as appropriate instead. (Contributed by Brett Cannon in bpo-42135.)The deprecations of\nimp\n,importlib.find_loader()\n,importlib.util.set_package_wrapper()\n,importlib.util.set_loader_wrapper()\n,importlib.util.module_for_loader()\n,pkgutil.ImpImporter\n, andpkgutil.ImpLoader\nhave all been updated to list Python 3.12 as the slated version of removal (they began raisingDeprecationWarning\nin previous versions of Python). (Contributed by Brett Cannon in bpo-43720.)The import system now uses the\n__spec__\nattribute on modules before falling back onmodule_repr()\nfor a module\u2019s__repr__()\nmethod. Removal of the use ofmodule_repr()\nis scheduled for Python 3.12. (Contributed by Brett Cannon in bpo-42137.)importlib.abc.Loader.module_repr()\n,importlib.machinery.FrozenLoader.module_repr()\n, andimportlib.machinery.BuiltinLoader.module_repr()\nare deprecated and slated for removal in Python 3.12. 
(Contributed by Brett Cannon in bpo-42136.)sqlite3.OptimizedUnicode\nhas been undocumented and obsolete since Python 3.3, when it was made an alias tostr\n. It is now deprecated, scheduled for removal in Python 3.12. (Contributed by Erlend E. Aasland in bpo-42264.)The undocumented built-in function\nsqlite3.enable_shared_cache\nis now deprecated, scheduled for removal in Python 3.12. Its use is strongly discouraged by the SQLite3 documentation. See the SQLite3 docs for more details. If a shared cache must be used, open the database in URI mode using thecache=shared\nquery parameter. (Contributed by Erlend E. Aasland in bpo-24464.)The following\nthreading\nmethods are now deprecated:threading.currentThread\n=>threading.current_thread()\nthreading.activeCount\n=>threading.active_count()\nthreading.Condition.notifyAll\n=>threading.Condition.notify_all()\nthreading.Event.isSet\n=>threading.Event.is_set()\nthreading.Thread.setName\n=>threading.Thread.name\nthreading.thread.getName\n=>threading.Thread.name\nthreading.Thread.isDaemon\n=>threading.Thread.daemon\nthreading.Thread.setDaemon\n=>threading.Thread.daemon\n(Contributed by Jelle Zijlstra in gh-87889.)\npathlib.Path.link_to()\nis deprecated and slated for removal in Python 3.12. Usepathlib.Path.hardlink_to()\ninstead. (Contributed by Barney Gale in bpo-39950.)cgi.log()\nis deprecated and slated for removal in Python 3.12. 
(Contributed by Inada Naoki in bpo-41139.)The following\nssl\nfeatures have been deprecated since Python 3.6, Python 3.7, or OpenSSL 1.1.0 and will be removed in 3.11:OP_NO_SSLv2\n,OP_NO_SSLv3\n,OP_NO_TLSv1\n,OP_NO_TLSv1_1\n,OP_NO_TLSv1_2\n, andOP_NO_TLSv1_3\nare replaced byminimum_version\nandmaximum_version\n.PROTOCOL_SSLv2\n,PROTOCOL_SSLv3\n,PROTOCOL_SSLv23\n,PROTOCOL_TLSv1\n,PROTOCOL_TLSv1_1\n,PROTOCOL_TLSv1_2\n, andPROTOCOL_TLS\nare deprecated in favor ofPROTOCOL_TLS_CLIENT\nandPROTOCOL_TLS_SERVER\nwrap_socket()\nis replaced byssl.SSLContext.wrap_socket()\nmatch_hostname()\nRAND_pseudo_bytes()\n,RAND_egd()\nNPN features like\nssl.SSLSocket.selected_npn_protocol()\nandssl.SSLContext.set_npn_protocols()\nare replaced by ALPN.\nThe threading debug (\nPYTHONTHREADDEBUG\nenvironment variable) is deprecated in Python 3.10 and will be removed in Python 3.12. This feature requires a debug build of Python. (Contributed by Victor Stinner in bpo-44584.)Importing from the\ntyping.io\nandtyping.re\nsubmodules will now emitDeprecationWarning\n. These submodules will be removed in a future version of Python. Anything belonging to these submodules should be imported directly fromtyping\ninstead. (Contributed by Sebastian Rittau in bpo-38291.)\nRemoved\u00b6\nRemoved special methods\n__int__\n,__float__\n,__floordiv__\n,__mod__\n,__divmod__\n,__rfloordiv__\n,__rmod__\nand__rdivmod__\nof thecomplex\nclass. They always raised aTypeError\n. (Contributed by Serhiy Storchaka in bpo-41974.)The\nParserBase.error()\nmethod from the private and undocumented_markupbase\nmodule has been removed.html.parser.HTMLParser\nis the only subclass ofParserBase\nand itserror()\nimplementation was already removed in Python 3.5. (Contributed by Berker Peksag in bpo-31844.)Removed the\nunicodedata.ucnhash_CAPI\nattribute which was an internal PyCapsule object. The related private_PyUnicode_Name_CAPI\nstructure was moved to the internal C API. 
(Contributed by Victor Stinner in bpo-42157.)Removed the\nparser\nmodule, which was deprecated in 3.9 due to the switch to the new PEG parser, as well as all the C source and header files that were only being used by the old parser, includingnode.h\n,parser.h\n,graminit.h\nandgrammar.h\n.Removed the Public C API functions\nPyParser_SimpleParseStringFlags\n,PyParser_SimpleParseStringFlagsFilename\n,PyParser_SimpleParseFileFlags\nandPyNode_Compile\nthat were deprecated in 3.9 due to the switch to the new PEG parser.Removed the\nformatter\nmodule, which was deprecated in Python 3.4. It is somewhat obsolete, little used, and not tested. It was originally scheduled to be removed in Python 3.6, but such removals were delayed until after Python 2.7 EOL. Existing users should copy whatever classes they use into their code. (Contributed by Donghee Na and Terry J. Reedy in bpo-42299.)Removed the\nPyModule_GetWarningsModule()\nfunction that was useless now due to the_warnings\nmodule was converted to a builtin module in 2.6. (Contributed by Hai Shi in bpo-42599.)Remove deprecated aliases to Collections Abstract Base Classes from the\ncollections\nmodule. (Contributed by Victor Stinner in bpo-37324.)The\nloop\nparameter has been removed from most ofasyncio\n\u2018s high-level API following deprecation in Python 3.8. The motivation behind this change is multifold:This simplifies the high-level API.\nThe functions in the high-level API have been implicitly getting the current thread\u2019s running event loop since Python 3.7. There isn\u2019t a need to pass the event loop to the API in most normal use cases.\nEvent loop passing is error-prone especially when dealing with loops running in different threads.\nNote that the low-level API will still accept\nloop\n. 
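A minimal sketch of the post-removal style for the asyncio change above: no loop argument is passed anywhere, and asyncio.run() owns the event loop for you.

```python
import asyncio

async def main():
    # Before 3.10 this might have been asyncio.sleep(1, loop=loop);
    # the high-level API now always uses the current running event loop.
    await asyncio.sleep(0)
    return "done"

result = asyncio.run(main())
print(result)  # done
```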
See Changes in the Python API for examples of how to replace existing code.(Contributed by Yurii Karabas, Andrew Svetlov, Yury Selivanov and Kyle Stanley in bpo-42392.)\nPorting to Python 3.10\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code.\nChanges in the Python syntax\u00b6\nDeprecation warning is now emitted when compiling previously valid syntax if the numeric literal is immediately followed by a keyword (like in\n0in x\n). In future releases it will be changed to syntax warning, and finally to a syntax error. To get rid of the warning and make the code compatible with future releases just add a space between the numeric literal and the following keyword. (Contributed by Serhiy Storchaka in bpo-43833.)\nChanges in the Python API\u00b6\nThe etype parameters of the\nformat_exception()\n,format_exception_only()\n, andprint_exception()\nfunctions in thetraceback\nmodule have been renamed to exc. (Contributed by Zackery Spytz and Matthias Bussonnier in bpo-26389.)atexit\n: At Python exit, if a callback registered withatexit.register()\nfails, its exception is now logged. Previously, only some exceptions were logged, and the last exception was always silently ignored. (Contributed by Victor Stinner in bpo-42639.)collections.abc.Callable\ngeneric now flattens type parameters, similar to whattyping.Callable\ncurrently does. This means thatcollections.abc.Callable[[int, str], str]\nwill have__args__\nof(int, str, str)\n; previously this was([int, str], str)\n. Code which accesses the arguments viatyping.get_args()\nor__args__\nneed to account for this change. Furthermore,TypeError\nmay be raised for invalid forms of parameterizingcollections.abc.Callable\nwhich may have passed silently in Python 3.9. (Contributed by Ken Jin in bpo-42195.)socket.htons()\nandsocket.ntohs()\nnow raiseOverflowError\ninstead ofDeprecationWarning\nif the given parameter will not fit in a 16-bit unsigned integer. 
(Contributed by Erlend E. Aasland in bpo-42393.)The\nloop\nparameter has been removed from most ofasyncio\n\u2018s high-level API following deprecation in Python 3.8.A coroutine that currently looks like this:\nasync def foo(loop): await asyncio.sleep(1, loop=loop)\nShould be replaced with this:\nasync def foo(): await asyncio.sleep(1)\nIf\nfoo()\nwas specifically designed not to run in the current thread\u2019s running event loop (e.g. running in another thread\u2019s event loop), consider usingasyncio.run_coroutine_threadsafe()\ninstead.(Contributed by Yurii Karabas, Andrew Svetlov, Yury Selivanov and Kyle Stanley in bpo-42392.)\nThe\ntypes.FunctionType\nconstructor now inherits the current builtins if the globals dictionary has no\"__builtins__\"\nkey, rather than using{\"None\": None}\nas builtins: same behavior aseval()\nandexec()\nfunctions. Defining a function withdef function(...): ...\nin Python is not affected, globals cannot be overridden with this syntax: it also inherits the current builtins. (Contributed by Victor Stinner in bpo-42990.)\nChanges in the C API\u00b6\nThe C API functions\nPyParser_SimpleParseStringFlags\n,PyParser_SimpleParseStringFlagsFilename\n,PyParser_SimpleParseFileFlags\n,PyNode_Compile\nand the type used by these functions,struct _node\n, were removed due to the switch to the new PEG parser.Source should now be compiled directly to a code object using, for example,\nPy_CompileString()\n. The resulting code object can then be evaluated using, for example,PyEval_EvalCode()\n.Specifically:\nA call to\nPyParser_SimpleParseStringFlags\nfollowed byPyNode_Compile\ncan be replaced by callingPy_CompileString()\n.There is no direct replacement for\nPyParser_SimpleParseFileFlags\n. To compile code from aFILE *\nargument, you will need to read the file in C and pass the resulting buffer toPy_CompileString()\n.To compile a file given a\nchar *\nfilename, explicitly open the file, read it and compile the result. 
One way to do this is using theio\nmodule withPyImport_ImportModule()\n,PyObject_CallMethod()\n,PyBytes_AsString()\nandPy_CompileString()\n, as sketched below. (Declarations and error handling are omitted.)io_module = PyImport_ImportModule(\"io\"); fileobject = PyObject_CallMethod(io_module, \"open\", \"ss\", filename, \"rb\"); source_bytes_object = PyObject_CallMethod(fileobject, \"read\", \"\"); result = PyObject_CallMethod(fileobject, \"close\", \"\"); source_buf = PyBytes_AsString(source_bytes_object); code = Py_CompileString(source_buf, filename, Py_file_input);\nFor\nFrameObject\nobjects, thef_lasti\nmember now represents a wordcode offset instead of a simple offset into the bytecode string. This means that this number needs to be multiplied by 2 to be used with APIs that expect a byte offset instead (likePyCode_Addr2Line()\nfor example). Notice as well that thef_lasti\nmember ofFrameObject\nobjects is not considered stable: please usePyFrame_GetLineNumber()\ninstead.\nCPython bytecode changes\u00b6\nThe\nMAKE_FUNCTION\ninstruction now accepts either a dict or a tuple of strings as the function\u2019s annotations. (Contributed by Yurii Karabas and Inada Naoki in bpo-42202.)\nBuild Changes\u00b6\nPEP 644: Python now requires OpenSSL 1.1.1 or newer. OpenSSL 1.0.2 is no longer supported. (Contributed by Christian Heimes in bpo-43669.)\nThe C99 functions\nsnprintf()\nandvsnprintf()\nare now required to build Python. (Contributed by Victor Stinner in bpo-36020.)sqlite3\nrequires SQLite 3.7.15 or higher. (Contributed by Sergey Fedoseev and Erlend E. Aasland in bpo-40744 and bpo-40810.)The\natexit\nmodule must now always be built as a built-in module. (Contributed by Victor Stinner in bpo-42639.)Add\n--disable-test-modules\noption to theconfigure\nscript: don\u2019t build nor install test modules. (Contributed by Xavier de Gaye, Thomas Petazzoni and Peixing Xin in bpo-27640.)Add\n--with-wheel-pkg-dir=PATH option\nto the./configure\nscript. 
If specified, theensurepip\nmodule looks forsetuptools\nandpip\nwheel packages in this directory: if both are present, these wheel packages are used instead of ensurepip bundled wheel packages.Some Linux distribution packaging policies recommend against bundling dependencies. For example, Fedora installs wheel packages in the\n/usr/share/python-wheels/\ndirectory and don\u2019t install theensurepip._bundled\npackage.(Contributed by Victor Stinner in bpo-42856.)\nAdd a new\nconfigure --without-static-libpython option\nto not build thelibpythonMAJOR.MINOR.a\nstatic library and not install thepython.o\nobject file.(Contributed by Victor Stinner in bpo-43103.)\nThe\nconfigure\nscript now uses thepkg-config\nutility, if available, to detect the location of Tcl/Tk headers and libraries. As before, those locations can be explicitly specified with the--with-tcltk-includes\nand--with-tcltk-libs\nconfiguration options. (Contributed by Manolis Stamatogiannakis in bpo-42603.)Add\n--with-openssl-rpath\noption toconfigure\nscript. The option simplifies building Python with a custom OpenSSL installation, e.g../configure --with-openssl=/path/to/openssl --with-openssl-rpath=auto\n. (Contributed by Christian Heimes in bpo-43466.)\nC API Changes\u00b6\nPEP 652: Maintaining the Stable ABI\u00b6\nThe Stable ABI (Application Binary Interface) for extension modules or embedding Python is now explicitly defined. C API Stability describes C API and ABI stability guarantees along with best practices for using the Stable ABI.\nNew Features\u00b6\nThe result of\nPyNumber_Index()\nnow always has exact typeint\n. Previously, the result could have been an instance of a subclass ofint\n. (Contributed by Serhiy Storchaka in bpo-40792.)Add a new\norig_argv\nmember to thePyConfig\nstructure: the list of the original command line arguments passed to the Python executable. 
(Contributed by Victor Stinner in bpo-23427.)The\nPyDateTime_DATE_GET_TZINFO()\nandPyDateTime_TIME_GET_TZINFO()\nmacros have been added for accessing thetzinfo\nattributes ofdatetime.datetime\nanddatetime.time\nobjects. (Contributed by Zackery Spytz in bpo-30155.)Add a\nPyCodec_Unregister()\nfunction to unregister a codec search function. (Contributed by Hai Shi in bpo-41842.)The\nPyIter_Send()\nfunction was added to allow sending value into iterator without raisingStopIteration\nexception. (Contributed by Vladimir Matveev in bpo-41756.)Add\nPyUnicode_AsUTF8AndSize()\nto the limited C API. (Contributed by Alex Gaynor in bpo-41784.)Add\nPyModule_AddObjectRef()\nfunction: similar toPyModule_AddObject()\nbut don\u2019t steal a reference to the value on success. (Contributed by Victor Stinner in bpo-1635741.)Add\nPy_NewRef()\nandPy_XNewRef()\nfunctions to increment the reference count of an object and return the object. (Contributed by Victor Stinner in bpo-42262.)The\nPyType_FromSpecWithBases()\nandPyType_FromModuleAndSpec()\nfunctions now accept a single class as the bases argument. (Contributed by Serhiy Storchaka in bpo-42423.)The\nPyType_FromModuleAndSpec()\nfunction now accepts NULLtp_doc\nslot. (Contributed by Hai Shi in bpo-41832.)The\nPyType_GetSlot()\nfunction can accept static types. (Contributed by Hai Shi and Petr Viktorin in bpo-41073.)Add a new\nPySet_CheckExact()\nfunction to the C-API to check if an object is an instance ofset\nbut not an instance of a subtype. (Contributed by Pablo Galindo in bpo-43277.)Add\nPyErr_SetInterruptEx()\nwhich allows passing a signal number to simulate. (Contributed by Antoine Pitrou in bpo-43356.)The limited C API is now supported if Python is built in debug mode (if the\nPy_DEBUG\nmacro is defined). 
In the limited C API, thePy_INCREF()\nandPy_DECREF()\nfunctions are now implemented as opaque function calls, rather than accessing directly thePyObject.ob_refcnt\nmember, if Python is built in debug mode and thePy_LIMITED_API\nmacro targets Python 3.10 or newer. It became possible to support the limited C API in debug mode because thePyObject\nstructure is the same in release and debug mode since Python 3.8 (see bpo-36465).The limited C API is still not supported in the\n--with-trace-refs\nspecial build (Py_TRACE_REFS\nmacro). (Contributed by Victor Stinner in bpo-43688.)Add the\nPy_Is(x, y)\nfunction to test if the x object is the y object, the same asx is y\nin Python. Add also thePy_IsNone()\n,Py_IsTrue()\n,Py_IsFalse()\nfunctions to test if an object is, respectively, theNone\nsingleton, theTrue\nsingleton or theFalse\nsingleton. (Contributed by Victor Stinner in bpo-43753.)Add new functions to control the garbage collector from C code:\nPyGC_Enable()\n,PyGC_Disable()\n,PyGC_IsEnabled()\n. These functions allow to activate, deactivate and query the state of the garbage collector from C code without having to import thegc\nmodule.Add a new\nPy_TPFLAGS_DISALLOW_INSTANTIATION\ntype flag to disallow creating type instances. (Contributed by Victor Stinner in bpo-43916.)Add a new\nPy_TPFLAGS_IMMUTABLETYPE\ntype flag for creating immutable type objects: type attributes cannot be set nor deleted. (Contributed by Victor Stinner and Erlend E. Aasland in bpo-43908.)\nPorting to Python 3.10\u00b6\nThe\nPY_SSIZE_T_CLEAN\nmacro must now be defined to usePyArg_ParseTuple()\nandPy_BuildValue()\nformats which use#\n:es#\n,et#\n,s#\n,u#\n,y#\n,z#\n,U#\nandZ#\n. See Parsing arguments and building values and PEP 353. (Contributed by Victor Stinner in bpo-40943.)Since\nPy_REFCNT()\nis changed to the inline static function,Py_REFCNT(obj) = new_refcnt\nmust be replaced withPy_SET_REFCNT(obj, new_refcnt)\n: seePy_SET_REFCNT()\n(available since Python 3.9). 
For backward compatibility, this macro can be used:#if PY_VERSION_HEX < 0x030900A4 # define Py_SET_REFCNT(obj, refcnt) ((Py_REFCNT(obj) = (refcnt)), (void)0) #endif\n(Contributed by Victor Stinner in bpo-39573.)\nCalling\nPyDict_GetItem()\nwithout GIL held had been allowed for historical reason. It is no longer allowed. (Contributed by Victor Stinner in bpo-40839.)PyUnicode_FromUnicode(NULL, size)\nandPyUnicode_FromStringAndSize(NULL, size)\nraiseDeprecationWarning\nnow. UsePyUnicode_New()\nto allocate Unicode object without initial data. (Contributed by Inada Naoki in bpo-36346.)The private\n_PyUnicode_Name_CAPI\nstructure of the PyCapsule APIunicodedata.ucnhash_CAPI\nhas been moved to the internal C API. (Contributed by Victor Stinner in bpo-42157.)Py_GetPath()\n,Py_GetPrefix()\n,Py_GetExecPrefix()\n,Py_GetProgramFullPath()\n,Py_GetPythonHome()\nandPy_GetProgramName()\nfunctions now returnNULL\nif called beforePy_Initialize()\n(before Python is initialized). Use the new Python Initialization Configuration API to get the Python Path Configuration. (Contributed by Victor Stinner in bpo-42260.)PyList_SET_ITEM()\n,PyTuple_SET_ITEM()\nandPyCell_SET()\nmacros can no longer be used as l-value or r-value. For example,x = PyList_SET_ITEM(a, b, c)\nandPyList_SET_ITEM(a, b, c) = x\nnow fail with a compiler error. It prevents bugs likeif (PyList_SET_ITEM (a, b, c) < 0) ...\ntest. (Contributed by Zackery Spytz and Victor Stinner in bpo-30459.)The non-limited API files\nodictobject.h\n,parser_interface.h\n,picklebufobject.h\n,pyarena.h\n,pyctype.h\n,pydebug.h\n,pyfpe.h\n, andpytime.h\nhave been moved to theInclude/cpython\ndirectory. These files must not be included directly, as they are already included inPython.h\n; see Include Files. If they have been included directly, consider includingPython.h\ninstead. (Contributed by Nicholas Sim in bpo-35134.)Use the\nPy_TPFLAGS_IMMUTABLETYPE\ntype flag to create immutable type objects. 
Do not rely onPy_TPFLAGS_HEAPTYPE\nto decide if a type object is mutable or not; check ifPy_TPFLAGS_IMMUTABLETYPE\nis set instead. (Contributed by Victor Stinner and Erlend E. Aasland in bpo-43908.)The undocumented function\nPy_FrozenMain\nhas been removed from the limited API. The function is mainly useful for custom builds of Python. (Contributed by Petr Viktorin in bpo-26241.)\nDeprecated\u00b6\nThe\nPyUnicode_InternImmortal()\nfunction is now deprecated and will be removed in Python 3.12: usePyUnicode_InternInPlace()\ninstead. (Contributed by Victor Stinner in bpo-41692.)\nRemoved\u00b6\nRemoved\nPy_UNICODE_str*\nfunctions manipulatingPy_UNICODE*\nstrings. (Contributed by Inada Naoki in bpo-41123.)Py_UNICODE_strlen\n: usePyUnicode_GetLength()\norPyUnicode_GET_LENGTH\nPy_UNICODE_strcat\n: usePyUnicode_CopyCharacters()\norPyUnicode_FromFormat()\nPy_UNICODE_strcpy\n,Py_UNICODE_strncpy\n: usePyUnicode_CopyCharacters()\norPyUnicode_Substring()\nPy_UNICODE_strcmp\n: usePyUnicode_Compare()\nPy_UNICODE_strncmp\n: usePyUnicode_Tailmatch()\nPy_UNICODE_strchr\n,Py_UNICODE_strrchr\n: usePyUnicode_FindChar()\nRemoved\nPyUnicode_GetMax()\n. Please migrate to new (PEP 393) APIs. (Contributed by Inada Naoki in bpo-41103.)Removed\nPyLong_FromUnicode()\n. Please migrate toPyLong_FromUnicodeObject()\n. (Contributed by Inada Naoki in bpo-41103.)Removed\nPyUnicode_AsUnicodeCopy()\n. Please usePyUnicode_AsUCS4Copy()\norPyUnicode_AsWideCharString()\n(Contributed by Inada Naoki in bpo-41103.)Removed\n_Py_CheckRecursionLimit\nvariable: it has been replaced byceval.recursion_limit\nof thePyInterpreterState\nstructure. (Contributed by Victor Stinner in bpo-41834.)Removed undocumented macros\nPy_ALLOW_RECURSION\nandPy_END_ALLOW_RECURSION\nand therecursion_critical\nfield of thePyInterpreterState\nstructure. (Contributed by Serhiy Storchaka in bpo-41936.)Removed the undocumented\nPyOS_InitInterrupts()\nfunction. 
Initializing Python already implicitly installs signal handlers: seePyConfig.install_signal_handlers\n. (Contributed by Victor Stinner in bpo-41713.)Remove the\nPyAST_Validate()\nfunction. It is no longer possible to build an AST object (mod_ty\ntype) with the public C API. The function was already excluded from the limited C API (PEP 384). (Contributed by Victor Stinner in bpo-43244.)Remove the\nsymtable.h\nheader file and the undocumented functions:PyST_GetScope()\nPySymtable_Build()\nPySymtable_BuildObject()\nPySymtable_Free()\nPy_SymtableString()\nPy_SymtableStringObject()\nThe\nPy_SymtableString()\nfunction was part of the stable ABI by mistake but it could not be used, because thesymtable.h\nheader file was excluded from the limited C API.Use the Python\nsymtable\nmodule instead. (Contributed by Victor Stinner in bpo-43244.)Remove\nPyOS_ReadlineFunctionPointer()\nfrom the limited C API headers and frompython3.dll\n, the library that provides the stable ABI on Windows. Since the function takes aFILE*\nargument, its ABI stability cannot be guaranteed. (Contributed by Petr Viktorin in bpo-43868.)Remove\nast.h\n,asdl.h\n, andPython-ast.h\nheader files. These functions were undocumented and excluded from the limited C API. Most names defined by these header files were not prefixed byPy\nand so could create name conflicts. For example,Python-ast.h\ndefined aYield\nmacro which conflicted with theYield\nname used by the Windows\nheader. Use the Pythonast\nmodule instead. (Contributed by Victor Stinner in bpo-43244.)Remove the compiler and parser functions using\nstruct _mod\ntype, because the public AST C API was removed:PyAST_Compile()\nPyAST_CompileEx()\nPyAST_CompileObject()\nPyFuture_FromAST()\nPyFuture_FromASTObject()\nPyParser_ASTFromFile()\nPyParser_ASTFromFileObject()\nPyParser_ASTFromFilename()\nPyParser_ASTFromString()\nPyParser_ASTFromStringObject()\nThese functions were undocumented and excluded from the limited C API. 
(Contributed by Victor Stinner in bpo-43244.)\nRemoved the pyarena.h header file with its functions:\n- PyArena_New()\n- PyArena_Free()\n- PyArena_Malloc()\n- PyArena_AddPyObject()\nThese functions were undocumented, excluded from the limited C API, and were only used internally by the compiler. (Contributed by Victor Stinner in bpo-43244.)\nThe PyThreadState.use_tracing member has been removed to optimize Python. (Contributed by Mark Shannon in bpo-43760.)\nNotable security feature in 3.10.7\u00b6\nConverting between int and str in bases other than 2 (binary), 4, 8 (octal), 16 (hexadecimal), or 32, such as base 10 (decimal), now raises a ValueError if the number of digits in string form is above a limit, to avoid potential denial-of-service attacks due to the algorithmic complexity. This is a mitigation for CVE-2020-10735. The limit can be configured or disabled by environment variable, command line flag, or sys APIs. See the integer string conversion length limitation documentation. The default limit is 4300 digits in string form.\nNotable security feature in 3.10.8\u00b6\nThe deprecated mailcap module now refuses to inject unsafe text (filenames, MIME types, parameters) into shell commands. Instead of using such text, it will warn and act as if a match was not found (or for test commands, as if the test failed). (Contributed by Petr Viktorin in gh-98966.)\nNotable changes in 3.10.12\u00b6\ntarfile\u00b6\nThe extraction methods in tarfile, and shutil.unpack_archive(), have a new filter argument that allows limiting tar features that may be surprising or dangerous, such as creating files outside the destination directory. See Extraction filters for details. In Python 3.12, use without the filter argument will show a DeprecationWarning. In Python 3.14, the default will switch to 'data'.
(Contributed by Petr Viktorin in PEP 706.)", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 17974}
{"url": "https://docs.python.org/3/howto/functional.html", "title": "Functional Programming HOWTO", "content": "Functional Programming
HOWTO\u00b6\n- Author:\nA. M. Kuchling\n- Release:\n0.32\nIn this document, we\u2019ll take a tour of Python\u2019s features suitable for\nimplementing programs in a functional style. After an introduction to the\nconcepts of functional programming, we\u2019ll look at language features such as\niterators and generators and relevant library modules such as\nitertools\nand functools\n.\nIntroduction\u00b6\nThis section explains the basic concept of functional programming; if you\u2019re just interested in learning about Python language features, skip to the next section on Iterators.\nProgramming languages support decomposing problems in several different ways:\nMost programming languages are procedural: programs are lists of instructions that tell the computer what to do with the program\u2019s input. C, Pascal, and even Unix shells are procedural languages.\nIn declarative languages, you write a specification that describes the problem to be solved, and the language implementation figures out how to perform the computation efficiently. SQL is the declarative language you\u2019re most likely to be familiar with; a SQL query describes the data set you want to retrieve, and the SQL engine decides whether to scan tables or use indexes, which subclauses should be performed first, etc.\nObject-oriented programs manipulate collections of objects. Objects have internal state and support methods that query or modify this internal state in some way. Smalltalk and Java are object-oriented languages. C++ and Python are languages that support object-oriented programming, but don\u2019t force the use of object-oriented features.\nFunctional programming decomposes a problem into a set of functions. Ideally, functions only take inputs and produce outputs, and don\u2019t have any internal state that affects the output produced for a given input. 
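The contrast between stateful and pure functions just described can be sketched in a few lines (the Tally class and add function below are invented for illustration, not part of the HOWTO):

```python
# Stateful: the result depends on hidden internal state.
class Tally:
    def __init__(self):
        self.total = 0

    def add(self, n):
        self.total += n   # side effect: mutates the object
        return self.total

# Pure: the result depends only on the arguments.
def add(total, n):
    return total + n

t = Tally()
print(t.add(5), t.add(5))    # 5 10 -- the second call remembers the first
print(add(0, 5), add(0, 5))  # 5 5  -- same inputs, same output, every time
```
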
Well-known functional languages include the ML family (Standard ML, OCaml, and other variants) and Haskell.\nThe designers of some computer languages choose to emphasize one particular approach to programming. This often makes it difficult to write programs that use a different approach. Other languages are multi-paradigm languages that support several different approaches. Lisp, C++, and Python are multi-paradigm; you can write programs or libraries that are largely procedural, object-oriented, or functional in all of these languages. In a large program, different sections might be written using different approaches; the GUI might be object-oriented while the processing logic is procedural or functional, for example.\nIn a functional program, input flows through a set of functions. Each function operates on its input and produces some output. Functional style discourages functions with side effects that modify internal state or make other changes that aren\u2019t visible in the function\u2019s return value. Functions that have no side effects at all are called purely functional. Avoiding side effects means not using data structures that get updated as a program runs; every function\u2019s output must only depend on its input.\nSome languages are very strict about purity and don\u2019t even have assignment\nstatements such as a=3\nor c = a + b\n, but it\u2019s difficult to avoid all\nside effects, such as printing to the screen or writing to a disk file. Another\nexample is a call to the print()\nor time.sleep()\nfunction, neither\nof which returns a useful value. Both are called only for their side effects\nof sending some text to the screen or pausing execution for a second.\nPython programs written in functional style usually won\u2019t go to the extreme of avoiding all I/O or all assignments; instead, they\u2019ll provide a functional-appearing interface but will use non-functional features internally. 
For example, the implementation of a function will still use assignments to local variables, but won\u2019t modify global variables or have other side effects.\nFunctional programming can be considered the opposite of object-oriented programming. Objects are little capsules containing some internal state along with a collection of method calls that let you modify this state, and programs consist of making the right set of state changes. Functional programming wants to avoid state changes as much as possible and works with data flowing between functions. In Python you might combine the two approaches by writing functions that take and return instances representing objects in your application (e-mail messages, transactions, etc.).\nFunctional design may seem like an odd constraint to work under. Why should you avoid objects and side effects? There are theoretical and practical advantages to the functional style:\nFormal provability.\nModularity.\nComposability.\nEase of debugging and testing.\nFormal provability\u00b6\nA theoretical benefit is that it\u2019s easier to construct a mathematical proof that a functional program is correct.\nFor a long time researchers have been interested in finding ways to mathematically prove programs correct. This is different from testing a program on numerous inputs and concluding that its output is usually correct, or reading a program\u2019s source code and concluding that the code looks right; the goal is instead a rigorous proof that a program produces the right result for all possible inputs.\nThe technique used to prove programs correct is to write down invariants, properties of the input data and of the program\u2019s variables that are always true. For each line of code, you then show that if invariants X and Y are true before the line is executed, the slightly different invariants X\u2019 and Y\u2019 are true after the line is executed. 
This continues until you reach the end of the program, at which point the invariants should match the desired conditions on the program\u2019s output.\nFunctional programming\u2019s avoidance of assignments arose because assignments are difficult to handle with this technique; assignments can break invariants that were true before the assignment without producing any new invariants that can be propagated onward.\nUnfortunately, proving programs correct is largely impractical and not relevant to Python software. Even trivial programs require proofs that are several pages long; the proof of correctness for a moderately complicated program would be enormous, and few or none of the programs you use daily (the Python interpreter, your XML parser, your web browser) could be proven correct. Even if you wrote down or generated a proof, there would then be the question of verifying the proof; maybe there\u2019s an error in it, and you wrongly believe you\u2019ve proved the program correct.\nModularity\u00b6\nA more practical benefit of functional programming is that it forces you to break apart your problem into small pieces. Programs are more modular as a result. It\u2019s easier to specify and write a small function that does one thing than a large function that performs a complicated transformation. Small functions are also easier to read and to check for errors.\nEase of debugging and testing\u00b6\nTesting and debugging a functional-style program is easier.\nDebugging is simplified because functions are generally small and clearly specified. When a program doesn\u2019t work, each function is an interface point where you can check that the data are correct. You can look at the intermediate inputs and outputs to quickly isolate the function that\u2019s responsible for a bug.\nTesting is easier because each function is a potential subject for a unit test. 
Functions don\u2019t depend on system state that needs to be replicated before running a test; instead you only have to synthesize the right input and then check that the output matches expectations.\nComposability\u00b6\nAs you work on a functional-style program, you\u2019ll write a number of functions with varying inputs and outputs. Some of these functions will be unavoidably specialized to a particular application, but others will be useful in a wide variety of programs. For example, a function that takes a directory path and returns all the XML files in the directory, or a function that takes a filename and returns its contents, can be applied to many different situations.\nOver time you\u2019ll form a personal library of utilities. Often you\u2019ll assemble new programs by arranging existing functions in a new configuration and writing a few functions specialized for the current task.\nIterators\u00b6\nI\u2019ll start by looking at a Python language feature that\u2019s an important foundation for writing functional-style programs: iterators.\nAn iterator is an object representing a stream of data; this object returns the\ndata one element at a time. A Python iterator must support a method called\n__next__()\nthat takes no arguments and always returns the next\nelement of the stream. If there are no more elements in the stream,\n__next__()\nmust raise the StopIteration\nexception.\nIterators don\u2019t have to be finite, though; it\u2019s perfectly reasonable to write\nan iterator that produces an infinite stream of data.\nThe built-in iter()\nfunction takes an arbitrary object and tries to return\nan iterator that will return the object\u2019s contents or elements, raising\nTypeError\nif the object doesn\u2019t support iteration. Several of Python\u2019s\nbuilt-in data types support iteration, the most common being lists and\ndictionaries. 
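A hand-written iterator makes the protocol concrete. The Countdown class below is an invented example implementing only the two methods involved, __iter__() and __next__():

```python
class Countdown:
    """Iterator counting down from n to 1, then raising StopIteration."""

    def __init__(self, n):
        self.n = n

    def __iter__(self):
        # An iterator is itself iterable: iter(it) returns it unchanged.
        return self

    def __next__(self):
        if self.n <= 0:
            raise StopIteration   # signals the end of the stream
        value = self.n
        self.n -= 1
        return value

print(list(Countdown(3)))   # [3, 2, 1]
```
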
An object is called iterable if you can get an iterator for it.\nYou can experiment with the iteration interface manually:\n>>> L = [1, 2, 3]\n>>> it = iter(L)\n>>> it\n<...iterator object at ...>\n>>> it.__next__()  # same as next(it)\n1\n>>> next(it)\n2\n>>> next(it)\n3\n>>> next(it)\nTraceback (most recent call last):\n  File \"<stdin>\", line 1, in <module>\nStopIteration\n>>>\nPython expects iterable objects in several different contexts, the most important being the for statement. In the statement for X in Y, Y must be an iterator or some object for which iter() can create an iterator. These two statements are equivalent:\nfor i in iter(obj):\n    print(i)\nfor i in obj:\n    print(i)\nIterators can be materialized as lists or tuples by using the list() or tuple() constructor functions:\n>>> L = [1, 2, 3]\n>>> iterator = iter(L)\n>>> t = tuple(iterator)\n>>> t\n(1, 2, 3)\nSequence unpacking also supports iterators: if you know an iterator will return N elements, you can unpack them into an N-tuple:\n>>> L = [1, 2, 3]\n>>> iterator = iter(L)\n>>> a, b, c = iterator\n>>> a, b, c\n(1, 2, 3)\nBuilt-in functions such as max() and min() can take a single iterator argument and will return the largest or smallest element. The \"in\" and \"not in\" operators also support iterators: X in iterator is true if X is found in the stream returned by the iterator. You\u2019ll run into obvious problems if the iterator is infinite; max() and min() will never return, and if the element X never appears in the stream, the \"in\" and \"not in\" operators won\u2019t return either.\nNote that you can only go forward in an iterator; there\u2019s no way to get the previous element, reset the iterator, or make a copy of it. Iterator objects can optionally provide these additional capabilities, but the iterator protocol only specifies the __next__() method.
Functions may therefore consume all of the iterator\u2019s output, and if you need to do something different with the same stream, you\u2019ll have to create a new iterator.\nData Types That Support Iterators\u00b6\nWe\u2019ve already seen how lists and tuples support iterators. In fact, any Python sequence type, such as strings, will automatically support creation of an iterator.\nCalling iter() on a dictionary returns an iterator that will loop over the dictionary\u2019s keys:\n>>> m = {'Jan': 1, 'Feb': 2, 'Mar': 3, 'Apr': 4, 'May': 5, 'Jun': 6,\n...      'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12}\n>>> for key in m:\n...     print(key, m[key])\nJan 1\nFeb 2\nMar 3\nApr 4\nMay 5\nJun 6\nJul 7\nAug 8\nSep 9\nOct 10\nNov 11\nDec 12\nNote that starting with Python 3.7, dictionary iteration order is guaranteed to be the same as the insertion order. In earlier versions, the behaviour was unspecified and could vary between implementations.\nApplying iter() to a dictionary always loops over the keys, but dictionaries have methods that return other iterators. If you want to iterate over values or key/value pairs, you can explicitly call the values() or items() methods to get an appropriate iterator.\nThe dict() constructor can accept an iterator that returns a finite stream of (key, value) tuples:\n>>> L = [('Italy', 'Rome'), ('France', 'Paris'), ('US', 'Washington DC')]\n>>> dict(iter(L))\n{'Italy': 'Rome', 'France': 'Paris', 'US': 'Washington DC'}\nFiles also support iteration by calling the readline() method until there are no more lines in the file. This means you can read each line of a file like this:\nfor line in file:\n    # do something for each line\n    ...\nSets can take their contents from an iterable and let you iterate over the set\u2019s elements:\n>>> S = {2, 3, 5, 7, 11, 13}\n>>> for i in S:\n...
print(i)\n2\n3\n5\n7\n11\n13\nGenerator expressions and list comprehensions\u00b6\nTwo common operations on an iterator\u2019s output are 1) performing some operation for every element, 2) selecting a subset of elements that meet some condition. For example, given a list of strings, you might want to strip off trailing whitespace from each line or extract all the strings containing a given substring.\nList comprehensions and generator expressions (short form: \u201clistcomps\u201d and \u201cgenexps\u201d) are a concise notation for such operations, borrowed from the functional programming language Haskell (https://www.haskell.org/). You can strip all the whitespace from a stream of strings with the following code:\n>>> line_list = [' line 1\\n', 'line 2 \\n', ' \\n', '']\n>>> # Generator expression -- returns iterator\n>>> stripped_iter = (line.strip() for line in line_list)\n>>> # List comprehension -- returns list\n>>> stripped_list = [line.strip() for line in line_list]\nYou can select only certain elements by adding an \"if\"\ncondition:\n>>> stripped_list = [line.strip() for line in line_list\n... if line != \"\"]\nWith a list comprehension, you get back a Python list; stripped_list\nis a\nlist containing the resulting lines, not an iterator. Generator expressions\nreturn an iterator that computes the values as necessary, not needing to\nmaterialize all the values at once. This means that list comprehensions aren\u2019t\nuseful if you\u2019re working with iterators that return an infinite stream or a very\nlarge amount of data. Generator expressions are preferable in these situations.\nGenerator expressions are surrounded by parentheses (\u201c()\u201d) and list comprehensions are surrounded by square brackets (\u201c[]\u201d). 
Generator expressions have the form:\n( expression for expr in sequence1\n             if condition1\n             for expr2 in sequence2\n             if condition2\n             for expr3 in sequence3 ...\n             if condition3\n             for exprN in sequenceN\n             if conditionN )\nAgain, for a list comprehension only the outside brackets are different (square brackets instead of parentheses).\nThe elements of the generated output will be the successive values of expression. The if clauses are all optional; if present, expression is only evaluated and added to the result when condition is true.\nGenerator expressions always have to be written inside parentheses, but the parentheses signalling a function call also count. If you want to create an iterator that will be immediately passed to a function you can write:\nobj_total = sum(obj.count for obj in list_all_objects())\nThe for...in clauses contain the sequences to be iterated over. The sequences do not have to be the same length, because they are iterated over from left to right, not in parallel. For each element in sequence1, sequence2 is looped over from the beginning. sequence3 is then looped over for each resulting pair of elements from sequence1 and sequence2.\nTo put it another way, a list comprehension or generator expression is equivalent to the following Python code:\nfor expr1 in sequence1:\n    if not (condition1):\n        continue  # Skip this element\n    for expr2 in sequence2:\n        if not (condition2):\n            continue  # Skip this element\n        ...\n        for exprN in sequenceN:\n            if not (conditionN):\n                continue  # Skip this element\n            # Output the value of\n            # the expression.\nThis means that when there are multiple for...in clauses but no if clauses, the length of the resulting output will be equal to the product of the lengths of all the sequences.
If you have two lists of length 3, the output\nlist is 9 elements long:\n>>> seq1 = 'abc'\n>>> seq2 = (1, 2, 3)\n>>> [(x, y) for x in seq1 for y in seq2]\n[('a', 1), ('a', 2), ('a', 3),\n('b', 1), ('b', 2), ('b', 3),\n('c', 1), ('c', 2), ('c', 3)]\nTo avoid introducing an ambiguity into Python\u2019s grammar, if expression\nis\ncreating a tuple, it must be surrounded with parentheses. The first list\ncomprehension below is a syntax error, while the second one is correct:\n# Syntax error\n[x, y for x in seq1 for y in seq2]\n# Correct\n[(x, y) for x in seq1 for y in seq2]\nGenerators\u00b6\nGenerators are a special class of functions that simplify the task of writing iterators. Regular functions compute a value and return it, but generators return an iterator that returns a stream of values.\nYou\u2019re doubtless familiar with how regular function calls work in Python or C.\nWhen you call a function, it gets a private namespace where its local variables\nare created. When the function reaches a return\nstatement, the local\nvariables are destroyed and the value is returned to the caller. A later call\nto the same function creates a new private namespace and a fresh set of local\nvariables. But, what if the local variables weren\u2019t thrown away on exiting a\nfunction? What if you could later resume the function where it left off? This\nis what generators provide; they can be thought of as resumable functions.\nHere\u2019s the simplest example of a generator function:\n>>> def generate_ints(N):\n... for i in range(N):\n... yield i\nAny function containing a yield\nkeyword is a generator function;\nthis is detected by Python\u2019s bytecode compiler which compiles the\nfunction specially as a result.\nWhen you call a generator function, it doesn\u2019t return a single value; instead it\nreturns a generator object that supports the iterator protocol. On executing\nthe yield\nexpression, the generator outputs the value of i\n, similar to a\nreturn\nstatement. 
The big difference between yield and a return statement is that on reaching a yield the generator\u2019s state of execution is suspended and local variables are preserved. On the next call to the generator\u2019s __next__() method, the function will resume executing.\nHere\u2019s a sample usage of the generate_ints() generator:\n>>> gen = generate_ints(3)\n>>> gen\n<generator object generate_ints at ...>\n>>> next(gen)\n0\n>>> next(gen)\n1\n>>> next(gen)\n2\n>>> next(gen)\nTraceback (most recent call last):\n  File \"stdin\", line 1, in <module>\n  File \"stdin\", line 2, in generate_ints\nStopIteration\nYou could equally write for i in generate_ints(5), or a, b, c = generate_ints(3).\nInside a generator function, return value causes StopIteration(value) to be raised from the __next__() method. Once this happens, or the bottom of the function is reached, the procession of values ends and the generator cannot yield any further values.\nYou could achieve the effect of generators manually by writing your own class and storing all the local variables of the generator as instance variables. For example, returning a list of integers could be done by setting self.count to 0, and having the __next__() method increment self.count and return it.\nHowever, for a moderately complicated generator, writing a corresponding class can be much messier.\nThe test suite included with Python\u2019s library, Lib/test/test_generators.py, contains a number of more interesting examples.
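The StopIteration(value) behaviour described above can be observed directly by catching the exception; summarize() below is a contrived generator for illustration:

```python
def summarize():
    yield "working"
    return "all done"   # return inside a generator -> StopIteration("all done")

gen = summarize()
print(next(gen))        # working

try:
    next(gen)
except StopIteration as exc:
    print(exc.value)    # all done
```
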
Here\u2019s one generator that implements an in-order traversal of a tree using generators recursively.\n# A recursive generator that generates Tree leaves in in-order.\ndef inorder(t):\n    if t:\n        for x in inorder(t.left):\n            yield x\n        yield t.label\n        for x in inorder(t.right):\n            yield x\nTwo other examples in test_generators.py produce solutions for the N-Queens problem (placing N queens on an NxN chess board so that no queen threatens another) and the Knight\u2019s Tour (finding a route that takes a knight to every square of an NxN chessboard without visiting any square twice).\nPassing values into a generator\u00b6\nIn Python 2.4 and earlier, generators only produced output. Once a generator\u2019s code was invoked to create an iterator, there was no way to pass any new information into the function when its execution is resumed. You could hack together this ability by making the generator look at a global variable or by passing in some mutable object that callers then modify, but these approaches are messy.\nIn Python 2.5 there\u2019s a simple way to pass values into a generator. yield became an expression, returning a value that can be assigned to a variable or otherwise operated on:\nval = (yield i)\nI recommend that you always put parentheses around a yield expression when you\u2019re doing something with the returned value, as in the above example. The parentheses aren\u2019t always necessary, but it\u2019s easier to always add them instead of having to remember when they\u2019re needed.\n(PEP 342 explains the exact rules, which are that a yield-expression must always be parenthesized except when it occurs at the top-level expression on the right-hand side of an assignment. This means you can write val = yield i but have to use parentheses when there\u2019s an operation, as in val = (yield i) + 12.)\nValues are sent into a generator by calling its send(value) method.
This method resumes the generator\u2019s code and the yield expression returns the specified value. If the regular __next__() method is called, the yield returns None.\nHere\u2019s a simple counter that increments by 1 and allows changing the value of the internal counter.\ndef counter(maximum):\n    i = 0\n    while i < maximum:\n        val = (yield i)\n        # If value provided, change counter\n        if val is not None:\n            i = val\n        else:\n            i += 1\nAnd here\u2019s an example of changing the counter:\n>>> it = counter(10)\n>>> next(it)\n0\n>>> next(it)\n1\n>>> it.send(8)\n8\n>>> next(it)\n9\n>>> next(it)\nTraceback (most recent call last):\n  File \"t.py\", line 15, in <module>\n    it.next()\nStopIteration\nBecause yield will often be returning None, you should always check for this case. Don\u2019t just use its value in expressions unless you\u2019re sure that the send() method will be the only method used to resume your generator function.\nIn addition to send(), there are two other methods on generators:\n- throw(value) is used to raise an exception inside the generator; the exception is raised by the yield expression where the generator\u2019s execution is paused.\n- close() sends a GeneratorExit exception to the generator to terminate the iteration. On receiving this exception, the generator\u2019s code must either raise GeneratorExit or StopIteration; catching the exception and doing anything else is illegal and will trigger a RuntimeError. close() will also be called by Python\u2019s garbage collector when the generator is garbage-collected.\nIf you need to run cleanup code when a GeneratorExit occurs, I suggest using a try: ... finally: suite instead of catching GeneratorExit.\nThe cumulative effect of these changes is to turn generators from one-way producers of information into both producers and consumers.\nGenerators also become coroutines, a more generalized form of subroutines. Subroutines are entered at one point and exited at another point (the top of the function, and a return statement), but coroutines can be entered, exited, and resumed at many different points (the yield statements).\nBuilt-in functions\u00b6\nLet\u2019s look in more detail at built-in functions often used with iterators.\nTwo of Python\u2019s built-in functions, map() and filter(), duplicate the features of generator expressions:\nmap(f, iterA, iterB, ...) returns an iterator over the sequence f(iterA[0], iterB[0]), f(iterA[1], iterB[1]), f(iterA[2], iterB[2]), ....\n>>> def upper(s):\n...     return s.upper()\n>>> list(map(upper, ['sentence', 'fragment']))\n['SENTENCE', 'FRAGMENT']\n>>> [upper(s) for s in ['sentence', 'fragment']]\n['SENTENCE', 'FRAGMENT']\nYou can of course achieve the same effect with a list comprehension.\nfilter(predicate, iter) returns an iterator over all the sequence elements that meet a certain condition, and is similarly duplicated by list comprehensions. A predicate is a function that returns the truth value of some condition; for use with filter(), the predicate must take a single value.\n>>> def is_even(x):\n...     return (x % 2) == 0\n>>> list(filter(is_even, range(10)))\n[0, 2, 4, 6, 8]\nThis can also be written as a list comprehension:\n>>> list(x for x in range(10) if is_even(x))\n[0, 2, 4, 6, 8]\nenumerate(iter, start=0) counts off the elements in the iterable, returning 2-tuples containing the count (from start) and each element.\n>>> for item in enumerate(['subject', 'verb', 'object']):\n...     print(item)\n(0, 'subject')\n(1, 'verb')\n(2, 'object')\nenumerate() is often used when looping through a list and recording the indexes at which certain conditions are met:\nf = open('data.txt', 'r')\nfor i, line in enumerate(f):\n    if line.strip() == '':\n        print('Blank line at line #%i' % i)\nsorted(iterable, key=None, reverse=False) collects all the elements of the iterable into a list, sorts the list, and returns the sorted result. The key and reverse arguments are passed through to the constructed list\u2019s sort() method.\n>>> import random\n>>> # Generate 8 random numbers between [0, 10000)\n>>> rand_list = random.sample(range(10000), 8)\n>>> rand_list\n[769, 7953, 9828, 6431, 8442, 9878, 6213, 2207]\n>>> sorted(rand_list)\n[769, 2207, 6213, 6431, 7953, 8442, 9828, 9878]\n>>> sorted(rand_list, reverse=True)\n[9878, 9828, 8442, 7953, 6431, 6213, 2207, 769]\n(For a more detailed discussion of sorting, see the Sorting Techniques.)\nThe any(iter) and all(iter) built-ins look at the truth values of an iterable\u2019s contents. any() returns True if any element in the iterable is a true value, and all() returns True if all of the elements are true values:\n>>> any([0, 1, 0])\nTrue\n>>> any([0, 0, 0])\nFalse\n>>> any([1, 1, 1])\nTrue\n>>> all([0, 1, 0])\nFalse\n>>> all([0, 0, 0])\nFalse\n>>> all([1, 1, 1])\nTrue\nzip(iterA, iterB, ...) takes one element from each iterable and returns them in a tuple:\nzip(['a', 'b', 'c'], (1, 2, 3)) =>\n  ('a', 1), ('b', 2), ('c', 3)\nIt doesn\u2019t construct an in-memory list and exhaust all the input iterators before returning; instead tuples are constructed and returned only if they\u2019re requested. (The technical term for this behaviour is lazy evaluation.)\nThis iterator is intended to be used with iterables that are all of the same length.
If the iterables are of different lengths, the resulting stream will be the same length as the shortest iterable.\nzip(['a', 'b'], (1, 2, 3)) =>\n('a', 1), ('b', 2)\nYou should avoid doing this, though, because an element may be taken from the longer iterators and discarded. This means you can\u2019t go on to use the iterators further because you risk skipping a discarded element.\nThe itertools module\u00b6\nThe itertools\nmodule contains a number of commonly used iterators as well\nas functions for combining several iterators. This section will introduce the\nmodule\u2019s contents by showing small examples.\nThe module\u2019s functions fall into a few broad classes:\nFunctions that create a new iterator based on an existing iterator.\nFunctions for treating an iterator\u2019s elements as function arguments.\nFunctions for selecting portions of an iterator\u2019s output.\nA function for grouping an iterator\u2019s output.\nCreating new iterators\u00b6\nitertools.count(start, step)\nreturns an infinite\nstream of evenly spaced values. You can optionally supply the starting number,\nwhich defaults to 0, and the interval between numbers, which defaults to 1:\nitertools.count() =>\n0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...\nitertools.count(10) =>\n10, 11, 12, 13, 14, 15, 16, 17, 18, 19, ...\nitertools.count(10, 5) =>\n10, 15, 20, 25, 30, 35, 40, 45, 50, 55, ...\nitertools.cycle(iter)\nsaves a copy of the contents of\na provided iterable and returns a new iterator that returns its elements from\nfirst to last. 
The new iterator will repeat these elements infinitely.\nitertools.cycle([1, 2, 3, 4, 5]) =>\n1, 2, 3, 4, 5, 1, 2, 3, 4, 5, ...\nitertools.repeat(elem, [n])\nreturns the provided\nelement n times, or returns the element endlessly if n is not provided.\nitertools.repeat('abc') =>\nabc, abc, abc, abc, abc, abc, abc, abc, abc, abc, ...\nitertools.repeat('abc', 5) =>\nabc, abc, abc, abc, abc\nitertools.chain(iterA, iterB, ...)\ntakes an arbitrary\nnumber of iterables as input, and returns all the elements of the first\niterator, then all the elements of the second, and so on, until all of the\niterables have been exhausted.\nitertools.chain(['a', 'b', 'c'], (1, 2, 3)) =>\na, b, c, 1, 2, 3\nitertools.islice(iter, [start], stop, [step])\nreturns\na stream that\u2019s a slice of the iterator. With a single stop argument, it\nwill return the first stop elements. If you supply a starting index, you\u2019ll\nget stop-start elements, and if you supply a value for step, elements\nwill be skipped accordingly. Unlike Python\u2019s string and list slicing, you can\u2019t\nuse negative values for start, stop, or step.\nitertools.islice(range(10), 8) =>\n0, 1, 2, 3, 4, 5, 6, 7\nitertools.islice(range(10), 2, 8) =>\n2, 3, 4, 5, 6, 7\nitertools.islice(range(10), 2, 8, 2) =>\n2, 4, 6\nitertools.tee(iter, [n])\nreplicates an iterator; it\nreturns n independent iterators that will all return the contents of the\nsource iterator.\nIf you don\u2019t supply a value for n, the default is 2. Replicating iterators\nrequires saving some of the contents of the source iterator, so this can consume\nsignificant memory if the iterator is large and one of the new iterators is\nconsumed more than the others.\nitertools.tee( itertools.count() ) =>\niterA, iterB\nwhere iterA ->\n0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...\nand iterB ->\n0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...\nCalling functions on elements\u00b6\nThe operator\nmodule contains a set of functions corresponding to Python\u2019s\noperators. 
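The iterator-creating functions above can be sketched in a few lines. Since `count()` and `cycle()` are infinite, `islice()` is used below to take a finite prefix (this combination is an illustration, not part of the original examples):

```python
import itertools

# count()/cycle() are infinite streams; islice() takes a finite slice.
evens = list(itertools.islice(itertools.count(0, 2), 5))
ring = list(itertools.islice(itertools.cycle([1, 2, 3]), 7))

# repeat() with a count; chain() concatenates iterables end to end.
echo = list(itertools.repeat("abc", 3))
joined = list(itertools.chain(["a", "b"], (1, 2)))

# tee() yields independent iterators over the same underlying stream.
it_a, it_b = itertools.tee(itertools.count())
head_a = [next(it_a) for _ in range(3)]
head_b = [next(it_b) for _ in range(3)]
```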
Some examples are operator.add(a, b)\n(adds\ntwo values), operator.ne(a, b)\n(same as a != b\n), and\noperator.attrgetter('id')\n(returns a callable that fetches the .id\nattribute).\nitertools.starmap(func, iter)\nassumes that the\niterable will return a stream of tuples, and calls func using these tuples as\nthe arguments:\nitertools.starmap(os.path.join,\n[('/bin', 'python'), ('/usr', 'bin', 'java'),\n('/usr', 'bin', 'perl'), ('/usr', 'bin', 'ruby')])\n=>\n/bin/python, /usr/bin/java, /usr/bin/perl, /usr/bin/ruby\nSelecting elements\u00b6\nAnother group of functions chooses a subset of an iterator\u2019s elements based on a predicate.\nitertools.filterfalse(predicate, iter)\nis the\nopposite of filter()\n, returning all elements for which the predicate\nreturns false:\nitertools.filterfalse(is_even, itertools.count()) =>\n1, 3, 5, 7, 9, 11, 13, 15, ...\nitertools.takewhile(predicate, iter)\nreturns\nelements for as long as the predicate returns true. Once the predicate returns\nfalse, the iterator will signal the end of its results.\ndef less_than_10(x):\nreturn x < 10\nitertools.takewhile(less_than_10, itertools.count()) =>\n0, 1, 2, 3, 4, 5, 6, 7, 8, 9\nitertools.takewhile(is_even, itertools.count()) =>\n0\nitertools.dropwhile(predicate, iter)\ndiscards\nelements while the predicate returns true, and then returns the rest of the\niterable\u2019s results.\nitertools.dropwhile(less_than_10, itertools.count()) =>\n10, 11, 12, 13, 14, 15, 16, 17, 18, 19, ...\nitertools.dropwhile(is_even, itertools.count()) =>\n1, 2, 3, 4, 5, 6, 7, 8, 9, 10, ...\nitertools.compress(data, selectors)\ntakes two\niterators and returns only those elements of data for which the corresponding\nelement of selectors is true, stopping whenever either one is exhausted:\nitertools.compress([1, 2, 3, 4, 5], [True, True, False, False, True]) =>\n1, 2, 5\nCombinatoric functions\u00b6\nThe itertools.combinations(iterable, r)\nreturns an iterator giving all possible r-tuple combinations of 
the\nelements contained in iterable.\nitertools.combinations([1, 2, 3, 4, 5], 2) =>\n(1, 2), (1, 3), (1, 4), (1, 5),\n(2, 3), (2, 4), (2, 5),\n(3, 4), (3, 5),\n(4, 5)\nitertools.combinations([1, 2, 3, 4, 5], 3) =>\n(1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 3, 4), (1, 3, 5), (1, 4, 5),\n(2, 3, 4), (2, 3, 5), (2, 4, 5),\n(3, 4, 5)\nThe elements within each tuple remain in the same order as\niterable returned them. For example, the number 1 is always before\n2, 3, 4, or 5 in the examples above. A similar function,\nitertools.permutations(iterable, r=None)\n,\nremoves this constraint on the order, returning all possible\narrangements of length r:\nitertools.permutations([1, 2, 3, 4, 5], 2) =>\n(1, 2), (1, 3), (1, 4), (1, 5),\n(2, 1), (2, 3), (2, 4), (2, 5),\n(3, 1), (3, 2), (3, 4), (3, 5),\n(4, 1), (4, 2), (4, 3), (4, 5),\n(5, 1), (5, 2), (5, 3), (5, 4)\nitertools.permutations([1, 2, 3, 4, 5]) =>\n(1, 2, 3, 4, 5), (1, 2, 3, 5, 4), (1, 2, 4, 3, 5),\n...\n(5, 4, 3, 2, 1)\nIf you don\u2019t supply a value for r the length of the iterable is used, meaning that all the elements are permuted.\nNote that these functions produce all of the possible combinations by position and don\u2019t require that the contents of iterable are unique:\nitertools.permutations('aba', 3) =>\n('a', 'b', 'a'), ('a', 'a', 'b'), ('b', 'a', 'a'),\n('b', 'a', 'a'), ('a', 'a', 'b'), ('a', 'b', 'a')\nThe identical tuple ('a', 'a', 'b')\noccurs twice, but the two \u2018a\u2019\nstrings came from different positions.\nThe itertools.combinations_with_replacement(iterable, r)\nfunction relaxes a different constraint: elements can be repeated\nwithin a single tuple. 
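The selecting and combinatoric functions above can be exercised with small finite inputs; the following sketch (with made-up data) shows the shapes of their results:

```python
import itertools

def is_even(x):
    return x % 2 == 0

# filterfalse(): elements for which the predicate returns false.
odds = list(itertools.islice(itertools.filterfalse(is_even, itertools.count()), 4))

# takewhile()/dropwhile(): split a stream at the first failing element.
taken = list(itertools.takewhile(lambda x: x < 3, [0, 1, 2, 5, 1]))
dropped = list(itertools.dropwhile(lambda x: x < 3, [0, 1, 2, 5, 1]))

# compress(): keep the data elements whose selector is true.
kept = list(itertools.compress([1, 2, 3, 4, 5], [True, True, False, False, True]))

# starmap(): call a function with each tuple unpacked as its arguments.
powers = list(itertools.starmap(pow, [(2, 3), (3, 2)]))

# combinations() keeps input order within each tuple; permutations() does not.
combos = list(itertools.combinations([1, 2, 3], 2))
perms = list(itertools.permutations([1, 2, 3], 2))
```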
Conceptually an element is selected for the\nfirst position of each tuple and then is replaced before the second\nelement is selected.\nitertools.combinations_with_replacement([1, 2, 3, 4, 5], 2) =>\n(1, 1), (1, 2), (1, 3), (1, 4), (1, 5),\n(2, 2), (2, 3), (2, 4), (2, 5),\n(3, 3), (3, 4), (3, 5),\n(4, 4), (4, 5),\n(5, 5)\nGrouping elements\u00b6\nThe last function I\u2019ll discuss, itertools.groupby(iter, key_func=None)\n, is the most complicated. key_func(elem)\nis a function\nthat can compute a key value for each element returned by the iterable. If you\ndon\u2019t supply a key function, the key is simply each element itself.\ngroupby()\ncollects all the consecutive elements from the\nunderlying iterable that have the same key value, and returns a stream of\n2-tuples containing a key value and an iterator for the elements with that key.\ncity_list = [('Decatur', 'AL'), ('Huntsville', 'AL'), ('Selma', 'AL'),\n('Anchorage', 'AK'), ('Nome', 'AK'),\n('Flagstaff', 'AZ'), ('Phoenix', 'AZ'), ('Tucson', 'AZ'),\n...\n]\ndef get_state(city_state):\nreturn city_state[1]\nitertools.groupby(city_list, get_state) =>\n('AL', iterator-1),\n('AK', iterator-2),\n('AZ', iterator-3), ...\nwhere\niterator-1 =>\n('Decatur', 'AL'), ('Huntsville', 'AL'), ('Selma', 'AL')\niterator-2 =>\n('Anchorage', 'AK'), ('Nome', 'AK')\niterator-3 =>\n('Flagstaff', 'AZ'), ('Phoenix', 'AZ'), ('Tucson', 'AZ')\ngroupby()\nassumes that the underlying iterable\u2019s contents will\nalready be sorted based on the key. Note that the returned iterators also use\nthe underlying iterable, so you have to consume the results of iterator-1 before\nrequesting iterator-2 and its corresponding key.\nThe functools module\u00b6\nThe functools\nmodule contains some higher-order functions.\nA higher-order function takes one or more functions as input and returns a\nnew function. 
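A runnable sketch of `combinations_with_replacement()` and `groupby()`, using a shortened version of the city list above (remember that `groupby()` only groups *consecutive* equal keys, so the input must already be sorted by the key):

```python
import itertools

# combinations_with_replacement(): elements may repeat within a tuple.
cwr = list(itertools.combinations_with_replacement([1, 2, 3], 2))

# Input already sorted by state, as groupby() requires.
city_list = [('Anchorage', 'AK'), ('Decatur', 'AL'),
             ('Huntsville', 'AL'), ('Flagstaff', 'AZ')]

def get_state(city_state):
    return city_state[1]

# Consume each group's iterator fully before advancing to the next key.
grouped = {state: [city for city, _ in group]
           for state, group in itertools.groupby(city_list, get_state)}
```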
The most useful tool in this module is the\nfunctools.partial()\nfunction.\nFor programs written in a functional style, you\u2019ll sometimes want to construct\nvariants of existing functions that have some of the parameters filled in.\nConsider a Python function f(a, b, c)\n; you may wish to create a new function\ng(b, c)\nthat\u2019s equivalent to f(1, b, c)\n; you\u2019re filling in a value for\none of f()\n\u2019s parameters. This is called \u201cpartial function application\u201d.\nThe constructor for partial()\ntakes the arguments\n(function, arg1, arg2, ..., kwarg1=value1, kwarg2=value2)\n. The resulting\nobject is callable, so you can just call it to invoke function\nwith the\nfilled-in arguments.\nHere\u2019s a small but realistic example:\nimport functools\ndef log(message, subsystem):\n\"\"\"Write the contents of 'message' to the specified subsystem.\"\"\"\nprint('%s: %s' % (subsystem, message))\n...\nserver_log = functools.partial(log, subsystem='server')\nserver_log('Unable to open socket')\nfunctools.reduce(func, iter, [initial_value])\ncumulatively performs an operation on all the iterable\u2019s elements and,\ntherefore, can\u2019t be applied to infinite iterables. func must be a function\nthat takes two elements and returns a single value. functools.reduce()\ntakes the first two elements A and B returned by the iterator and calculates\nfunc(A, B)\n. It then requests the third element, C, calculates\nfunc(func(A, B), C)\n, combines this result with the fourth element returned,\nand continues until the iterable is exhausted. If the iterable returns no\nvalues at all, a TypeError\nexception is raised. 
If the initial value is\nsupplied, it\u2019s used as a starting point and func(initial_value, A)\nis the\nfirst calculation.\n>>> import operator, functools\n>>> functools.reduce(operator.concat, ['A', 'BB', 'C'])\n'ABBC'\n>>> functools.reduce(operator.concat, [])\nTraceback (most recent call last):\n...\nTypeError: reduce() of empty sequence with no initial value\n>>> functools.reduce(operator.mul, [1, 2, 3], 1)\n6\n>>> functools.reduce(operator.mul, [], 1)\n1\nIf you use operator.add()\nwith functools.reduce()\n, you\u2019ll add up all the\nelements of the iterable. This case is so common that there\u2019s a special\nbuilt-in called sum()\nto compute it:\n>>> import functools, operator\n>>> functools.reduce(operator.add, [1, 2, 3, 4], 0)\n10\n>>> sum([1, 2, 3, 4])\n10\n>>> sum([])\n0\nFor many uses of functools.reduce()\n, though, it can be clearer to just\nwrite the obvious for\nloop:\nimport functools\n# Instead of:\nproduct = functools.reduce(operator.mul, [1, 2, 3], 1)\n# You can write:\nproduct = 1\nfor i in [1, 2, 3]:\nproduct *= i\nA related function is itertools.accumulate(iterable, func=operator.add)\n. It performs the same calculation, but instead of\nreturning only the final result, accumulate()\nreturns an iterator\nthat also yields each partial result:\nitertools.accumulate([1, 2, 3, 4, 5]) =>\n1, 3, 6, 10, 15\nitertools.accumulate([1, 2, 3, 4, 5], operator.mul) =>\n1, 2, 6, 24, 120\nThe operator module\u00b6\nThe operator\nmodule was mentioned earlier. It contains a set of\nfunctions corresponding to Python\u2019s operators. 
These functions are often useful\nin functional-style code because they save you from writing trivial functions\nthat perform a single operation.\nSome of the functions in this module are:\nMath operations:\nadd()\n,sub()\n,mul()\n,floordiv()\n,abs()\n, \u2026Logical operations:\nnot_()\n,truth()\n.Bitwise operations:\nand_()\n,or_()\n,invert()\n.Comparisons:\neq()\n,ne()\n,lt()\n,le()\n,gt()\n, andge()\n.Object identity:\nis_()\n,is_not()\n.\nConsult the operator module\u2019s documentation for a complete list.\nSmall functions and the lambda expression\u00b6\nWhen writing functional-style programs, you\u2019ll often need little functions that act as predicates or that combine elements in some way.\nIf there\u2019s a Python built-in or a module function that\u2019s suitable, you don\u2019t need to define a new function at all:\nstripped_lines = [line.strip() for line in lines]\nexisting_files = filter(os.path.exists, file_list)\nIf the function you need doesn\u2019t exist, you need to write it. One way to write\nsmall functions is to use the lambda\nexpression. lambda\ntakes a\nnumber of parameters and an expression combining these parameters, and creates\nan anonymous function that returns the value of the expression:\nadder = lambda x, y: x+y\nprint_assign = lambda name, value: name + '=' + str(value)\nAn alternative is to just use the def\nstatement and define a function in the\nusual way:\ndef adder(x, y):\nreturn x + y\ndef print_assign(name, value):\nreturn name + '=' + str(value)\nWhich alternative is preferable? That\u2019s a style question; my usual course is to\navoid using lambda\n.\nOne reason for my preference is that lambda\nis quite limited in the\nfunctions it can define. The result has to be computable as a single\nexpression, which means you can\u2019t have multiway if... elif... else\ncomparisons or try... except\nstatements. 
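A few of the `operator` functions listed above in action, plus a side-by-side of the `lambda` and `def` forms (the `itemgetter()` key function is a sibling of the `attrgetter()` mentioned earlier):

```python
import operator

# Each operator function mirrors the corresponding Python operator.
results = [operator.add(2, 3), operator.mul(2, 3), operator.floordiv(7, 2)]
checks = [operator.eq(1, 1), operator.ne(1, 2), operator.is_(None, None)]

# A common use: a key function for sorting, instead of a trivial lambda.
pairs = [('b', 2), ('a', 1), ('c', 3)]
by_first = sorted(pairs, key=operator.itemgetter(0))

# The lambda expression and the def statement define the same function.
adder_lambda = lambda x, y: x + y

def adder_def(x, y):
    return x + y
```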
If you try to do too much in a\nlambda\nstatement, you\u2019ll end up with an overly complicated expression that\u2019s\nhard to read. Quick, what\u2019s the following code doing?\nimport functools\ntotal = functools.reduce(lambda a, b: (0, a[1] + b[1]), items)[1]\nYou can figure it out, but it takes time to disentangle the expression to figure\nout what\u2019s going on. Using a short nested def\nstatements makes things a\nlittle bit better:\nimport functools\ndef combine(a, b):\nreturn 0, a[1] + b[1]\ntotal = functools.reduce(combine, items)[1]\nBut it would be best of all if I had simply used a for\nloop:\ntotal = 0\nfor a, b in items:\ntotal += b\nOr the sum()\nbuilt-in and a generator expression:\ntotal = sum(b for a, b in items)\nMany uses of functools.reduce()\nare clearer when written as for\nloops.\nFredrik Lundh once suggested the following set of rules for refactoring uses of\nlambda\n:\nWrite a lambda function.\nWrite a comment explaining what the heck that lambda does.\nStudy the comment for a while, and think of a name that captures the essence of the comment.\nConvert the lambda to a def statement, using that name.\nRemove the comment.\nI really like these rules, but you\u2019re free to disagree about whether this lambda-free style is better.\nRevision History and Acknowledgements\u00b6\nThe author would like to thank the following people for offering suggestions, corrections and assistance with various drafts of this article: Ian Bicking, Nick Coghlan, Nick Efford, Raymond Hettinger, Jim Jewett, Mike Krell, Leandro Lameiro, Jussi Salmela, Collin Winter, Blake Winton.\nVersion 0.1: posted June 30 2006.\nVersion 0.11: posted July 1 2006. Typo fixes.\nVersion 0.2: posted July 10 2006. Merged genexp and listcomp sections into one. 
Typo fixes.\nVersion 0.21: Added more references suggested on the tutor mailing list.\nVersion 0.30: Adds a section on the functional\nmodule written by Collin\nWinter; adds short section on the operator module; a few other edits.\nReferences\u00b6\nGeneral\u00b6\nStructure and Interpretation of Computer Programs, by Harold Abelson and Gerald Jay Sussman with Julie Sussman. The book can be found at https://mitpress.mit.edu/sicp. In this classic textbook of computer science, chapters 2 and 3 discuss the use of sequences and streams to organize the data flow inside a program. The book uses Scheme for its examples, but many of the design approaches described in these chapters are applicable to functional-style Python code.\nhttps://defmacro.org/2006/06/19/fp.html: A general introduction to functional programming that uses Java examples and has a lengthy historical introduction.\nhttps://en.wikipedia.org/wiki/Functional_programming: General Wikipedia entry describing functional programming.\nhttps://en.wikipedia.org/wiki/Coroutine: Entry for coroutines.\nhttps://en.wikipedia.org/wiki/Partial_application: Entry for the concept of partial function application.\nhttps://en.wikipedia.org/wiki/Currying: Entry for the concept of currying.\nPython-specific\u00b6\nhttps://gnosis.cx/TPiP/: The first chapter of David Mertz\u2019s book Text Processing in Python discusses functional programming for text processing, in the section titled \u201cUtilizing Higher-Order Functions in Text Processing\u201d.\nMertz also wrote a 3-part series of articles on functional programming for IBM\u2019s DeveloperWorks site; see part 1, part 2, and part 3,\nPython documentation\u00b6\nDocumentation for the itertools\nmodule.\nDocumentation for the functools\nmodule.\nDocumentation for the operator\nmodule.\nPEP 289: \u201cGenerator Expressions\u201d\nPEP 342: \u201cCoroutines via Enhanced Generators\u201d describes the new generator features in Python 2.5.", "code_snippets": [" ", " ", " ", "\n ", 

"\n ", "\n ", "\n\n", "\n ", " ", "\n\n", " ", " ", "\n ", " ", "\n ", " ", "\n ", " ", " ", "\n\n", "\n", " ", "\n ", " ", " ", " ", " ", " ", "\n", " ", "\n ", " ", " ", " ", "\n", " ", "\n ", " ", " ", " ", " ", " ", "\n", "\n\n", " ", "\n", "\n ", " ", " ", " ", "\n ", "\n\n", " ", " ", " ", "\n", "\n", "\n", " ", " ", " ", "\n", "\n", " ", "\n", "\n", "\n", ": ", "\n", " ", " ", " ", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", " ", " ", " ", " ", " ", " ", "\n\n", "\n", " ", " ", "\n", " ", " ", " ", " ", " ", "\n ", " ", " ", "\n", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", "\n\n", " ", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", " ", " ", " ", "\n\n", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n", " ", "\n ", " ", " ", " ", "\n\n", " ", "\n ", " ", " ", " ", " ", " ", "\n", "\n", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n", "\n", " ", "\n ", " ", " ", " ", " ", "\n\n", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", "\n ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 10864} +{"url": "https://docs.python.org/3/library/resource.html", "title": " \u2014 Resource usage information", "content": "resource\n\u2014 Resource usage information\u00b6\nThis module provides basic mechanisms for measuring and controlling system resources utilized by a program.\nAvailability: Unix, not WASI.\nSymbolic constants are used to specify particular system resources and to request usage information about either the current process or its children.\nAn OSError\nis raised on syscall failure.\nResource Limits\u00b6\nResources usage can be limited using the setrlimit()\nfunction described\nbelow. Each resource is controlled by a pair of limits: a soft limit and a hard\nlimit. The soft limit is the current limit, and may be lowered or raised by a\nprocess over time. 
The soft limit can never exceed the hard limit. The hard\nlimit can be lowered to any value greater than the soft limit, but not raised.\n(Only processes with the effective UID of the super-user can raise a hard\nlimit.)\nThe specific resources that can be limited are system dependent. They are described in the getrlimit(2) man page. The resources listed below are supported when the underlying operating system supports them; resources which cannot be checked or controlled by the operating system are not defined in this module for those platforms.\n- resource.RLIM_INFINITY\u00b6\nConstant used to represent the limit for an unlimited resource.\n- resource.getrlimit(resource)\u00b6\nReturns a tuple\n(soft, hard)\nwith the current soft and hard limits of resource. RaisesValueError\nif an invalid resource is specified, orerror\nif the underlying system call fails unexpectedly.\n- resource.setrlimit(resource, limits)\u00b6\nSets new limits of consumption of resource. The limits argument must be a tuple\n(soft, hard)\nof two integers describing the new limits. A value ofRLIM_INFINITY\ncan be used to request a limit that is unlimited.Raises\nValueError\nif an invalid resource is specified, if the new soft limit exceeds the hard limit, or if a process tries to raise its hard limit. Specifying a limit ofRLIM_INFINITY\nwhen the hard or system limit for that resource is not unlimited will result in aValueError\n. A process with the effective UID of super-user can request any valid limit value, including unlimited, butValueError\nwill still be raised if the requested limit exceeds the system imposed limit.setrlimit\nmay also raiseerror\nif the underlying system call fails.VxWorks only supports setting\nRLIMIT_NOFILE\n.Raises an auditing event\nresource.setrlimit\nwith argumentsresource\n,limits\n.\n- resource.prlimit(pid, resource[, limits])\u00b6\nCombines\nsetrlimit()\nandgetrlimit()\nin one function and supports to get and set the resources limits of an arbitrary process. 
If pid is 0, then the call applies to the current process. resource and limits have the same meaning as insetrlimit()\n, except that limits is optional.When limits is not given the function returns the resource limit of the process pid. When limits is given the resource limit of the process is set and the former resource limit is returned.\nRaises\nProcessLookupError\nwhen pid can\u2019t be found andPermissionError\nwhen the user doesn\u2019t haveCAP_SYS_RESOURCE\nfor the process.Raises an auditing event\nresource.prlimit\nwith argumentspid\n,resource\n,limits\n.Availability: Linux >= 2.6.36 with glibc >= 2.13.\nAdded in version 3.4.\nThese symbols define resources whose consumption can be controlled using the\nsetrlimit()\nand getrlimit()\nfunctions described below. The values of\nthese symbols are exactly the constants used by C programs.\nThe Unix man page for getrlimit(2) lists the available resources. Note that not all systems use the same symbol or same value to denote the same resource. This module does not attempt to mask platform differences \u2014 symbols not defined for a platform will not be available from this module on that platform.\n- resource.RLIMIT_CORE\u00b6\nThe maximum size (in bytes) of a core file that the current process can create. This may result in the creation of a partial core file if a larger core would be required to contain the entire process image.\n- resource.RLIMIT_CPU\u00b6\nThe maximum amount of processor time (in seconds) that a process can use. If this limit is exceeded, a\nSIGXCPU\nsignal is sent to the process. (See thesignal\nmodule documentation for information about how to catch this signal and do something useful, e.g. flush open files to disk.)\n- resource.RLIMIT_FSIZE\u00b6\nThe maximum size of a file which the process may create.\n- resource.RLIMIT_DATA\u00b6\nThe maximum size (in bytes) of the process\u2019s heap.\n- resource.RLIMIT_STACK\u00b6\nThe maximum size (in bytes) of the call stack for the current process. 
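A minimal sketch of reading and re-applying a limit, assuming a Unix platform (the module is unavailable on Windows and WASI). Re-applying the current values is a no-op, so it is safe to run:

```python
import resource

# Read the current (soft, hard) limits on open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# A process may lower its soft limit, or raise it back up to the hard
# limit; re-applying the current values changes nothing.
resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
```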
This only affects the stack of the main thread in a multi-threaded process.\n- resource.RLIMIT_RSS\u00b6\nThe maximum resident set size that should be made available to the process.\n- resource.RLIMIT_NPROC\u00b6\nThe maximum number of processes the current process may create.\n- resource.RLIMIT_NOFILE\u00b6\nThe maximum number of open file descriptors for the current process.\n- resource.RLIMIT_OFILE\u00b6\nThe BSD name for\nRLIMIT_NOFILE\n.\n- resource.RLIMIT_MEMLOCK\u00b6\nThe maximum address space which may be locked in memory.\n- resource.RLIMIT_VMEM\u00b6\nThe largest area of mapped memory which the process may occupy. Usually an alias of\nRLIMIT_AS\n.Availability: Solaris, FreeBSD, NetBSD.\n- resource.RLIMIT_AS\u00b6\nThe maximum area (in bytes) of address space which may be taken by the process.\n- resource.RLIMIT_MSGQUEUE\u00b6\nThe number of bytes that can be allocated for POSIX message queues.\nAvailability: Linux >= 2.6.8.\nAdded in version 3.4.\n- resource.RLIMIT_NICE\u00b6\nThe ceiling for the process\u2019s nice level (calculated as 20 - rlim_cur).\nAvailability: Linux >= 2.6.12.\nAdded in version 3.4.\n- resource.RLIMIT_RTPRIO\u00b6\nThe ceiling of the real-time priority.\nAvailability: Linux >= 2.6.12.\nAdded in version 3.4.\n- resource.RLIMIT_RTTIME\u00b6\nThe time limit (in microseconds) on CPU time that a process can spend under real-time scheduling without making a blocking syscall.\nAvailability: Linux >= 2.6.25.\nAdded in version 3.4.\n- resource.RLIMIT_SIGPENDING\u00b6\nThe number of signals which the process may queue.\nAvailability: Linux >= 2.6.8.\nAdded in version 3.4.\n- resource.RLIMIT_SBSIZE\u00b6\nThe maximum size (in bytes) of socket buffer usage for this user. 
This limits the amount of network memory, and hence the amount of mbufs, that this user may hold at any time.\nAvailability: FreeBSD, NetBSD.\nAdded in version 3.4.\n- resource.RLIMIT_SWAP\u00b6\nThe maximum size (in bytes) of the swap space that may be reserved or used by all of this user id\u2019s processes. This limit is enforced only if bit 1 of the vm.overcommit sysctl is set. Please see tuning(7) for a complete description of this sysctl.\nAvailability: FreeBSD >= 8.\nAdded in version 3.4.\n- resource.RLIMIT_NPTS\u00b6\nThe maximum number of pseudo-terminals created by this user id.\nAvailability: FreeBSD >= 8.\nAdded in version 3.4.\n- resource.RLIMIT_KQUEUES\u00b6\nThe maximum number of kqueues this user id is allowed to create.\nAvailability: FreeBSD >= 11.\nAdded in version 3.10.\nResource Usage\u00b6\nThese functions are used to retrieve resource usage information:\n- resource.getrusage(who)\u00b6\nThis function returns an object that describes the resources consumed by either the current process or its children, as specified by the who parameter. The who parameter should be specified using one of the\nRUSAGE_*\nconstants described below.A simple example:\nfrom resource import * import time # a non CPU-bound task time.sleep(3) print(getrusage(RUSAGE_SELF)) # a CPU-bound task for i in range(10 ** 8): _ = 1 + 1 print(getrusage(RUSAGE_SELF))\nThe fields of the return value each describe how a particular system resource has been used, e.g. amount of time spent running in user mode or number of times the process was swapped out of main memory. Some values are dependent on the clock tick interval, e.g. the amount of memory the process is using.\nFor backward compatibility, the return value is also accessible as a tuple of 16 elements.\nThe fields\nru_utime\nandru_stime\nof the return value are floating-point values representing the amount of time spent executing in user mode and the amount of time spent executing in system mode, respectively. 
The remaining values are integers. Consult the getrusage(2) man page for detailed information about these values. A brief summary is presented here:Index\nField\nResource\n0\nru_utime\ntime in user mode (float seconds)\n1\nru_stime\ntime in system mode (float seconds)\n2\nru_maxrss\nmaximum resident set size\n3\nru_ixrss\nshared memory size\n4\nru_idrss\nunshared memory size\n5\nru_isrss\nunshared stack size\n6\nru_minflt\npage faults not requiring I/O\n7\nru_majflt\npage faults requiring I/O\n8\nru_nswap\nnumber of swap outs\n9\nru_inblock\nblock input operations\n10\nru_oublock\nblock output operations\n11\nru_msgsnd\nmessages sent\n12\nru_msgrcv\nmessages received\n13\nru_nsignals\nsignals received\n14\nru_nvcsw\nvoluntary context switches\n15\nru_nivcsw\ninvoluntary context switches\nThis function will raise a\nValueError\nif an invalid who parameter is specified. It may also raiseerror\nexception in unusual circumstances.\n- resource.getpagesize()\u00b6\nReturns the number of bytes in a system page. (This need not be the same as the hardware page size.)\nThe following RUSAGE_*\nsymbols are passed to the getrusage()\nfunction to specify which processes information should be provided for.\n- resource.RUSAGE_SELF\u00b6\nPass to\ngetrusage()\nto request resources consumed by the calling process, which is the sum of resources used by all threads in the process.\n- resource.RUSAGE_CHILDREN\u00b6\nPass to\ngetrusage()\nto request resources consumed by child processes of the calling process which have been terminated and waited for.\n- resource.RUSAGE_BOTH\u00b6\nPass to\ngetrusage()\nto request resources consumed by both the current process and child processes. May not be available on all systems.\n- resource.RUSAGE_THREAD\u00b6\nPass to\ngetrusage()\nto request resources consumed by the current thread. 
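A short sketch of `getrusage()` on a Unix platform, showing that the named fields and the 16-element tuple view refer to the same data:

```python
import resource

# Resource usage for the calling process (all of its threads combined).
usage = resource.getrusage(resource.RUSAGE_SELF)

# ru_utime is index 0 in the tuple view; ru_maxrss is index 2.
user_time = usage.ru_utime
max_rss = usage.ru_maxrss

# getpagesize(): bytes per system page.
page = resource.getpagesize()
```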
May not be available on all systems.Added in version 3.2.", "code_snippets": [" ", "\n", "\n\n", "\n", "\n", "\n\n", "\n", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 2456} +{"url": "https://docs.python.org/3/library/asyncio-api-index.html", "title": "High-level API Index", "content": "High-level API Index\u00b6\nThis page lists all high-level async/await enabled asyncio APIs.\nTasks\u00b6\nUtilities to run asyncio programs, create Tasks, and await on multiple things with timeouts.\nCreate event loop, run a coroutine, close the loop. |\n|\nA context manager that simplifies multiple async function calls. |\n|\nTask object. |\n|\nA context manager that holds a group of tasks. Provides a convenient and reliable way to wait for all tasks in the group to finish. |\n|\nStart an asyncio Task, then returns it. |\n|\nReturn the current Task. |\n|\nReturn all tasks that are not yet finished for an event loop. |\n|\n|\nSleep for a number of seconds. |\n|\nSchedule and wait for things concurrently. |\n|\nRun with a timeout. |\n|\nShield from cancellation. |\n|\nMonitor for completion. |\nRun with a timeout. Useful in cases when |\n|\nAsynchronously run a function in a separate OS thread. |\n|\nSchedule a coroutine from another OS thread. |\n|\n|\nMonitor for completion with a |\nExamples\nQueues\u00b6\nQueues should be used to distribute work amongst multiple asyncio Tasks, implement connection pools, and pub/sub patterns.\nA FIFO queue. |\n|\nA priority queue. |\n|\nA LIFO queue. |\nExamples\nSubprocesses\u00b6\nUtilities to spawn subprocesses and run shell commands.\n|\nCreate a subprocess. |\nRun a shell command. |\nExamples\nSee also the subprocess APIs documentation.\nStreams\u00b6\nHigh-level APIs to work with network IO.\n|\nEstablish a TCP connection. |\n|\nEstablish a Unix socket connection. |\n|\nStart a TCP server. |\n|\nStart a Unix socket server. 
|\nHigh-level async/await object to receive network data. |\n|\nHigh-level async/await object to send network data. |\nExamples\nSee also the streams APIs documentation.\nSynchronization\u00b6\nThreading-like synchronization primitives that can be used in Tasks.\nA mutex lock. |\n|\nAn event object. |\n|\nA condition object. |\n|\nA semaphore. |\n|\nA bounded semaphore. |\n|\nA barrier object. |\nExamples\nSee also the documentation of asyncio synchronization primitives.\nExceptions\u00b6\nRaised when a Task is cancelled. See also |\n|\nRaised when a Barrier is broken. See also |\nExamples\nHandling CancelledError to run code on cancellation request.\nSee also the full list of asyncio-specific exceptions.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 546} +{"url": "https://docs.python.org/3/whatsnew/3.4.html", "title": "What\u2019s New In Python 3.4", "content": "What\u2019s New In Python 3.4\u00b6\n- Author:\nR. David Murray (Editor)\nThis article explains the new features in Python 3.4, compared to 3.3. Python 3.4 was released on March 16, 2014. For full details, see the changelog.\nSee also\nPEP 429 \u2013 Python 3.4 Release Schedule\nSummary \u2013 Release Highlights\u00b6\nNew syntax features:\nNo new syntax features were added in Python 3.4.\nOther new features:\nNewly created file descriptors are non-inheritable (PEP 446).\ncommand line option for isolated mode (bpo-16499).\nimprovements in the handling of codecs that are not text encodings (multiple issues).\nA ModuleSpec Type for the Import System (PEP 451). 
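Several of the task utilities indexed above compose naturally. A short sketch using asyncio.run(), asyncio.gather(), and asyncio.sleep() from the high-level API:

```python
import asyncio

async def work(name: str, delay: float) -> str:
    # Sleep for a number of seconds, then report completion.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list[str]:
    # Schedule and wait for things concurrently; results keep argument order.
    return await asyncio.gather(work("a", 0.01), work("b", 0.02))

# Create the event loop, run a coroutine, close the loop.
results = asyncio.run(main())
print(results)  # ['a done', 'b done']
```

On Python 3.11+, an asyncio.TaskGroup context manager is the recommended structured alternative to bare gather() for spawning related tasks.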
(Affects importer authors.)\nThe\nmarshal\nformat has been made more compact and efficient (bpo-16475).\nNew library modules:\nasyncio\n: New provisional API for asynchronous IO (PEP 3156).\nselectors\n: High-level and efficient I/O multiplexing, built upon the\nselect\nmodule primitives (part of PEP 3156).\nstatistics\n: A basic numerically stable statistics library (PEP 450).\nSignificantly improved library modules:\nNew\npickle\nprotocol 4 (PEP 3154).\nmultiprocessing\nnow has an option to avoid using os.fork on Unix (bpo-8713).\nemail\nhas a new submodule,\ncontentmanager\n, and a new\nMessage\nsubclass (\nEmailMessage\n) that simplify MIME handling (bpo-18891).\nThe\ninspect\nand\npydoc\nmodules are now capable of correct introspection of a much wider variety of callable objects, which improves the output of the Python\nhelp()\nsystem.\nThe\nipaddress\nmodule API has been declared stable.\nSecurity improvements:\nMake newly created file descriptors non-inheritable (PEP 446) to avoid leaking file descriptors to child processes.\nNew command line option for isolated mode (bpo-16499).\nmultiprocessing\nnow has an option to avoid using os.fork on Unix.
spawn and forkserver are more secure because they avoid sharing data with child processes.\nmultiprocessing\nchild processes on Windows no longer inherit all of the parent\u2019s inheritable handles, only the necessary ones.\nA new\nhashlib.pbkdf2_hmac()\nfunction provides the PKCS#5 password-based key derivation function 2.\nThe\nssl\nmodule now supports retrieving certificates from the Windows system cert store.\nThe\nssl.SSLContext\nclass has a lot of improvements.\nAll modules in the standard library that support SSL now support server certificate verification, including hostname matching (\nssl.match_hostname()\n) and CRLs (Certificate Revocation lists, see\nssl.SSLContext.load_verify_locations()\n).\nCPython implementation improvements:\nLeveraging PEP 442, in most cases module globals are no longer set to None during finalization (bpo-18214).\nPlease read on for a comprehensive list of user-facing changes, including many other smaller improvements, CPython optimizations, deprecations, and potential porting issues.\nNew Features\u00b6\nPEP 453: Explicit Bootstrapping of PIP in Python Installations\u00b6\nBootstrapping pip By Default\u00b6\nThe new ensurepip\nmodule (defined in PEP 453) provides a standard\ncross-platform mechanism to bootstrap the pip installer into Python\ninstallations and virtual environments. The version of pip\nincluded\nwith Python 3.4.0 is pip\n1.5.4, and future 3.4.x maintenance releases\nwill update the bundled version to the latest version of pip\nthat is\navailable at the time of creating the release candidate.\nBy default, the commands pipX\nand pipX.Y\nwill be installed on all\nplatforms (where X.Y stands for the version of the Python installation),\nalong with the pip\nPython package and its dependencies. On Windows and\nin virtual environments on all platforms, the unversioned pip\ncommand\nwill also be installed.
On other platforms, the system wide unversioned\npip\ncommand typically refers to the separately installed Python 2\nversion.\nThe pyvenv\ncommand line utility and the venv\nmodule make use of the ensurepip\nmodule to make pip\nreadily\navailable in virtual environments. When using the command line utility,\npip\nis installed by default, while when using the venv\nmodule\nAPI installation of pip\nmust be requested explicitly.\nFor CPython source builds on POSIX systems,\nthe make install\nand make altinstall\ncommands bootstrap pip\nby\ndefault. This behaviour can be controlled through configure options, and\noverridden through Makefile options.\nOn Windows and Mac OS X, the CPython installers now default to installing\npip\nalong with CPython itself (users may opt out of installing it\nduring the installation process). Windows users will need to opt in to the\nautomatic PATH\nmodifications to have pip\navailable from the command\nline by default, otherwise it can still be accessed through the Python\nlauncher for Windows as py -m pip\n.\nAs discussed in the PEP, platform packagers may choose not to install these commands by default, as long as, when invoked, they provide clear and simple directions on how to install them on that platform (usually using the system package manager).\nNote\nTo avoid conflicts between parallel Python 2 and Python 3 installations,\nonly the versioned pip3\nand pip3.4\ncommands are bootstrapped by\ndefault when ensurepip\nis invoked directly - the --default-pip\noption is needed to also request the unversioned pip\ncommand.\npyvenv\nand the Windows installer ensure that the unqualified pip\ncommand is made available in those environments, and pip\ncan always be\ninvoked via the -m\nswitch rather than directly to avoid ambiguity on\nsystems with multiple Python installations.\nDocumentation Changes\u00b6\nAs part of this change, the Installing Python Modules and Distributing Python Modules sections of the documentation have been completely
redesigned as short getting started and FAQ documents. Most packaging documentation has now been moved out to the Python Packaging Authority maintained Python Packaging User Guide and the documentation of the individual projects.\nHowever, as this migration is currently still incomplete, the legacy versions of those guides remain available as Building C and C++ Extensions with setuptools.\nSee also\n- PEP 453 \u2013 Explicit bootstrapping of pip in Python installations\nPEP written by Donald Stufft and Nick Coghlan, implemented by Donald Stufft, Nick Coghlan, Martin von L\u00f6wis and Ned Deily.\nPEP 446: Newly Created File Descriptors Are Non-Inheritable\u00b6\nPEP 446 makes newly created file descriptors non-inheritable. In general, this is the behavior an application will want: when launching a new process, having currently open files also open in the new process can lead to all sorts of hard to find bugs, and potentially to security issues.\nHowever, there are occasions when inheritance is desired. To support these cases, the following new functions and methods are available:\nSee also\n- PEP 446 \u2013 Make newly created file descriptors non-inheritable\nPEP written and implemented by Victor Stinner.\nImprovements to Codec Handling\u00b6\nSince it was first introduced, the codecs\nmodule has always been\nintended to operate as a type-neutral dynamic encoding and decoding\nsystem. However, its close coupling with the Python text model, especially\nthe type restricted convenience methods on the builtin str\n,\nbytes\nand bytearray\ntypes, has historically obscured that\nfact.\nAs a key step in clarifying the situation, the codecs.encode()\nand\ncodecs.decode()\nconvenience functions are now properly documented in\nPython 2.7, 3.3 and 3.4.
These functions have existed in the codecs\nmodule (and have been covered by the regression test suite) since Python 2.4,\nbut were previously only discoverable through runtime introspection.\nUnlike the convenience methods on str\n, bytes\nand\nbytearray\n, the codecs\nconvenience functions support arbitrary\ncodecs in both Python 2 and Python 3, rather than being limited to Unicode text\nencodings (in Python 3) or basestring\n<-> basestring\nconversions (in\nPython 2).\nIn Python 3.4, the interpreter is able to identify the known non-text encodings provided in the standard library and direct users towards these general purpose convenience functions when appropriate:\n>>> b\"abcdef\".decode(\"hex\")\nTraceback (most recent call last):\nFile \"\", line 1, in \nLookupError: 'hex' is not a text encoding; use codecs.decode() to handle arbitrary codecs\n>>> \"hello\".encode(\"rot13\")\nTraceback (most recent call last):\nFile \"\", line 1, in \nLookupError: 'rot13' is not a text encoding; use codecs.encode() to handle arbitrary codecs\n>>> open(\"foo.txt\", encoding=\"hex\")\nTraceback (most recent call last):\nFile \"\", line 1, in \nLookupError: 'hex' is not a text encoding; use codecs.open() to handle arbitrary codecs\nIn a related change, whenever it is feasible without breaking backwards compatibility, exceptions raised during encoding and decoding operations are wrapped in a chained exception of the same type that mentions the name of the codec responsible for producing the error:\n>>> import codecs\n>>> codecs.decode(b\"abcdefgh\", \"hex\")\nTraceback (most recent call last):\nFile \"/usr/lib/python3.4/encodings/hex_codec.py\", line 20, in hex_decode\nreturn (binascii.a2b_hex(input), len(input))\nbinascii.Error: Non-hexadecimal digit found\nThe above exception was the direct cause of the following exception:\nTraceback (most recent call last):\nFile \"\", line 1, in \nbinascii.Error: decoding with 'hex' codec failed (Error: Non-hexadecimal digit found)\n>>> 
codecs.encode(\"hello\", \"bz2\")\nTraceback (most recent call last):\nFile \"/usr/lib/python3.4/encodings/bz2_codec.py\", line 17, in bz2_encode\nreturn (bz2.compress(input), len(input))\nFile \"/usr/lib/python3.4/bz2.py\", line 498, in compress\nreturn comp.compress(data) + comp.flush()\nTypeError: 'str' does not support the buffer interface\nThe above exception was the direct cause of the following exception:\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: encoding with 'bz2' codec failed (TypeError: 'str' does not support the buffer interface)\nFinally, as the examples above show, these improvements have permitted the restoration of the convenience aliases for the non-Unicode codecs that were themselves restored in Python 3.2. This means that encoding binary data to and from its hexadecimal representation (for example) can now be written as:\n>>> from codecs import encode, decode\n>>> encode(b\"hello\", \"hex\")\nb'68656c6c6f'\n>>> decode(b\"68656c6c6f\", \"hex\")\nb'hello'\nThe binary and text transforms provided in the standard library are detailed in Binary Transforms and Text Transforms.\n(Contributed by Nick Coghlan in bpo-7475, bpo-17827, bpo-17828 and bpo-19619.)\nPEP 451: A ModuleSpec Type for the Import System\u00b6\nPEP 451 provides an encapsulation of the information about a module that the import machinery will use to load it (that is, a module specification). This helps simplify both the import implementation and several import-related APIs. The change is also a stepping stone for several future import-related improvements.\nThe public-facing changes from the PEP are entirely backward-compatible. Furthermore, they should be transparent to everyone but importer authors. Key finder and loader methods have been deprecated, but they will continue working. New importers should use the new methods described in the PEP. Existing importers should be updated to implement the new methods. 
See the Deprecated section for a list of methods that should be replaced and their replacements.\nOther Language Changes\u00b6\nSome smaller changes made to the core Python language are:\nUnicode database updated to UCD version 6.3.\nmin()\nandmax()\nnow accept a default keyword-only argument that can be used to specify the value they return if the iterable they are evaluating has no elements. (Contributed by Julian Berman in bpo-18111.)Module objects are now weakly referenceable.\nModule\n__file__\nattributes (and related values) should now always contain absolute paths by default, with the sole exception of__main__.__file__\nwhen a script has been executed directly using a relative path. (Contributed by Brett Cannon in bpo-18416.)All the UTF-* codecs (except UTF-7) now reject surrogates during both encoding and decoding unless the\nsurrogatepass\nerror handler is used, with the exception of the UTF-16 decoder (which accepts valid surrogate pairs) and the UTF-16 encoder (which produces them while encoding non-BMP characters). (Contributed by Victor Stinner, Kang-Hao (Kenny) Lu and Serhiy Storchaka in bpo-12892.)New German EBCDIC codec\ncp273\n. (Contributed by Michael Bierenfeld and Andrew Kuchling in bpo-1097797.)New Ukrainian codec\ncp1125\n. (Contributed by Serhiy Storchaka in bpo-19668.)bytes\n.join() andbytearray\n.join() now accept arbitrary buffer objects as arguments. (Contributed by Antoine Pitrou in bpo-15958.)The\nint\nconstructor now accepts any object that has an__index__\nmethod for its base argument. (Contributed by Mark Dickinson in bpo-16772.)Frame objects now have a\nclear()\nmethod that clears all references to local variables from the frame. (Contributed by Antoine Pitrou in bpo-17934.)memoryview\nis now registered as aSequence\n, and supports thereversed()\nbuiltin. 
(Contributed by Nick Coghlan and Claudiu Popa in bpo-18690 and bpo-19078.)Signatures reported by\nhelp()\nhave been modified and improved in several cases as a result of the introduction of Argument Clinic and other changes to theinspect\nandpydoc\nmodules.__length_hint__()\nis now part of the formal language specification (see PEP 424). (Contributed by Armin Ronacher in bpo-16148.)\nNew Modules\u00b6\nasyncio\u00b6\nThe new asyncio\nmodule (defined in PEP 3156) provides a standard\npluggable event loop model for Python, providing solid asynchronous IO\nsupport in the standard library, and making it easier for other event loop\nimplementations to interoperate with the standard library and each other.\nFor Python 3.4, this module is considered a provisional API.\nSee also\n- PEP 3156 \u2013 Asynchronous IO Support Rebooted: the \u201casyncio\u201d Module\nPEP written and implementation led by Guido van Rossum.\nensurepip\u00b6\nThe new ensurepip\nmodule is the primary infrastructure for the\nPEP 453 implementation. In the normal course of events end users will not\nneed to interact with this module, but it can be used to manually bootstrap\npip\nif the automated bootstrapping into an installation or virtual\nenvironment was declined.\nensurepip\nincludes a bundled copy of pip\n, up-to-date as of the first\nrelease candidate of the release of CPython with which it ships (this applies\nto both maintenance releases and feature releases). ensurepip\ndoes not\naccess the internet. If the installation has internet access, after\nensurepip\nis run the bundled pip\ncan be used to upgrade pip\nto a\nmore recent release than the bundled one. (Note that such an upgraded version\nof pip\nis considered to be a separately installed package and will not be\nremoved if Python is uninstalled.)\nThe module is named ensurepip because if called when pip\nis already\ninstalled, it does nothing. 
It also has an --upgrade\noption that will\ncause it to install the bundled copy of pip\nif the existing installed\nversion of pip\nis older than the bundled copy.\nenum\u00b6\nThe new enum\nmodule (defined in PEP 435) provides a standard\nimplementation of enumeration types, allowing other modules (such as\nsocket\n) to provide more informative error messages and better\ndebugging support by replacing opaque integer constants with backwards\ncompatible enumeration values.\nSee also\n- PEP 435 \u2013 Adding an Enum type to the Python standard library\nPEP written by Barry Warsaw, Eli Bendersky and Ethan Furman, implemented by Ethan Furman.\npathlib\u00b6\nThe new pathlib\nmodule offers classes representing filesystem paths\nwith semantics appropriate for different operating systems. Path classes are\ndivided between pure paths, which provide purely computational operations\nwithout I/O, and concrete paths, which inherit from pure paths but also\nprovide I/O operations.\nFor Python 3.4, this module is considered a provisional API.\nSee also\n- PEP 428 \u2013 The pathlib module \u2013 object-oriented filesystem paths\nPEP written and implemented by Antoine Pitrou.\nselectors\u00b6\nThe new selectors\nmodule (created as part of implementing PEP 3156)\nallows high-level and efficient I/O multiplexing, built upon the\nselect\nmodule primitives.\nstatistics\u00b6\nThe new statistics\nmodule (defined in PEP 450) offers some core\nstatistics functionality directly in the standard library. This module\nsupports calculation of the mean, median, mode, variance and standard\ndeviation of a data series.\nSee also\n- PEP 450 \u2013 Adding A Statistics Module To The Standard Library\nPEP written and implemented by Steven D\u2019Aprano\ntracemalloc\u00b6\nThe new tracemalloc\nmodule (defined in PEP 454) is a debug tool to\ntrace memory blocks allocated by Python. 
It provides the following information:\nTrace where an object was allocated\nStatistics on allocated memory blocks per filename and per line number: total size, number and average size of allocated memory blocks\nCompute the differences between two snapshots to detect memory leaks\nSee also\n- PEP 454 \u2013 Add a new tracemalloc module to trace Python memory allocations\nPEP written and implemented by Victor Stinner\nImproved Modules\u00b6\nabc\u00b6\nNew function abc.get_cache_token()\ncan be used to know when to invalidate\ncaches that are affected by changes in the object graph. (Contributed\nby \u0141ukasz Langa in bpo-16832.)\nNew class ABC\nhas ABCMeta\nas its metaclass.\nUsing ABC\nas a base class has essentially the same effect as specifying\nmetaclass=abc.ABCMeta\n, but is simpler to type and easier to read.\n(Contributed by Bruno Dupuis in bpo-16049.)\naifc\u00b6\nThe getparams()\nmethod now returns a namedtuple rather than a\nplain tuple. (Contributed by Claudiu Popa in bpo-17818.)\naifc.open()\nnow supports the context management protocol: when used in a\nwith\nblock, the close()\nmethod of the returned\nobject will be called automatically at the end of the block. (Contributed by\nSerhiy Storchaka in bpo-16486.)\nThe writeframesraw()\nand writeframes()\nmethods now accept any bytes-like object. (Contributed by Serhiy\nStorchaka in bpo-8311.)\nargparse\u00b6\nThe FileType\nclass now accepts encoding and\nerrors arguments, which are passed through to open()\n. (Contributed\nby Lucas Maystre in bpo-11175.)\naudioop\u00b6\naudioop\nnow supports 24-bit samples. (Contributed by Serhiy Storchaka\nin bpo-12866.)\nNew byteswap()\nfunction converts big-endian samples to\nlittle-endian and vice versa. (Contributed by Serhiy Storchaka in\nbpo-19641.)\nAll audioop\nfunctions now accept any bytes-like object.
Strings\nare not accepted: they didn\u2019t work before, now they raise an error right away.\n(Contributed by Serhiy Storchaka in bpo-16685.)\nbase64\u00b6\nThe encoding and decoding functions in base64\nnow accept any\nbytes-like object in cases where it previously required a\nbytes\nor bytearray\ninstance. (Contributed by Nick Coghlan in\nbpo-17839.)\nNew functions a85encode()\n, a85decode()\n,\nb85encode()\n, and b85decode()\nprovide the ability to\nencode and decode binary data from and to Ascii85\nand the git/mercurial\nBase85\nformats, respectively. The a85\nfunctions have options that can\nbe used to make them compatible with the variants of the Ascii85\nencoding,\nincluding the Adobe variant. (Contributed by Martin Morrison, the Mercurial\nproject, Serhiy Storchaka, and Antoine Pitrou in bpo-17618.)\ncollections\u00b6\nThe ChainMap.new_child()\nmethod now accepts an m argument specifying\nthe child map to add to the chain. This allows an existing mapping and/or a\ncustom mapping type to be used for the child. (Contributed by Vinay Sajip in\nbpo-16613.)\ncolorsys\u00b6\nThe number of digits in the coefficients for the RGB \u2014 YIQ conversions have been expanded so that they match the FCC NTSC versions. The change in results should be less than 1% and may better match results found elsewhere. (Contributed by Brian Landers and Serhiy Storchaka in bpo-14323.)\ncontextlib\u00b6\nThe new contextlib.suppress\ncontext manager helps to clarify the\nintent of code that deliberately suppresses exceptions from a single\nstatement. (Contributed by Raymond Hettinger in bpo-15806 and\nZero Piraeus in bpo-19266.)\nThe new contextlib.redirect_stdout()\ncontext manager makes it easier\nfor utility scripts to handle inflexible APIs that write their output to\nsys.stdout\nand don\u2019t provide any options to redirect it. 
Using the\ncontext manager, the sys.stdout\noutput can be redirected to any\nother stream or, in conjunction with io.StringIO\n, to a string.\nThe latter can be especially useful, for example, to capture output\nfrom a function that was written to implement a command line interface.\nIt is recommended only for utility scripts because it affects the\nglobal state of sys.stdout\n. (Contributed by Raymond Hettinger\nin bpo-15805.)\nThe contextlib\ndocumentation has also been updated to include a\ndiscussion of the\ndifferences between single use, reusable and reentrant context managers.\ndbm\u00b6\ndbm.open()\nobjects now support the context management protocol. When\nused in a with\nstatement, the close\nmethod of the database\nobject will be called automatically at the end of the block. (Contributed by\nClaudiu Popa and Nick Coghlan in bpo-19282.)\ndis\u00b6\nFunctions show_code()\n, dis()\n, distb()\n, and\ndisassemble()\nnow accept a keyword-only file argument that\ncontrols where they write their output.\nThe dis\nmodule is now built around an Instruction\nclass\nthat provides object oriented access to the details of each individual bytecode\noperation.\nA new method, get_instructions()\n, provides an iterator that emits\nthe Instruction stream for a given piece of Python code. Thus it is now\npossible to write a program that inspects and manipulates a bytecode\nobject in ways different from those provided by the dis\nmodule\nitself. For example:\n>>> import dis\n>>> for instr in dis.get_instructions(lambda x: x + 1):\n... print(instr.opname)\nLOAD_FAST\nLOAD_CONST\nBINARY_ADD\nRETURN_VALUE\nThe various display tools in the dis\nmodule have been rewritten to use\nthese new components.\nIn addition, a new application-friendly class Bytecode\nprovides\nan object-oriented API for inspecting bytecode in both in human-readable form\nand for iterating over instructions. 
The Bytecode\nconstructor\ntakes the same arguments that get_instructions()\ndoes (plus an\noptional current_offset), and the resulting object can be iterated to produce\nInstruction\nobjects. But it also has a dis\nmethod, equivalent to calling dis\non the constructor argument, but\nreturned as a multi-line string:\n>>> bytecode = dis.Bytecode(lambda x: x + 1, current_offset=3)\n>>> for instr in bytecode:\n... print('{} ({})'.format(instr.opname, instr.opcode))\nLOAD_FAST (124)\nLOAD_CONST (100)\nBINARY_ADD (23)\nRETURN_VALUE (83)\n>>> bytecode.dis().splitlines()\n[' 1 0 LOAD_FAST 0 (x)',\n' --> 3 LOAD_CONST 1 (1)',\n' 6 BINARY_ADD',\n' 7 RETURN_VALUE']\nBytecode\nalso has a class method,\nfrom_traceback()\n, that provides the ability to manipulate a\ntraceback (that is, print(Bytecode.from_traceback(tb).dis())\nis equivalent\nto distb(tb)\n).\n(Contributed by Nick Coghlan, Ryan Kelly and Thomas Kluyver in bpo-11816 and Claudiu Popa in bpo-17916.)\nNew function stack_effect()\ncomputes the effect on the Python stack\nof a given opcode and argument, information that is not otherwise available.\n(Contributed by Larry Hastings in bpo-19722.)\ndoctest\u00b6\nA new option flag, FAIL_FAST\n, halts\ntest running as soon as the first failure is detected. (Contributed by R.\nDavid Murray and Daniel Urban in bpo-16522.)\nThe doctest\ncommand line interface now uses argparse\n, and has two\nnew options, -o\nand -f\n. -o\nallows doctest options to be specified on the command line, and -f\nis a\nshorthand for -o FAIL_FAST\n(to parallel the similar option supported by the\nunittest\nCLI). (Contributed by R. David Murray in bpo-11390.)\ndoctest\nwill now find doctests in extension module __doc__\nstrings.\n(Contributed by Zachary Ware in bpo-3158.)\nemail\u00b6\nas_string()\nnow accepts a policy argument to\noverride the default policy of the message when generating a string\nrepresentation of it. 
This means that as_string\ncan now be used in more\ncircumstances, instead of having to create and use a generator\nin\norder to pass formatting parameters to its flatten\nmethod. (Contributed by\nR. David Murray in bpo-18600.)\nNew method as_bytes()\nadded to produce a bytes\nrepresentation of the message in a fashion similar to how as_string\nproduces a string representation. It does not accept the maxheaderlen\nargument, but does accept the unixfrom and policy arguments. The\nMessage\n__bytes__()\nmethod\ncalls it, meaning that bytes(mymsg)\nwill now produce the intuitive\nresult: a bytes object containing the fully formatted message. (Contributed\nby R. David Murray in bpo-18600.)\nThe Message.set_param()\nmessage now accepts a replace keyword argument.\nWhen specified, the associated header will be updated without changing\nits location in the list of headers. For backward compatibility, the default\nis False\n. (Contributed by R. David Murray in bpo-18891.)\nA pair of new subclasses of Message\nhave been added\n(EmailMessage\nand MIMEPart\n), along with a new sub-module,\ncontentmanager\nand a new policy\nattribute\ncontent_manager\n. All documentation is\ncurrently in the new module, which is being added as part of email\u2019s new\nprovisional API. These classes provide a number of new methods that\nmake extracting content from and inserting content into email messages much\neasier. For details, see the contentmanager\ndocumentation and\nthe email: Examples. These API additions complete the\nbulk of the work that was planned as part of the email6 project. The currently\nprovisional API is scheduled to become final in Python 3.5 (possibly with a few\nminor additions in the area of error handling). (Contributed by R. David\nMurray in bpo-18891.)\nfilecmp\u00b6\nA new clear_cache()\nfunction provides the ability to clear the\nfilecmp\ncomparison cache, which uses os.stat()\ninformation to\ndetermine if the file has changed since the last compare. 
This can be used,\nfor example, if the file might have been changed and re-checked in less time\nthan the resolution of a particular filesystem\u2019s file modification time field.\n(Contributed by Mark Levitt in bpo-18149.)\nNew module attribute DEFAULT_IGNORES\nprovides the list of\ndirectories that are used as the default value for the ignore parameter of\nthe dircmp()\nfunction. (Contributed by Eli Bendersky in\nbpo-15442.)\nfunctools\u00b6\nThe new partialmethod()\ndescriptor brings partial argument\napplication to descriptors, just as partial()\nprovides\nfor normal callables. The new descriptor also makes it easier to get\narbitrary callables (including partial()\ninstances)\nto behave like normal instance methods when included in a class definition.\n(Contributed by Alon Horev and Nick Coghlan in bpo-4331.)\nThe new singledispatch()\ndecorator brings support for\nsingle-dispatch generic functions to the Python standard library. Where\nobject oriented programming focuses on grouping multiple operations on a\ncommon set of data into a class, a generic function focuses on grouping\nmultiple implementations of an operation that allows it to work with\ndifferent kinds of data.\nSee also\n- PEP 443 \u2013 Single-dispatch generic functions\nPEP written and implemented by \u0141ukasz Langa.\ntotal_ordering()\nnow supports a return value of\nNotImplemented\nfrom the underlying comparison function. (Contributed\nby Katie Miller in bpo-10042.)\nA pure-python version of the partial()\nfunction is now in the\nstdlib; in CPython it is overridden by the C accelerated version, but it is\navailable for other implementations to use. 
(Contributed by Brian Thorne in\nbpo-12428.)\ngc\u00b6\nNew function get_stats()\nreturns a list of three per-generation\ndictionaries containing the collections statistics since interpreter startup.\n(Contributed by Antoine Pitrou in bpo-16351.)\nglob\u00b6\nA new function escape()\nprovides a way to escape special characters\nin a filename so that they do not become part of the globbing expansion but are\ninstead matched literally. (Contributed by Serhiy Storchaka in bpo-8402.)\nhashlib\u00b6\nA new hashlib.pbkdf2_hmac()\nfunction provides\nthe PKCS#5 password-based key derivation function 2. (Contributed by Christian\nHeimes in bpo-18582.)\nThe name\nattribute of hashlib\nhash objects is now\na formally supported interface. It has always existed in CPython\u2019s\nhashlib\n(although it did not return lower case names for all supported\nhashes), but it was not a public interface and so some other Python\nimplementations have not previously supported it. (Contributed by Jason R.\nCoombs in bpo-18532.)\nhmac\u00b6\nhmac\nnow accepts bytearray\nas well as bytes\nfor the key\nargument to the new()\nfunction, and the msg parameter to both the\nnew()\nfunction and the update()\nmethod now\naccepts any type supported by the hashlib\nmodule. (Contributed\nby Jonas Borgstr\u00f6m in bpo-18240.)\nThe digestmod argument to the hmac.new()\nfunction may now be any hash\ndigest name recognized by hashlib\n. In addition, the current behavior in\nwhich the value of digestmod defaults to MD5\nis deprecated: in a\nfuture version of Python there will be no default value. (Contributed by\nChristian Heimes in bpo-17276.)\nWith the addition of block_size\nand name\nattributes (and the formal documentation of the digest_size\nattribute), the hmac\nmodule now conforms fully to the PEP 247 API.\n(Contributed by Christian Heimes in bpo-18775.)\nhtml\u00b6\nNew function unescape()\nfunction converts HTML5 character references to\nthe corresponding Unicode characters. 
(Contributed by Ezio Melotti in\nbpo-2927.)\nHTMLParser\naccepts a new keyword argument\nconvert_charrefs that, when True\n, automatically converts all character\nreferences. For backward-compatibility, its value defaults to False\n, but\nit will change to True\nin a future version of Python, so you are invited to\nset it explicitly and update your code to use this new feature. (Contributed\nby Ezio Melotti in bpo-13633.)\nThe strict argument of HTMLParser\nis now deprecated.\n(Contributed by Ezio Melotti in bpo-15114.)\nhttp\u00b6\nsend_error()\nnow accepts an\noptional additional explain parameter which can be used to provide an\nextended error description, overriding the hardcoded default if there is one.\nThis extended error description will be formatted using the\nerror_message_format\nattribute\nand sent as the body of the error response.\n(Contributed by Karl Cow in bpo-12921.)\nThe http.server\ncommand line interface now has\na -b/--bind\noption that causes the server to listen on a specific address.\n(Contributed by Malte Swart in bpo-17764.)\nidlelib and IDLE\u00b6\nSince idlelib implements the IDLE shell and editor and is not intended for\nimport by other programs, it gets improvements with every release. See\nLib/idlelib/NEWS.txt\nfor a cumulative list of changes since 3.3.0,\nas well as changes made in future 3.4.x releases. This file is also available\nfrom the IDLE dialog.\nimportlib\u00b6\nThe InspectLoader\nABC defines a new method,\nsource_to_code()\nthat accepts source\ndata and a path and returns a code object. The default implementation\nis equivalent to compile(data, path, 'exec', dont_inherit=True)\n.\n(Contributed by Eric Snow and Brett Cannon in bpo-15627.)\nInspectLoader\nalso now has a default implementation\nfor the get_code()\nmethod. However,\nit will normally be desirable to override the default implementation\nfor performance reasons. 
(Contributed by Brett Cannon in bpo-18072.)\nThe reload()\nfunction has been moved from imp\nto\nimportlib\nas part of the imp\nmodule deprecation. (Contributed by\nBerker Peksag in bpo-18193.)\nimportlib.util\nnow has a MAGIC_NUMBER\nattribute\nproviding access to the bytecode version number. This replaces the\nget_magic()\nfunction in the deprecated imp\nmodule.\n(Contributed by Brett Cannon in bpo-18192.)\nNew importlib.util\nfunctions cache_from_source()\nand source_from_cache()\nreplace the same-named functions\nin the deprecated imp\nmodule. (Contributed by Brett Cannon in\nbpo-18194.)\nThe importlib\nbootstrap NamespaceLoader\nnow conforms to\nthe InspectLoader\nABC, which means that runpy\nand\npython -m\ncan now be used with namespace packages. (Contributed\nby Brett Cannon in bpo-18058.)\nimportlib.util\nhas a new function decode_source()\nthat decodes source from bytes using universal newline processing. This is\nuseful for implementing InspectLoader.get_source()\nmethods.\nimportlib.machinery.ExtensionFileLoader\nnow has a\nget_filename()\nmethod. This was\ninadvertently omitted in the original implementation. (Contributed by Eric\nSnow in bpo-19152.)\ninspect\u00b6\nThe inspect\nmodule now offers a basic command line interface to quickly display source code and other\ninformation for modules, classes and functions. (Contributed by Claudiu Popa\nand Nick Coghlan in bpo-18626.)\nunwrap()\nmakes it easy to unravel wrapper function chains\ncreated by functools.wraps()\n(and any other API that sets the\n__wrapped__\nattribute on a wrapper function). (Contributed by\nDaniel Urban, Aaron Iles and Nick Coghlan in bpo-13266.)\nAs part of the implementation of the new enum\nmodule, the\ninspect\nmodule now has substantially better support for custom\n__dir__\nmethods and dynamic class attributes provided through\nmetaclasses. (Contributed by Ethan Furman in bpo-18929 and\nbpo-19030.)\ngetfullargspec()\nand getargspec()\nnow use the signature()\nAPI. 
This allows them to\nsupport a much broader range of callables, including those with\n__signature__\nattributes, those with metadata provided by argument\nclinic, functools.partial()\nobjects and more. Note that, unlike\nsignature()\n, these functions still ignore __wrapped__\nattributes, and report the already bound first argument for bound methods,\nso it is still necessary to update your code to use\nsignature()\ndirectly if those features are desired.\n(Contributed by Yury Selivanov in bpo-17481.)\nsignature()\nnow supports duck types of CPython functions,\nwhich adds support for functions compiled with Cython. (Contributed\nby Stefan Behnel and Yury Selivanov in bpo-17159.)\nipaddress\u00b6\nipaddress\nwas added to the standard library in Python 3.3 as a\nprovisional API. With the release of Python 3.4, this qualification\nhas been removed: ipaddress\nis now considered a stable API, covered\nby the normal standard library requirements to maintain backwards\ncompatibility.\nA new is_global\nproperty is True\nif\nan address is globally routeable. (Contributed by Peter Moody in\nbpo-17400.)\nlogging\u00b6\nThe TimedRotatingFileHandler\nhas a new atTime\nparameter that can be used to specify the time of day when rollover should\nhappen. (Contributed by Ronald Oussoren in bpo-9556.)\nSocketHandler\nand\nDatagramHandler\nnow support Unix domain sockets (by\nsetting port to None\n). (Contributed by Vinay Sajip in commit\nce46195b56a9.)\nfileConfig()\nnow accepts a\nconfigparser.RawConfigParser\nsubclass instance for the fname\nparameter. This facilitates using a configuration file when logging\nconfiguration is just a part of the overall application configuration, or where\nthe application modifies the configuration before passing it to\nfileConfig()\n. 
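A sketch of that fileConfig() pattern, using a hypothetical configuration built entirely in memory so no file has to exist on disk:

```python
import logging
import logging.config
from configparser import RawConfigParser

# Hypothetical application config; the parser instance itself is passed
# to fileConfig() instead of a file name (new in 3.4).
parser = RawConfigParser()
parser.read_string("""
[loggers]
keys=root

[handlers]
keys=console

[formatters]
keys=plain

[logger_root]
level=INFO
handlers=console

[handler_console]
class=StreamHandler
level=INFO
formatter=plain
args=(sys.stderr,)

[formatter_plain]
format=%(levelname)s:%(name)s:%(message)s
""")
logging.config.fileConfig(parser, disable_existing_loggers=False)
root_level = logging.getLogger().level
```

RawConfigParser (rather than ConfigParser) avoids interpolation of the `%(...)s` placeholders in the format string.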
(Contributed by Vinay Sajip in\nbpo-16110.)\nLogging configuration data received from a socket via the\nlogging.config.listen()\nfunction can now be validated before being\nprocessed by supplying a verification function as the argument to the new\nverify keyword argument. (Contributed by Vinay Sajip in bpo-15452.)\nmarshal\u00b6\nThe default marshal\nversion has been bumped to 3. The code implementing\nthe new version restores the Python2 behavior of recording only one copy of\ninterned strings and preserving the interning on deserialization, and extends\nthis \u201cone copy\u201d ability to any object type (including handling recursive\nreferences). This reduces both the size of .pyc\nfiles and the amount of\nmemory a module occupies in memory when it is loaded from a .pyc\n(or\n.pyo\n) file. (Contributed by Kristj\u00e1n Valur J\u00f3nsson in bpo-16475,\nwith additional speedups by Antoine Pitrou in bpo-19219.)\nmmap\u00b6\nmmap objects are now weakly referenceable. (Contributed by Valerie Lambert in bpo-4885.)\nmultiprocessing\u00b6\nOn Unix two new start methods,\nspawn\nand forkserver\n, have been added for starting processes using\nmultiprocessing\n. These make the mixing of processes with threads more\nrobust, and the spawn\nmethod matches the semantics that multiprocessing has\nalways used on Windows. New function\nget_all_start_methods()\nreports all start methods\navailable on the platform, get_start_method()\nreports\nthe current start method, and set_start_method()\nsets\nthe start method. (Contributed by Richard Oudkerk in bpo-8713.)\nmultiprocessing\nalso now has the concept of a context\n, which\ndetermines how child processes are created. New function\nget_context()\nreturns a context that uses a specified\nstart method. It has the same API as the multiprocessing\nmodule itself,\nso you can use it to create Pool\ns and other\nobjects that will operate within that context. 
This allows a framework and an\napplication or different parts of the same application to use multiprocessing\nwithout interfering with each other. (Contributed by Richard Oudkerk in\nbpo-18999.)\nExcept when using the old fork start method, child processes no longer inherit unneeded handles/file descriptors from their parents (part of bpo-8713).\nmultiprocessing\nnow relies on runpy\n(which implements the\n-m\nswitch) to initialise __main__\nappropriately in child processes\nwhen using the spawn\nor forkserver\nstart methods. This resolves some\nedge cases where combining multiprocessing, the -m\ncommand line switch,\nand explicit relative imports could cause obscure failures in child\nprocesses. (Contributed by Nick Coghlan in bpo-19946.)\noperator\u00b6\nNew function length_hint()\nprovides an implementation of the\nspecification for how the __length_hint__()\nspecial method should\nbe used, as part of the PEP 424 formal specification of this language\nfeature. (Contributed by Armin Ronacher in bpo-16148.)\nThere is now a pure-python version of the operator\nmodule available for\nreference and for use by alternate implementations of Python. (Contributed by\nZachary Ware in bpo-16694.)\nos\u00b6\nThere are new functions to get and set the inheritable flag of a file descriptor (os.get_inheritable()\n,\nos.set_inheritable()\n) or a Windows handle\n(os.get_handle_inheritable()\n, os.set_handle_inheritable()\n).\nNew function cpu_count()\nreports the number of CPUs available on the\nplatform on which Python is running (or None\nif the count can\u2019t be\ndetermined). The multiprocessing.cpu_count()\nfunction is now implemented\nin terms of this function. (Contributed by Trent Nelson, Yogesh Chaudhari,\nVictor Stinner, and Charles-Fran\u00e7ois Natali in bpo-17914.)\nos.path.samestat()\nis now available on the Windows platform (and the\nos.path.samefile()\nimplementation is now shared between Unix and\nWindows).
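A quick illustration of the shared samefile()/samestat() behaviour (the file name is made up for the example):

```python
import os
import tempfile

# samefile() and samestat() now share one implementation across Unix and
# Windows, and samestat() is available on Windows too.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "data.txt")
    with open(path, "w") as f:
        f.write("hello")

    # Two different spellings of the same path refer to the same file.
    alias = os.path.join(d, ".", "data.txt")
    same_file = os.path.samefile(path, alias)
    same_stat = os.path.samestat(os.stat(path), os.stat(alias))

print(same_file, same_stat)
```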
(Contributed by Brian Curtin in bpo-11939.)\nos.path.ismount()\nnow recognizes volumes mounted below a drive\nroot on Windows. (Contributed by Tim Golden in bpo-9035.)\nos.open()\nsupports two new flags on platforms that provide them,\nO_PATH\n(un-opened file descriptor), and O_TMPFILE\n(unnamed temporary file; as of 3.4.0 release available only on Linux systems\nwith a kernel version of 3.11 or newer that have uapi headers). (Contributed\nby Christian Heimes in bpo-18673 and Benjamin Peterson, respectively.)\npdb\u00b6\npdb\nhas been enhanced to handle generators, yield\n, and\nyield from\nin a more useful fashion. This is especially helpful when\ndebugging asyncio\nbased programs. (Contributed by Andrew Svetlov and\nXavier de Gaye in bpo-16596.)\nThe print\ncommand has been removed from pdb\n, restoring access to the\nPython print()\nfunction from the pdb command line. Python2\u2019s pdb\ndid\nnot have a print\ncommand; instead, entering print\nexecuted the\nprint\nstatement. In Python3 print\nwas mistakenly made an alias for the\npdb p\ncommand. p\n, however, prints the repr\nof its argument,\nnot the str\nlike the Python2 print\ncommand did. Worse, the Python3\npdb print\ncommand shadowed the Python3 print\nfunction, making it\ninaccessible at the pdb\nprompt. (Contributed by Connor Osborn in\nbpo-18764.)\npickle\u00b6\npickle\nnow supports (but does not use by default) a new pickle protocol,\nprotocol 4. This new protocol addresses a number of issues that were present\nin previous protocols, such as the serialization of nested classes, very large\nstrings and containers, and classes whose __new__()\nmethod takes\nkeyword-only arguments. 
It also provides some efficiency improvements.\nSee also\n- PEP 3154 \u2013 Pickle protocol 4\nPEP written by Antoine Pitrou and implemented by Alexandre Vassalotti.\nplistlib\u00b6\nplistlib\nnow has an API that is similar to the standard pattern for\nstdlib serialization protocols, with new load()\n,\ndump()\n, loads()\n, and dumps()\nfunctions. (The older API is now deprecated.) In addition to the already\nsupported XML plist format (FMT_XML\n), it also now supports\nthe binary plist format (FMT_BINARY\n). (Contributed by Ronald\nOussoren and others in bpo-14455.)\npoplib\u00b6\nTwo new methods have been added to poplib\n: capa()\n,\nwhich returns the list of capabilities advertised by the POP server, and\nstls()\n, which switches a clear-text POP3 session into an\nencrypted POP3 session if the POP server supports it. (Contributed by Lorenzo\nCatucci in bpo-4473.)\npprint\u00b6\nThe pprint\nmodule\u2019s PrettyPrinter\nclass and its\npformat()\n, and pprint()\nfunctions have a new\noption, compact, that controls how the output is formatted. Currently\nsetting compact to True\nmeans that sequences will be printed with as many\nsequence elements as will fit within width on each (indented) line.\n(Contributed by Serhiy Storchaka in bpo-19132.)\nLong strings are now wrapped using Python\u2019s normal line continuation syntax. (Contributed by Antoine Pitrou in bpo-17150.)\npty\u00b6\npty.spawn()\nnow returns the status value from os.waitpid()\non\nthe child process, instead of None\n. (Contributed by Gregory P. Smith.)\npydoc\u00b6\nThe pydoc\nmodule is now based directly on the inspect.signature()\nintrospection API, allowing it to provide signature information for a wider\nvariety of callable objects. This change also means that __wrapped__\nattributes are now taken into account when displaying help information.\n(Contributed by Larry Hastings in bpo-19674.)\nThe pydoc\nmodule no longer displays the self\nparameter for\nalready bound methods. 
Instead, it aims to always display the exact current\nsignature of the supplied callable. (Contributed by Larry Hastings in\nbpo-20710.)\nIn addition to the changes that have been made to pydoc\ndirectly,\nits handling of custom __dir__\nmethods and various descriptor\nbehaviours has also been improved substantially by the underlying changes in\nthe inspect\nmodule.\nAs the help()\nbuiltin is based on pydoc\n, the above changes also\naffect the behaviour of help()\n.\nre\u00b6\nNew fullmatch()\nfunction and Pattern.fullmatch()\nmethod anchor\nthe pattern at both ends of the string to match. This provides a way to be\nexplicit about the goal of the match, which avoids a class of subtle bugs where\n$\ncharacters get lost during code changes or the addition of alternatives\nto an existing regular expression. (Contributed by Matthew Barnett in\nbpo-16203.)\nThe repr of regex objects now includes the pattern and the flags; the repr of match objects now includes the start, end, and the part of the string that matched. (Contributed by Hugo Lopes Tavares and Serhiy Storchaka in bpo-13592 and bpo-17087.)\nresource\u00b6\nNew prlimit()\nfunction, available on Linux platforms with a\nkernel version of 2.6.36 or later and glibc of 2.13 or later, provides the\nability to query or set the resource limits for processes other than the one\nmaking the call. (Contributed by Christian Heimes in bpo-16595.)\nOn Linux kernel version 2.6.36 or later, there are also some new\nLinux specific constants: RLIMIT_MSGQUEUE\n,\nRLIMIT_NICE\n, RLIMIT_RTPRIO\n,\nRLIMIT_RTTIME\n, and RLIMIT_SIGPENDING\n.\n(Contributed by Christian Heimes in bpo-19324.)\nOn FreeBSD version 9 and later, there are some new FreeBSD specific constants:\nRLIMIT_SBSIZE\n, RLIMIT_SWAP\n, and\nRLIMIT_NPTS\n.
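Because these limits are platform-specific, portable code has to probe for them at runtime; a defensive sketch:

```python
import resource

# RLIMIT_MSGQUEUE / RLIMIT_NICE / RLIMIT_RTPRIO exist only on Linux, and
# RLIMIT_SBSIZE / RLIMIT_NPTS only on FreeBSD, so hasattr() guards are used.
candidates = ("RLIMIT_MSGQUEUE", "RLIMIT_NICE", "RLIMIT_RTPRIO",
              "RLIMIT_SBSIZE", "RLIMIT_NPTS")
limits = {}
for name in candidates:
    if hasattr(resource, name):
        limits[name] = resource.getrlimit(getattr(resource, name))

for name, (soft, hard) in limits.items():
    print(name, soft, hard)
```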
(Contributed by Claudiu Popa in\nbpo-19343.)\nselect\u00b6\nepoll\nobjects now support the context management protocol.\nWhen used in a with\nstatement, the close()\nmethod will be called automatically at the end of the block. (Contributed\nby Serhiy Storchaka in bpo-16488.)\ndevpoll\nobjects now have fileno()\nand\nclose()\nmethods, as well as a new attribute\nclosed\n. (Contributed by Victor Stinner in\nbpo-18794.)\nshelve\u00b6\nShelf\ninstances may now be used in with\nstatements,\nand will be automatically closed at the end of the with\nblock.\n(Contributed by Filip Gruszczy\u0144ski in bpo-13896.)\nshutil\u00b6\ncopyfile()\nnow raises a specific Error\nsubclass,\nSameFileError\n, when the source and destination are the same\nfile, which allows an application to take appropriate action on this specific\nerror. (Contributed by Atsuo Ishimoto and Hynek Schlawack in\nbpo-1492704.)\nsmtpd\u00b6\nThe SMTPServer\nand SMTPChannel\nclasses now\naccept a map keyword argument which, if specified, is passed in to\nasynchat.async_chat\nas its map argument. This allows an application\nto avoid affecting the global socket map. (Contributed by Vinay Sajip in\nbpo-11959.)\nsmtplib\u00b6\nSMTPException\nis now a subclass of OSError\n, which allows\nboth socket level errors and SMTP protocol level errors to be caught in one\ntry/except statement by code that only cares whether or not an error occurred.\n(Contributed by Ned Jackson Lovely in bpo-2118.)\nsocket\u00b6\nThe socket module now supports the CAN_BCM\nprotocol on\nplatforms that support it. (Contributed by Brian Thorne in bpo-15359.)\nSocket objects have new methods to get or set their inheritable flag, get_inheritable()\nand\nset_inheritable()\n.\nThe socket.AF_*\nand socket.SOCK_*\nconstants are now enumeration values\nusing the new enum\nmodule. 
This allows meaningful names to be printed\nduring debugging, instead of integer \u201cmagic numbers\u201d.\nThe AF_LINK\nconstant is now available on BSD and OSX.\ninet_pton()\nand inet_ntop()\nare now supported\non Windows. (Contributed by Atsuo Ishimoto in bpo-7171.)\nsqlite3\u00b6\nA new boolean parameter to the connect()\nfunction, uri, can be\nused to indicate that the database parameter is a uri\n(see the SQLite\nURI documentation). (Contributed by poq in\nbpo-13773.)\nssl\u00b6\nPROTOCOL_TLSv1_1\nand PROTOCOL_TLSv1_2\n(TLSv1.1 and\nTLSv1.2 support) have been added; support for these protocols is only available if\nPython is linked with OpenSSL 1.0.1 or later. (Contributed by Michele Orr\u00f9 and\nAntoine Pitrou in bpo-16692.)\nNew function create_default_context()\nprovides a standard way to\nobtain an SSLContext\nwhose settings are intended to be a\nreasonable balance between compatibility and security. These settings are\nmore stringent than the defaults provided by the SSLContext\nconstructor, and may be adjusted in the future, without prior deprecation, if\nbest-practice security requirements change. The new recommended best\npractice for using stdlib libraries that support SSL is to use\ncreate_default_context()\nto obtain an SSLContext\nobject, modify it if needed, and then pass it as the context argument\nof the appropriate stdlib API. (Contributed by Christian Heimes\nin bpo-19689.)\nSSLContext\nmethod load_verify_locations()\naccepts a new optional argument cadata, which can be used to provide PEM or\nDER encoded certificates directly via strings or bytes, respectively.\n(Contributed by Christian Heimes in bpo-18138.)\nNew function get_default_verify_paths()\nreturns\na named tuple of the paths and environment variables that the\nset_default_verify_paths()\nmethod uses to set\nOpenSSL\u2019s default cafile\nand capath\n. This can be an aid in\ndebugging default verification issues. 
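For example, the default locations can be inspected like this:

```python
import ssl

# get_default_verify_paths() reports where OpenSSL looks for its default
# cafile and capath, plus the environment variables that can override
# them; handy when debugging certificate verification failures.
paths = ssl.get_default_verify_paths()
print(paths.cafile, paths.capath)
print(paths.openssl_cafile_env, paths.openssl_capath_env)
```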
(Contributed by Christian Heimes\nin bpo-18143.)\nSSLContext\nhas a new method,\ncert_store_stats()\n, that reports the number of loaded\nX.509\ncerts, X.509 CA\ncerts, and certificate revocation lists\n(crl\ns), as well as a get_ca_certs()\nmethod that\nreturns a list of the loaded CA\ncertificates. (Contributed by Christian\nHeimes in bpo-18147.)\nIf OpenSSL 0.9.8 or later is available, SSLContext\nhas a new\nattribute verify_flags\nthat can be used to control the\ncertificate verification process by setting it to some combination of the new\nconstants VERIFY_DEFAULT\n, VERIFY_CRL_CHECK_LEAF\n,\nVERIFY_CRL_CHECK_CHAIN\n, or VERIFY_X509_STRICT\n.\nOpenSSL does not do any CRL verification by default. (Contributed by\nChristian Heimes in bpo-8813.)\nNew SSLContext\nmethod load_default_certs()\nloads a set of default \u201ccertificate authority\u201d (CA) certificates from default\nlocations, which vary according to the platform. It can be used to load both\nTLS web server authentication certificates\n(purpose=\nSERVER_AUTH\n) for a client to use to verify a\nserver, and certificates for a server to use in verifying client certificates\n(purpose=\nCLIENT_AUTH\n). (Contributed by Christian\nHeimes in bpo-19292.)\nTwo new Windows-only functions, enum_certificates()\nand\nenum_crls()\n, provide the ability to retrieve certificates,\ncertificate information, and CRLs from the Windows cert store. (Contributed\nby Christian Heimes in bpo-17134.)\nSupport for server-side SNI (Server Name Indication) using the new\nssl.SSLContext.set_servername_callback()\nmethod.\n(Contributed by Daniel Black in bpo-8109.)\nThe dictionary returned by SSLSocket.getpeercert()\ncontains additional\nX509v3\nextension items: crlDistributionPoints\n, caIssuers\n, and\nOCSP\nURIs. (Contributed by Christian Heimes in bpo-18379.)\nstat\u00b6\nThe stat\nmodule is now backed by a C implementation in _stat\n.
A C\nimplementation is required as most of the values aren\u2019t standardized and\nare platform-dependent. (Contributed by Christian Heimes in bpo-11016.)\nThe module supports new ST_MODE\nflags, S_IFDOOR\n,\nS_IFPORT\n, and S_IFWHT\n. (Contributed by\nChristian Heimes in bpo-11016.)\nstruct\u00b6\nNew function iter_unpack()\nand a new\nstruct.Struct.iter_unpack()\nmethod on compiled formats provide streamed\nunpacking of a buffer containing repeated instances of a given format of data.\n(Contributed by Antoine Pitrou in bpo-17804.)\nsubprocess\u00b6\ncheck_output()\nnow accepts an input argument that can\nbe used to provide the contents of stdin\nfor the command that is run.\n(Contributed by Zack Weinberg in bpo-16624.)\ngetoutput()\nand getstatusoutput()\nnow\nwork on Windows. This change was actually inadvertently made in 3.3.4.\n(Contributed by Tim Golden in bpo-10197.)\nsunau\u00b6\nThe getparams()\nmethod now returns a namedtuple rather than a\nplain tuple. (Contributed by Claudiu Popa in bpo-18901.)\nsunau.open()\nnow supports the context management protocol: when used in a\nwith\nblock, the close\nmethod of the returned object will be\ncalled automatically at the end of the block. (Contributed by Serhiy Storchaka\nin bpo-18878.)\nAU_write.setsampwidth()\nnow supports 24 bit samples, thus adding\nsupport for writing 24 bit samples using the module. (Contributed by\nSerhiy Storchaka in bpo-19261.)\nThe writeframesraw()\nand\nwriteframes()\nmethods now accept any bytes-like\nobject. (Contributed by Serhiy Storchaka in bpo-8311.)\nsys\u00b6\nNew function sys.getallocatedblocks()\nreturns the current number of\nblocks allocated by the interpreter. (In CPython with the default\n--with-pymalloc\nsetting, this is allocations made through the\nPyObject_Malloc()\nAPI.) This can be useful for tracking memory leaks,\nespecially if automated via a test suite.
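A quick sketch of how the count reflects live allocations (CPython-specific):

```python
import sys

# The count covers blocks currently allocated through the
# PyObject_Malloc() API, so it rises while these objects stay alive.
before = sys.getallocatedblocks()
objects = [object() for _ in range(10000)]
after = sys.getallocatedblocks()
print(after - before)
```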
(Contributed by Antoine Pitrou\nin bpo-13390.)\nWhen the Python interpreter starts in interactive mode, it checks for an __interactivehook__\nattribute\non the sys\nmodule. If the attribute exists, its value is called with no\narguments just before interactive mode is started. The check is made after the\nPYTHONSTARTUP\nfile is read, so it can be set there. The site\nmodule sets it to a function that enables tab\ncompletion and history saving (in ~/.python-history\n) if the platform\nsupports readline\n. If you do not want this (new) behavior, you can\noverride it in PYTHONSTARTUP\n, sitecustomize\n, or\nusercustomize\nby deleting this attribute from sys\n(or setting it\nto some other callable). (Contributed by \u00c9ric Araujo and Antoine Pitrou in\nbpo-5845.)\ntarfile\u00b6\nThe tarfile\nmodule now supports a simple Command-Line Interface when\ncalled as a script directly or via -m\n. This can be used to create and\nextract tarfile archives. (Contributed by Berker Peksag in bpo-13477.)\ntextwrap\u00b6\nThe TextWrapper\nclass has two new attributes/constructor\narguments: max_lines\n, which limits the number of\nlines in the output, and placeholder\n, which is a\nstring that will appear at the end of the output if it has been truncated\nbecause of max_lines. Building on these capabilities, a new convenience\nfunction shorten()\ncollapses all of the whitespace in the input\nto single spaces and produces a single line of a given width that ends with\nthe placeholder (by default, [...]\n). (Contributed by Antoine Pitrou and\nSerhiy Storchaka in bpo-18585 and bpo-18725.)\nthreading\u00b6\nThe Thread\nobject representing the main thread can be\nobtained from the new main_thread()\nfunction. In normal\nconditions this will be the thread from which the Python interpreter was\nstarted. 
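A minimal sketch:

```python
import threading

# main_thread() hands back the Thread object for the interpreter's main
# thread; at the top level of a script it is also the current thread.
main = threading.main_thread()
is_main = threading.current_thread() is main
print(main.name, is_main)
```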
(Contributed by Andrew Svetlov in bpo-18882.)\ntraceback\u00b6\nA new traceback.clear_frames()\nfunction takes a traceback object\nand clears the local variables in all of the frames it references,\nreducing the amount of memory consumed. (Contributed by Andrew Kuchling in\nbpo-1565525.)\ntypes\u00b6\nA new DynamicClassAttribute()\ndescriptor provides a way to define\nan attribute that acts normally when looked up through an instance object, but\nwhich is routed to the class __getattr__\nwhen looked up through the\nclass. This allows one to have properties active on a class, and have virtual\nattributes on the class with the same name (see enum\nfor an example).\n(Contributed by Ethan Furman in bpo-19030.)\nurllib\u00b6\nurllib.request\nnow supports data:\nURLs via the\nDataHandler\nclass. (Contributed by Mathias Panzenb\u00f6ck\nin bpo-16423.)\nThe http method that will be used by a Request\nclass\ncan now be specified by setting a method\nclass attribute on the subclass. (Contributed by Jason R Coombs in\nbpo-18978.)\nRequest\nobjects are now reusable: if the\nfull_url\nor data\nattributes are modified, all relevant internal properties are updated. This\nmeans, for example, that it is now possible to use the same\nRequest\nobject in more than one\nOpenerDirector.open()\ncall with different data arguments, or to\nmodify a Request\n\u2018s url\nrather than recomputing it\nfrom scratch. There is also a new\nremove_header()\nmethod that can be used to remove\nheaders from a Request\n. (Contributed by Alexey\nKachayev in bpo-16464, Daniel Wozniak in bpo-17485, and Damien Brecht\nand Senthil Kumaran in bpo-17272.)\nHTTPError\nobjects now have a\nheaders\nattribute that provides access to the\nHTTP response headers associated with the error. (Contributed by\nBerker Peksag in bpo-15701.)\nunittest\u00b6\nThe TestCase\nclass has a new method,\nsubTest()\n, that produces a context manager whose\nwith\nblock becomes a \u201csub-test\u201d. 
This context manager allows a test\nmethod to dynamically generate subtests by, say, calling the subTest\ncontext manager inside a loop. A single test method can thereby produce an\nindefinite number of separately identified and separately counted tests, all of\nwhich will run even if one or more of them fail. For example:\nclass NumbersTest(unittest.TestCase):\ndef test_even(self):\nfor i in range(6):\nwith self.subTest(i=i):\nself.assertEqual(i % 2, 0)\nwill result in six subtests, each identified in the unittest verbose output\nwith a label consisting of the variable name i\nand a particular value for\nthat variable (i=0\n, i=1\n, etc). See Distinguishing test iterations using subtests for the full\nversion of this example. (Contributed by Antoine Pitrou in bpo-16997.)\nunittest.main()\nnow accepts an iterable of test names for\ndefaultTest, where previously it only accepted a single test name as a\nstring. (Contributed by Jyrki Pulliainen in bpo-15132.)\nIf SkipTest\nis raised during test discovery (that is, at the\nmodule level in the test file), it is now reported as a skip instead of an\nerror. (Contributed by Zach Ware in bpo-16935.)\ndiscover()\nnow sorts the discovered files to provide\nconsistent test ordering. (Contributed by Martin Melin and Jeff Ramnani in\nbpo-16709.)\nTestSuite\nnow drops references to tests as soon as the test\nhas been run, if the test is successful. On Python interpreters that do\ngarbage collection, this allows the tests to be garbage collected if nothing\nelse is holding a reference to the test. It is possible to override this\nbehavior by creating a TestSuite\nsubclass that defines a\ncustom _removeTestAtIndex\nmethod. (Contributed by Tom Wardill, Matt\nMcClure, and Andrew Svetlov in bpo-11798.)\nA new test assertion context-manager, assertLogs()\n,\nwill ensure that a given block of code emits a log message using the\nlogging\nmodule. 
By default the message can come from any logger and\nhave a priority of INFO\nor higher, but both the logger name and an\nalternative minimum logging level may be specified. The object returned by the\ncontext manager can be queried for the LogRecord\ns and/or\nformatted messages that were logged. (Contributed by Antoine Pitrou in\nbpo-18937.)\nTest discovery now works with namespace packages (Contributed by Claudiu Popa in bpo-17457.)\nunittest.mock\nobjects now inspect their specification signatures when\nmatching calls, which means an argument can now be matched by either position\nor name, instead of only by position. (Contributed by Antoine Pitrou in\nbpo-17015.)\nmock_open()\nobjects now have readline\nand readlines\nmethods. (Contributed by Toshio Kuratomi in bpo-17467.)\nvenv\u00b6\nvenv\nnow includes activation scripts for the csh\nand fish\nshells. (Contributed by Andrew Svetlov in bpo-15417.)\nEnvBuilder\nand the create()\nconvenience function\ntake a new keyword argument with_pip, which defaults to False\n, that\ncontrols whether or not EnvBuilder\nensures that pip\nis\ninstalled in the virtual environment. (Contributed by Nick Coghlan in\nbpo-19552 as part of the PEP 453 implementation.)\nwave\u00b6\nThe getparams()\nmethod now returns a namedtuple rather\nthan a plain tuple. (Contributed by Claudiu Popa in bpo-17487.)\nwave.open()\nnow supports the context management protocol. (Contributed\nby Claudiu Popa in bpo-17616.)\nwave\ncan now write output to unseekable files. (Contributed by David Jones, Guilherme Polo, and Serhiy\nStorchaka in bpo-5202.)\nThe writeframesraw()\nand\nwriteframes()\nmethods now accept any bytes-like\nobject. (Contributed by Serhiy Storchaka in bpo-8311.)\nweakref\u00b6\nNew WeakMethod\nclass simulates weak references to bound\nmethods. 
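A small sketch of why WeakMethod is needed (the class and method names are made up for the example):

```python
import weakref

class Greeter:
    def hello(self):
        return "hi"

g = Greeter()

# An ordinary weakref to g.hello would die at once, because a bound
# method object is created afresh on every attribute access; WeakMethod
# instead re-creates the bound method on demand while the instance lives.
wm = weakref.WeakMethod(g.hello)
alive = wm()()          # dereference, then call the bound method
del g                   # on CPython the instance is reclaimed immediately
dead = wm() is None
print(alive, dead)
```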
(Contributed by Antoine Pitrou in bpo-14631.)\nNew finalize\nclass makes it possible to register a callback\nto be invoked when an object is garbage collected, without needing to\ncarefully manage the lifecycle of the weak reference itself. (Contributed by\nRichard Oudkerk in bpo-15528.)\nThe callback, if any, associated with a ref\nis now\nexposed via the __callback__\nattribute. (Contributed\nby Mark Dickinson in bpo-17643.)\nxml.etree\u00b6\nA new parser, XMLPullParser\n, allows\nnon-blocking applications to parse XML documents. An example can be\nseen at Pull API for non-blocking parsing. (Contributed by Antoine\nPitrou in bpo-17741.)\nThe xml.etree.ElementTree\ntostring()\nand\ntostringlist()\nfunctions, and the\nElementTree\nwrite()\nmethod, now have a\nshort_empty_elements keyword-only parameter\nproviding control over whether elements with no content are written in\nabbreviated (<tag />) or expanded (<tag></tag>) form. (Contributed by\nAriel Poliak and Serhiy Storchaka in bpo-14377.)\nzipfile\u00b6\nThe writepy()\nmethod of the\nPyZipFile\nclass has a new filterfunc option that can be\nused to control which directories and files are added to the archive. For\nexample, this could be used to exclude test files from the archive.\n(Contributed by Christian Tismer in bpo-19274.)\nThe allowZip64 parameter to ZipFile\nand\nPyZipFile\nis now True\nby default. (Contributed by\nWilliam Mallard in bpo-17201.)\nCPython Implementation Changes\u00b6\nPEP 445: Customization of CPython Memory Allocators\u00b6\nPEP 445 adds new C level interfaces to customize memory allocation in the CPython interpreter.\nSee also\n- PEP 445 \u2013 Add new APIs to customize Python memory allocators\nPEP written and implemented by Victor Stinner.\nPEP 442: Safe Object Finalization\u00b6\nPEP 442 removes the current limitations and quirks of object finalization\nin CPython.
With it, objects with __del__()\nmethods, as well as\ngenerators with finally\nclauses, can be finalized when they are\npart of a reference cycle.\nAs part of this change, module globals are no longer forcibly set to\nNone\nduring interpreter shutdown in most cases, instead relying\non the normal operation of the cyclic garbage collector. This avoids a\nwhole class of interpreter-shutdown-time errors, usually involving\n__del__\nmethods, that have plagued Python since the cyclic GC\nwas first introduced.\nSee also\n- PEP 442 \u2013 Safe object finalization\nPEP written and implemented by Antoine Pitrou.\nPEP 456: Secure and Interchangeable Hash Algorithm\u00b6\nPEP 456 follows up on earlier security fix work done on Python\u2019s hash algorithm to address certain DOS attacks to which public facing APIs backed by dictionary lookups may be subject. (See bpo-14621 for the start of the current round of improvements.) The PEP unifies CPython\u2019s hash code to make it easier for a packager to substitute a different hash algorithm, and switches Python\u2019s default implementation to a SipHash implementation on platforms that have a 64 bit data type. Any performance differences in comparison with the older FNV algorithm are trivial.\nThe PEP adds additional fields to the sys.hash_info\nnamed tuple to\ndescribe the hash algorithm in use by the currently executing binary. 
Otherwise,\nthe PEP does not alter any existing CPython APIs.\nPEP 436: Argument Clinic\u00b6\n\u201cArgument Clinic\u201d (PEP 436) is now part of the CPython build process and can be used to simplify the process of defining and maintaining accurate signatures for builtins and standard library extension modules implemented in C.\nSome standard library extension modules have been converted to use Argument\nClinic in Python 3.4, and pydoc\nand inspect\nhave been updated\naccordingly.\nIt is expected that signature metadata for programmatic introspection will be added to additional callables implemented in C as part of Python 3.4 maintenance releases.\nNote\nThe Argument Clinic PEP is not fully up to date with the state of the implementation. This has been deemed acceptable by the release manager and core development team in this case, as Argument Clinic will not be made available as a public API for third party use in Python 3.4.\nSee also\n- PEP 436 \u2013 The Argument Clinic DSL\nPEP written and implemented by Larry Hastings.\nOther Build and C API Changes\u00b6\nThe new\nPyType_GetSlot()\nfunction has been added to the stable ABI, allowing retrieval of function pointers from named type slots when using the limited API. (Contributed by Martin von L\u00f6wis in bpo-17162.)The new\nPy_SetStandardStreamEncoding()\npre-initialization API allows applications embedding the CPython interpreter to reliably force a particular encoding and error handler for the standard streams. (Contributed by Bastien Montagne and Nick Coghlan in bpo-16129.)Most Python C APIs that don\u2019t mutate string arguments are now correctly marked as accepting\nconst char *\nrather thanchar *\n. (Contributed by Serhiy Storchaka in bpo-1772673.)A new shell version of\npython-config\ncan be used even when a python interpreter is not available (for example, in cross compilation scenarios).PyUnicode_FromFormat()\nnow supports width and precision specifications for%s\n,%A\n,%U\n,%V\n,%S\n, and%R\n. 
(Contributed by Ysj Ray and Victor Stinner in bpo-7330.)New function\nPyStructSequence_InitType2()\nsupplements the existingPyStructSequence_InitType()\nfunction. The difference is that it returns0\non success and-1\non failure.The CPython source can now be compiled using the address sanity checking features of recent versions of GCC and clang: the false alarms in the small object allocator have been silenced. (Contributed by Dhiru Kholia in bpo-18596.)\nThe Windows build now uses Address Space Layout Randomization and Data Execution Prevention. (Contributed by Christian Heimes in bpo-16632.)\nNew function\nPyObject_LengthHint()\nis the C API equivalent ofoperator.length_hint()\n. (Contributed by Armin Ronacher in bpo-16148.)\nOther Improvements\u00b6\nThe python command has a new option,\n-I\n, which causes it to run in \u201cisolated mode\u201d, which means thatsys.path\ncontains neither the script\u2019s directory nor the user\u2019ssite-packages\ndirectory, and allPYTHON*\nenvironment variables are ignored (it implies both-s\nand-E\n). Other restrictions may also be applied in the future, with the goal being to isolate the execution of a script from the user\u2019s environment. This is appropriate, for example, when Python is used to run a system script. On most POSIX systems it can and should be used in the#!\nline of system scripts. (Contributed by Christian Heimes in bpo-16499.)Tab-completion is now enabled by default in the interactive interpreter on systems that support\nreadline\n. History is also enabled by default, and is written to (and read from) the file~/.python-history\n. (Contributed by Antoine Pitrou and \u00c9ric Araujo in bpo-5845.)Invoking the Python interpreter with\n--version\nnow outputs the version to standard output instead of standard error (bpo-18338). 
Similar changes were made toargparse\n(bpo-18920) and other modules that have script-like invocation capabilities (bpo-18922).The CPython Windows installer now adds\n.py\nto thePATHEXT\nvariable when extensions are registered, allowing users to run a python script at the windows command prompt by just typing its name without the.py\nextension. (Contributed by Paul Moore in bpo-18569.)A new\nmake\ntarget coverage-report will build python, run the test suite, and generate an HTML coverage report for the C codebase usinggcov\nand lcov.The\n-R\noption to the python regression test suite now also checks for memory allocation leaks, usingsys.getallocatedblocks()\n. (Contributed by Antoine Pitrou in bpo-13390.)python -m\nnow works with namespace packages.The\nstat\nmodule is now implemented in C, which means it gets the values for its constants from the C header files, instead of having the values hard-coded in the python module as was previously the case.Loading multiple python modules from a single OS module (\n.so\n,.dll\n) now works correctly (previously it silently returned the first python module in the file). (Contributed by V\u00e1clav \u0160milauer in bpo-16421.)A new opcode,\nLOAD_CLASSDEREF\n, has been added to fix a bug in the loading of free variables in class bodies that could be triggered by certain uses of __prepare__. (Contributed by Benjamin Peterson in bpo-17853.)A number of MemoryError-related crashes were identified and fixed by Victor Stinner using his PEP 445-based\npyfailmalloc\ntool (bpo-18408, bpo-18520).The\npyvenv\ncommand now accepts a--copies\noption to use copies rather than symlinks even on systems where symlinks are the default. (Contributed by Vinay Sajip in bpo-18807.)The\npyvenv\ncommand also accepts a--without-pip\noption to suppress the otherwise-automatic bootstrapping of pip into the virtual environment. 
(Contributed by Nick Coghlan in bpo-19552 as part of the PEP 453 implementation.)The encoding name is now optional in the value set for the\nPYTHONIOENCODING\nenvironment variable. This makes it possible to set just the error handler, without changing the default encoding. (Contributed by Serhiy Storchaka in bpo-18818.)The\nbz2\n,lzma\n, andgzip\nmoduleopen\nfunctions now supportx\n(exclusive creation) mode. (Contributed by Tim Heaney and Vajrasky Kok in bpo-19201, bpo-19222, and bpo-19223.)\nSignificant Optimizations\u00b6\nThe UTF-32 decoder is now 3x to 4x faster. (Contributed by Serhiy Storchaka in bpo-14625.)\nThe cost of hash collisions for sets is now reduced. Each hash table probe now checks a series of consecutive, adjacent key/hash pairs before continuing to make random probes through the hash table. This exploits cache locality to make collision resolution less expensive. The collision resolution scheme can be described as a hybrid of linear probing and open addressing. The number of additional linear probes defaults to nine. This can be changed at compile-time by defining LINEAR_PROBES to be any value. Set LINEAR_PROBES=0 to turn-off linear probing entirely. (Contributed by Raymond Hettinger in bpo-18771.)\nThe interpreter starts about 30% faster. A couple of measures lead to the speedup. The interpreter loads fewer modules on startup, e.g. the\nre\n,collections\nandlocale\nmodules and their dependencies are no longer imported by default. The marshal module has been improved to load compiled Python code faster. (Contributed by Antoine Pitrou, Christian Heimes and Victor Stinner in bpo-19219, bpo-19218, bpo-19209, bpo-19205 and bpo-9548.)bz2.BZ2File\nis now as fast or faster than the Python2 version for most cases.lzma.LZMAFile\nhas also been optimized. (Contributed by Serhiy Storchaka and Nadeem Vawda in bpo-16034.)random.getrandbits()\nis 20%-40% faster for small integers (the most common use case). 
(Contributed by Serhiy Storchaka in bpo-16674.)By taking advantage of the new storage format for strings, pickling of strings is now significantly faster. (Contributed by Victor Stinner and Antoine Pitrou in bpo-15596.)\nA performance issue in\nio.FileIO.readall()\nhas been solved. This particularly affects Windows, and significantly speeds up the case of piping significant amounts of data throughsubprocess\n. (Contributed by Richard Oudkerk in bpo-15758.)html.escape()\nis now 10x faster. (Contributed by Matt Bryant in bpo-18020.)On Windows, the native\nVirtualAlloc\nis now used instead of the CRTmalloc\ninobmalloc\n. Artificial benchmarks show about a 3% memory savings.os.urandom()\nnow uses a lazily opened persistent file descriptor so as to avoid using many file descriptors when run in parallel from multiple threads. (Contributed by Antoine Pitrou in bpo-18756.)\nDeprecated\u00b6\nThis section covers various APIs and other features that have been deprecated\nin Python 3.4, and will be removed in Python 3.5 or later. 
In most (but not\nall) cases, using the deprecated APIs will produce a DeprecationWarning\nwhen the interpreter is run with deprecation warnings enabled (for example, by\nusing -Wd\n).\nDeprecations in the Python API\u00b6\nAs mentioned in PEP 451: A ModuleSpec Type for the Import System, a number of\nimportlib\nmethods and functions are deprecated:importlib.find_loader()\nis replaced byimportlib.util.find_spec()\n;importlib.machinery.PathFinder.find_module()\nis replaced byimportlib.machinery.PathFinder.find_spec()\n;importlib.abc.MetaPathFinder.find_module()\nis replaced byimportlib.abc.MetaPathFinder.find_spec()\n;importlib.abc.PathEntryFinder.find_loader()\nandfind_module()\nare replaced byimportlib.abc.PathEntryFinder.find_spec()\n; all of thexxxLoader\nABCload_module\nmethods (importlib.abc.Loader.load_module()\n,importlib.abc.InspectLoader.load_module()\n,importlib.abc.FileLoader.load_module()\n,importlib.abc.SourceLoader.load_module()\n) should no longer be implemented, instead loaders should implement anexec_module\nmethod (importlib.abc.Loader.exec_module()\n,importlib.abc.InspectLoader.exec_module()\nimportlib.abc.SourceLoader.exec_module()\n) and let the import system take care of the rest; andimportlib.abc.Loader.module_repr()\n,importlib.util.module_for_loader()\n,importlib.util.set_loader()\n, andimportlib.util.set_package()\nare no longer needed because their functions are now handled automatically by the import system.The\nimp\nmodule is pending deprecation. To keep compatibility with Python 2/3 code bases, the module\u2019s removal is currently not scheduled.The\nformatter\nmodule is pending deprecation and is slated for removal in Python 3.6.MD5\nas the default digestmod for thehmac.new()\nfunction is deprecated. Python 3.6 will require an explicit digest name or constructor as digestmod argument.The internal\nNetrc\nclass in theftplib\nmodule has been documented as deprecated in its docstring for quite some time. 
It now emits a DeprecationWarning and will be removed completely in Python 3.5. The undocumented endtime argument to subprocess.Popen.wait() should not have been exposed and is hopefully not in use; it is deprecated and will most likely be removed in Python 3.5. The strict argument of HTMLParser is deprecated. The plistlib readPlist(), writePlist(), readPlistFromBytes(), and writePlistToBytes() functions are deprecated in favor of the corresponding new functions load(), dump(), loads(), and dumps(). Data() is deprecated in favor of just using the bytes constructor. The sysconfig key SO is deprecated; it has been replaced by EXT_SUFFIX. The U mode accepted by various open functions is deprecated. In Python 3 it does not do anything useful, and should be replaced by appropriate uses of io.TextIOWrapper (if needed) and its newline argument. The parser argument of xml.etree.ElementTree.iterparse() has been deprecated, as has the html argument of XMLParser(). To prepare for the removal of the latter, all arguments to XMLParser should be passed by keyword.\nDeprecated Features\u00b6\nRunning IDLE \u2014 Python editor and shell with the -n flag (no subprocess) is deprecated.
However, the feature will not be removed until bpo-18823 is resolved.The site module adding a \u201csite-python\u201d directory to sys.path, if it exists, is deprecated (bpo-19375).\nRemoved\u00b6\nOperating Systems No Longer Supported\u00b6\nSupport for the following operating systems has been removed from the source and build tools:\nAPI and Feature Removals\u00b6\nThe following obsolete and previously deprecated APIs and features have been removed:\nThe unmaintained\nMisc/TextMate\nandMisc/vim\ndirectories have been removed (see the devguide for suggestions on what to use instead).The\nSO\nmakefile macro is removed (it was replaced by theSHLIB_SUFFIX\nandEXT_SUFFIX\nmacros) (bpo-16754).The\nPyThreadState.tick_counter\nfield has been removed; its value has been meaningless since Python 3.2, when the \u201cnew GIL\u201d was introduced (bpo-19199).PyLoader\nandPyPycLoader\nhave been removed fromimportlib\n. (Contributed by Taras Lyapun in bpo-15641.)The strict argument to\nHTTPConnection\nandHTTPSConnection\nhas been removed. HTTP 0.9-style \u201cSimple Responses\u201d are no longer supported.The deprecated\nurllib.request.Request\ngetter and setter methodsadd_data\n,has_data\n,get_data\n,get_type\n,get_host\n,get_selector\n,set_proxy\n,get_origin_req_host\n, andis_unverifiable\nhave been removed (use direct attribute access instead).Support for loading the deprecated\nTYPE_INT64\nhas been removed frommarshal\n. (Contributed by Dan Riti in bpo-15480.)inspect.Signature\n: positional-only parameters are now required to have a valid name.object.__format__()\nno longer accepts non-empty format strings, it now raises aTypeError\ninstead. Using a non-empty string has been deprecated since Python 3.2. 
This change has been made to prevent a situation where previously working (but incorrect) code would start failing if an object gained a __format__ method, which means that your code may now raise aTypeError\nif you are using an's'\nformat code with objects that do not have a __format__ method that handles it. See bpo-7994 for background.difflib.SequenceMatcher.isbjunk()\nanddifflib.SequenceMatcher.isbpopular()\nwere deprecated in 3.2, and have now been removed: usex in sm.bjunk\nandx in sm.bpopular\n, where sm is aSequenceMatcher\nobject (bpo-13248).\nCode Cleanups\u00b6\nThe unused and undocumented internal\nScanner\nclass has been removed from thepydoc\nmodule.The private and effectively unused\n_gestalt\nmodule has been removed, along with the privateplatform\nfunctions_mac_ver_lookup\n,_mac_ver_gstalt\n, and_bcd2str\n, which would only have ever been called on badly broken OSX systems (see bpo-18393).The hardcoded copies of certain\nstat\nconstants that were included in thetarfile\nmodule namespace have been removed.\nPorting to Python 3.4\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code.\nChanges in \u2018python\u2019 Command Behavior\u00b6\nIn a posix shell, setting the\nPATH\nenvironment variable to an empty value is equivalent to not setting it at all. However, settingPYTHONPATH\nto an empty value was not equivalent to not setting it at all: settingPYTHONPATH\nto an empty value was equivalent to setting it to.\n, which leads to confusion when reasoning by analogy to howPATH\nworks. The behavior now conforms to the posix convention forPATH\n.The [X refs, Y blocks] output of a debug (\n--with-pydebug\n) build of the CPython interpreter is now off by default. It can be re-enabled using the-X showrefcount\noption. 
(Contributed by Ezio Melotti in bpo-17323.)The python command and most stdlib scripts (as well as\nargparse\n) now output--version\ninformation tostdout\ninstead ofstderr\n(for issue list see Other Improvements above).\nChanges in the Python API\u00b6\nThe ABCs defined in\nimportlib.abc\nnow either raise the appropriate exception or return a default value instead of raisingNotImplementedError\nblindly. This will only affect code callingsuper()\nand falling through all the way to the ABCs. For compatibility, catch bothNotImplementedError\nor the appropriate exception as needed.The module type now initializes the\n__package__\nand__loader__\nattributes toNone\nby default. To determine if these attributes were set in a backwards-compatible fashion, use e.g.getattr(module, '__loader__', None) is not None\n. (bpo-17115.)importlib.util.module_for_loader()\nnow sets__loader__\nand__package__\nunconditionally to properly support reloading. If this is not desired then you will need to set these attributes manually. You can useimportlib.util.module_to_load()\nfor module management.Import now resets relevant attributes (e.g.\n__name__\n,__loader__\n,__package__\n,__file__\n,__cached__\n) unconditionally when reloading. Note that this restores a pre-3.3 behavior in that it means a module is re-found when re-loaded (bpo-19413).Frozen packages no longer set\n__path__\nto a list containing the package name, they now set it to an empty list. The previous behavior could cause the import system to do the wrong thing on submodule imports if there was also a directory with the same name as the frozen package. The correct way to determine if a module is a package or not is to usehasattr(module, '__path__')\n(bpo-18065).Frozen modules no longer define a\n__file__\nattribute. It\u2019s semantically incorrect for frozen modules to set the attribute as they are not loaded from any explicit location. 
If you must know that a module comes from frozen code then you can see if the module\u2019s__spec__.location\nis set to'frozen'\n, check if the loader is a subclass ofimportlib.machinery.FrozenImporter\n, or if Python 2 compatibility is necessary you can useimp.is_frozen()\n.py_compile.compile()\nnow raisesFileExistsError\nif the file path it would write to is a symlink or a non-regular file. This is to act as a warning that import will overwrite those files with a regular file regardless of what type of file path they were originally.importlib.abc.SourceLoader.get_source()\nno longer raisesImportError\nwhen the source code being loaded triggers aSyntaxError\norUnicodeDecodeError\n. AsImportError\nis meant to be raised only when source code cannot be found but it should, it was felt to be over-reaching/overloading of that meaning when the source code is found but improperly structured. If you were catching ImportError before and wish to continue to ignore syntax or decoding issues, catch all three exceptions now.functools.update_wrapper()\nandfunctools.wraps()\nnow correctly set the__wrapped__\nattribute to the function being wrapped, even if that function also had its__wrapped__\nattribute set. This means__wrapped__\nattributes now correctly link a stack of decorated functions rather than every__wrapped__\nattribute in the chain referring to the innermost function. Introspection libraries that assumed the previous behaviour was intentional can useinspect.unwrap()\nto access the first function in the chain that has no__wrapped__\nattribute.inspect.getfullargspec()\nhas been reimplemented on top ofinspect.signature()\nand hence handles a much wider variety of callable objects than it did in the past. It is expected that additional builtin and extension module callables will gain signature metadata over the course of the Python 3.4 series. 
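As a quick illustration of the new behaviour, inspect.signature() now succeeds on many C-implemented callables; exactly which builtins carry signature metadata depends on the interpreter version, but len() is a safe example:

```python
import inspect

# len() is implemented in C, yet it exposes signature metadata,
# so introspection no longer needs to special-case it.
sig = inspect.signature(len)
print(sig)                    # typically renders as (obj, /)
params = list(sig.parameters)
print(params)                 # ['obj']
```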
Code that assumes that inspect.getfullargspec() will fail on non-Python callables may need to be adjusted accordingly. importlib.machinery.PathFinder now passes on the current working directory to objects in sys.path_hooks for the empty string. This results in sys.path_importer_cache never containing '', thus iterating through sys.path_importer_cache based on sys.path will not find all keys. A module\u2019s __file__ when imported in the current working directory will also now have an absolute path, including when using -m with the interpreter (except for __main__.__file__ when a script has been executed directly using a relative path) (Contributed by Brett Cannon in bpo-18416). The removal of the strict argument to HTTPConnection and HTTPSConnection changes the meaning of the remaining arguments if you are specifying them positionally rather than by keyword. If you\u2019ve been paying attention to deprecation warnings your code should already be specifying any additional arguments via keywords. Strings between from __future__ import ... statements now always raise a SyntaxError. Previously if there was no leading docstring, an interstitial string would sometimes be ignored. This brings CPython into compliance with the language spec; Jython and PyPy already were. (bpo-17434). ssl.SSLSocket.getpeercert() and ssl.SSLSocket.do_handshake() now raise an OSError with ENOTCONN when the SSLSocket is not connected, instead of the previous behavior of raising an AttributeError. In addition, getpeercert() will raise a ValueError if the handshake has not yet been done. base64.b32decode() now raises a binascii.Error when the input string contains non-b32-alphabet characters, instead of a TypeError. This particular TypeError was missed when the other TypeErrors were converted. (Contributed by Serhiy Storchaka in bpo-18011.)
Note: this change was also inadvertently applied in Python 3.3.3.The\nfile\nattribute is now automatically closed when the creatingcgi.FieldStorage\ninstance is garbage collected. If you were pulling the file object out separately from thecgi.FieldStorage\ninstance and not keeping the instance alive, then you should either store the entirecgi.FieldStorage\ninstance or read the contents of the file before thecgi.FieldStorage\ninstance is garbage collected.Calling\nread\norwrite\non a closed SSL socket now raises an informativeValueError\nrather than the previous more mysteriousAttributeError\n(bpo-9177).slice.indices()\nno longer produces anOverflowError\nfor huge values. As a consequence of this fix,slice.indices()\nnow raises aValueError\nif given a negative length; previously it returned nonsense values (bpo-14794).The\ncomplex\nconstructor, unlike thecmath\nfunctions, was incorrectly acceptingfloat\nvalues if an object\u2019s__complex__\nspecial method returned one. This now raises aTypeError\n. (bpo-16290.)The\nint\nconstructor in 3.2 and 3.3 erroneously acceptsfloat\nvalues for the base parameter. It is unlikely anyone was doing this, but if so, it will now raise aTypeError\n(bpo-16772).Defaults for keyword-only arguments are now evaluated after defaults for regular keyword arguments, instead of before. Hopefully no one wrote any code that depends on the previous buggy behavior (bpo-16967).\nStale thread states are now cleared after\nfork()\n. This may cause some system resources to be released that previously were incorrectly kept perpetually alive (for example, database connections kept in thread-local storage). (bpo-17094.)Parameter names in\n__annotations__\ndicts are now mangled properly, similarly to__kwdefaults__\n. (Contributed by Yury Selivanov in bpo-20625.)hashlib.hash.name\nnow always returns the identifier in lower case. 
Previously some builtin hashes had uppercase names, but now that it is a formal public interface the naming has been made consistent (bpo-18532).Because\nunittest.TestSuite\nnow drops references to tests after they are run, test harnesses that reuse aTestSuite\nto re-run a set of tests may fail. Test suites should not be re-used in this fashion since it means state is retained between test runs, breaking the test isolation thatunittest\nis designed to provide. However, if the lack of isolation is considered acceptable, the old behavior can be restored by creating aTestSuite\nsubclass that defines a_removeTestAtIndex\nmethod that does nothing (seeTestSuite.__iter__()\n) (bpo-11798).unittest\nnow usesargparse\nfor command line parsing. There are certain invalid command forms that used to work that are no longer allowed; in theory this should not cause backward compatibility issues since the disallowed command forms didn\u2019t make any sense and are unlikely to be in use.The\nre.split()\n,re.findall()\n, andre.sub()\nfunctions, and thegroup()\nandgroups()\nmethods ofmatch\nobjects now always return a bytes object when the string to be matched is a bytes-like object. Previously the return type matched the input type, so if your code was depending on the return value being, say, abytearray\n, you will need to change your code.audioop\nfunctions now raise an error immediately if passed string input, instead of failing randomly later on (bpo-16685).The new convert_charrefs argument to\nHTMLParser\ncurrently defaults toFalse\nfor backward compatibility, but will eventually be changed to default toTrue\n. 
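A minimal sketch of opting in to the future default by passing convert_charrefs=True, so that character references arrive already decoded in handle_data() (the TextCollector subclass is purely illustrative):

```python
from html.parser import HTMLParser

class TextCollector(HTMLParser):
    """Collects text content, with character references already decoded."""
    def __init__(self):
        # Passing the keyword explicitly keeps behaviour stable when
        # the default eventually flips to True.
        super().__init__(convert_charrefs=True)
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

parser = TextCollector()
parser.feed("<p>fish &amp; chips</p>")
print("".join(parser.chunks))   # fish & chips
```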
It is recommended that you add this keyword, with the appropriate value, to anyHTMLParser\ncalls in your code (bpo-13633).Since the digestmod argument to the\nhmac.new()\nfunction will in the future have no default, all calls tohmac.new()\nshould be changed to explicitly specify a digestmod (bpo-17276).Calling\nsysconfig.get_config_var()\nwith theSO\nkey, or lookingSO\nup in the results of a call tosysconfig.get_config_vars()\nis deprecated. This key should be replaced byEXT_SUFFIX\norSHLIB_SUFFIX\n, depending on the context (bpo-19555).Any calls to\nopen\nfunctions that specifyU\nshould be modified.U\nis ineffective in Python3 and will eventually raise an error if used. Depending on the function, the equivalent of its old Python2 behavior can be achieved using either a newline argument, or if necessary by wrapping the stream inTextIOWrapper\nto use its newline argument (bpo-15204).If you use\npyvenv\nin a script and desire that pip not be installed, you must add--without-pip\nto your command invocation.The default behavior of\njson.dump()\nandjson.dumps()\nwhen an indent is specified has changed: it no longer produces trailing spaces after the item separating commas at the ends of lines. This will matter only if you have tests that are doing white-space-sensitive comparisons of such output (bpo-16333).doctest\nnow looks for doctests in extension module__doc__\nstrings, so if your doctest test discovery includes extension modules that have things that look like doctests in them you may see test failures you\u2019ve never seen before when running your tests (bpo-3158).The\ncollections.abc\nmodule has been slightly refactored as part of the Python startup improvements. As a consequence of this, it is no longer the case that importingcollections\nautomatically importscollections.abc\n. 
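The version-independent pattern is simply an explicit submodule import, rather than relying on importing collections to pull in collections.abc as a side effect:

```python
# Import the submodule explicitly; whether "import collections" also
# binds collections.abc is an implementation detail that changed in 3.4.
import collections.abc

# The ABCs are then reliably available:
print(issubclass(dict, collections.abc.Mapping))   # True
print(issubclass(list, collections.abc.Sequence))  # True
```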
If your program depended on the (undocumented) implicit import, you will need to add an explicit import collections.abc (bpo-20784).\nChanges in the C API\u00b6\nPyEval_EvalFrameEx(), PyObject_Repr(), and PyObject_Str(), along with some other internal C APIs, now include a debugging assertion that ensures they are not used in situations where they may silently discard a currently active exception. In cases where discarding the active exception is expected and desired (for example, because it has already been saved locally with PyErr_Fetch() or is being deliberately replaced with a different exception), an explicit PyErr_Clear() call will be needed to avoid triggering the assertion when invoking these operations (directly or indirectly) and running against a version of Python that is compiled with assertions enabled. PyErr_SetImportError() now sets TypeError when its msg argument is not set. Previously only NULL was returned with no exception set. The result of the PyOS_ReadlineFunctionPointer callback must now be a string allocated by PyMem_RawMalloc() or PyMem_RawRealloc(), or NULL if an error occurred, instead of a string allocated by PyMem_Malloc() or PyMem_Realloc() (bpo-16742). PyThread_set_key_value() now always sets the value.
In Python 3.3, the function did nothing if the key already exists (if the current value is a non-NULL\npointer).The\nf_tstate\n(thread state) field of thePyFrameObject\nstructure has been removed to fix a bug: see bpo-14432 for the rationale.\nChanged in 3.4.3\u00b6\nPEP 476: Enabling certificate verification by default for stdlib http clients\u00b6\nhttp.client\nand modules which use it, such as urllib.request\nand\nxmlrpc.client\n, will now verify that the server presents a certificate\nwhich is signed by a CA in the platform trust store and whose hostname matches\nthe hostname being requested by default, significantly improving security for\nmany applications.\nFor applications which require the old previous behavior, they can pass an alternate context:\nimport urllib.request\nimport ssl\n# This disables all verification\ncontext = ssl._create_unverified_context()\n# This allows using a specific certificate for the host, which doesn't need\n# to be in the trust store\ncontext = ssl.create_default_context(cafile=\"/path/to/file.crt\")\nurllib.request.urlopen(\"https://invalid-cert\", context=context)", "code_snippets": ["\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n\n", "\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n\n", " ", "\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n", "\n\n", " ", "\n", "\n File ", ", line ", ", in ", "\n", " ", " ", "\n", ": ", "\n\n", "\n\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n\n", " ", "\n", "\n File ", ", line ", ", in ", "\n", " ", " ", "\n File ", ", line ", ", in ", "\n", " ", " ", " ", "\n", ": ", "\n\n", "\n\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n", " ", " ", "\n", " ", "\n", "\n", " ", "\n", "\n", "\n", " ", " ", " ", " ", " ", " ", " ", "\n", " ", "\n", "\n", "\n", "\n", "\n", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n ", "\n ", " ", " ", " ", "\n ", " ", "\n ", " ", " ", " ", "\n", "\n", 
"\n\n", "\n", " ", " ", "\n\n", "\n", "\n", " ", " ", "\n\n", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 22351} +{"url": "https://docs.python.org/3/c-api/veryhigh.html", "title": "The Very High Level Layer", "content": "The Very High Level Layer\u00b6\nThe functions in this chapter will let you execute Python source code given in a file or a buffer, but they will not let you interact in a more detailed way with the interpreter.\nSeveral of these functions accept a start symbol from the grammar as a\nparameter. The available start symbols are Py_eval_input\n,\nPy_file_input\n, Py_single_input\n, and\nPy_func_type_input\n. These are described following the functions\nwhich accept them as parameters.\nNote also that several of these functions take FILE* parameters. One\nparticular issue which needs to be handled carefully is that the FILE\nstructure for different C libraries can be different and incompatible. Under\nWindows (at least), it is possible for dynamically linked extensions to actually\nuse different libraries, so care should be taken that FILE* parameters\nare only passed to these functions if it is certain that they were created by\nthe same library that the Python runtime is using.\n-\nint PyRun_AnyFile(FILE *fp, const char *filename)\u00b6\nThis is a simplified interface to\nPyRun_AnyFileExFlags()\nbelow, leaving closeit set to0\nand flags set toNULL\n.\n-\nint PyRun_AnyFileFlags(FILE *fp, const char *filename, PyCompilerFlags *flags)\u00b6\nThis is a simplified interface to\nPyRun_AnyFileExFlags()\nbelow, leaving the closeit argument set to0\n.\n-\nint PyRun_AnyFileEx(FILE *fp, const char *filename, int closeit)\u00b6\nThis is a simplified interface to\nPyRun_AnyFileExFlags()\nbelow, leaving the flags argument set toNULL\n.\n-\nint PyRun_AnyFileExFlags(FILE *fp, const char *filename, int closeit, PyCompilerFlags *flags)\u00b6\nIf fp refers to a file associated with an interactive device (console or terminal input or Unix 
pseudo-terminal), return the value of\nPyRun_InteractiveLoop()\n, otherwise return the result ofPyRun_SimpleFile()\n. filename is decoded from the filesystem encoding (sys.getfilesystemencoding()\n). If filename isNULL\n, this function uses\"???\"\nas the filename. If closeit is true, the file is closed beforePyRun_SimpleFileExFlags()\nreturns.\n-\nint PyRun_SimpleString(const char *command)\u00b6\nThis is a simplified interface to\nPyRun_SimpleStringFlags()\nbelow, leaving thePyCompilerFlags\n* argument set toNULL\n.\n-\nint PyRun_SimpleStringFlags(const char *command, PyCompilerFlags *flags)\u00b6\nExecutes the Python source code from command in the\n__main__\nmodule according to the flags argument. If__main__\ndoes not already exist, it is created. Returns0\non success or-1\nif an exception was raised. If there was an error, there is no way to get the exception information. For the meaning of flags, see below.Note that if an otherwise unhandled\nSystemExit\nis raised, this function will not return-1\n, but exit the process, as long asPyConfig.inspect\nis zero.\n-\nint PyRun_SimpleFile(FILE *fp, const char *filename)\u00b6\nThis is a simplified interface to\nPyRun_SimpleFileExFlags()\nbelow, leaving closeit set to0\nand flags set toNULL\n.\n-\nint PyRun_SimpleFileEx(FILE *fp, const char *filename, int closeit)\u00b6\nThis is a simplified interface to\nPyRun_SimpleFileExFlags()\nbelow, leaving flags set toNULL\n.\n-\nint PyRun_SimpleFileExFlags(FILE *fp, const char *filename, int closeit, PyCompilerFlags *flags)\u00b6\nSimilar to\nPyRun_SimpleStringFlags()\n, but the Python source code is read from fp instead of an in-memory string. filename should be the name of the file, it is decoded from filesystem encoding and error handler. If closeit is true, the file is closed beforePyRun_SimpleFileExFlags()\nreturns.Note\nOn Windows, fp should be opened as binary mode (e.g.\nfopen(filename, \"rb\")\n). 
Otherwise, Python may not handle script file with LF line ending correctly.\n-\nint PyRun_InteractiveOneObject(FILE *fp, PyObject *filename, PyCompilerFlags *flags)\u00b6\nRead and execute a single statement from a file associated with an interactive device according to the flags argument. The user will be prompted using\nsys.ps1\nandsys.ps2\n. filename must be a Pythonstr\nobject.Returns\n0\nwhen the input was executed successfully,-1\nif there was an exception, or an error code from theerrcode.h\ninclude file distributed as part of Python if there was a parse error. (Note thaterrcode.h\nis not included byPython.h\n, so must be included specifically if needed.)\n-\nint PyRun_InteractiveOne(FILE *fp, const char *filename)\u00b6\nThis is a simplified interface to\nPyRun_InteractiveOneFlags()\nbelow, leaving flags set toNULL\n.\n-\nint PyRun_InteractiveOneFlags(FILE *fp, const char *filename, PyCompilerFlags *flags)\u00b6\nSimilar to\nPyRun_InteractiveOneObject()\n, but filename is a const char*, which is decoded from the filesystem encoding and error handler.\n-\nint PyRun_InteractiveLoop(FILE *fp, const char *filename)\u00b6\nThis is a simplified interface to\nPyRun_InteractiveLoopFlags()\nbelow, leaving flags set toNULL\n.\n-\nint PyRun_InteractiveLoopFlags(FILE *fp, const char *filename, PyCompilerFlags *flags)\u00b6\nRead and execute statements from a file associated with an interactive device until EOF is reached. The user will be prompted using\nsys.ps1\nandsys.ps2\n. filename is decoded from the filesystem encoding and error handler. Returns0\nat EOF or a negative number upon failure.\n-\nint (*PyOS_InputHook)(void)\u00b6\n- Part of the Stable ABI.\nCan be set to point to a function with the prototype\nint func(void)\n. The function will be called when Python\u2019s interpreter prompt is about to become idle and wait for user input from the terminal. The return value is ignored. 
Overriding this hook can be used to integrate the interpreter\u2019s prompt with other event loops, as done in Modules/_tkinter.c in the Python source code.\nChanged in version 3.12: This function is only called from the main interpreter.\n-\nchar *(*PyOS_ReadlineFunctionPointer)(FILE*, FILE*, const char*)\u00b6\nCan be set to point to a function with the prototype\nchar *func(FILE *stdin, FILE *stdout, char *prompt)\n, overriding the default function used to read a single line of input at the interpreter\u2019s prompt. The function is expected to output the string prompt if it\u2019s not NULL, and then read a line of input from the provided standard input file, returning the resulting string. For example, the readline module sets this hook to provide line-editing and tab-completion features.\nThe result must be a string allocated by PyMem_RawMalloc() or PyMem_RawRealloc(), or NULL if an error occurred.\nChanged in version 3.4: The result must be allocated by PyMem_RawMalloc() or PyMem_RawRealloc(), instead of being allocated by PyMem_Malloc() or PyMem_Realloc().\nChanged in version 3.12: This function is only called from the main interpreter.\n-\nPyObject *PyRun_String(const char *str, int start, PyObject *globals, PyObject *locals)\u00b6\n- Return value: New reference.\nThis is a simplified interface to PyRun_StringFlags() below, leaving flags set to NULL.\n-\nPyObject *PyRun_StringFlags(const char *str, int start, PyObject *globals, PyObject *locals, PyCompilerFlags *flags)\u00b6\n- Return value: New reference.\nExecute Python source code from str in the context specified by the objects globals and locals with the compiler flags specified by flags. globals must be a dictionary; locals can be any object that implements the mapping protocol. 
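At the Python level, the globals/locals contract of PyRun_StringFlags matches the built-in eval() and exec(): globals must be a real dict, while locals may be any mapping object. A small sketch (the ChainMap here simply stands in for an arbitrary non-dict mapping):

```python
from collections import ChainMap

# globals must be a real dict; locals may be any mapping object.
g = {"x": 10}
l = ChainMap({"y": 32})

# Python-level counterpart of
# PyRun_StringFlags(str, Py_eval_input, globals, locals, flags)
result = eval("x + y", g, l)
print(result)  # 42: x comes from globals, y from the locals mapping
```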
The parameter start specifies the start symbol and must be one of the available start symbols.\nReturns the result of executing the code as a Python object, or\nNULL\nif an exception was raised.\n-\nPyObject *PyRun_File(FILE *fp, const char *filename, int start, PyObject *globals, PyObject *locals)\u00b6\n- Return value: New reference.\nThis is a simplified interface to\nPyRun_FileExFlags()\nbelow, leaving closeit set to0\nand flags set toNULL\n.\n-\nPyObject *PyRun_FileEx(FILE *fp, const char *filename, int start, PyObject *globals, PyObject *locals, int closeit)\u00b6\n- Return value: New reference.\nThis is a simplified interface to\nPyRun_FileExFlags()\nbelow, leaving flags set toNULL\n.\n-\nPyObject *PyRun_FileFlags(FILE *fp, const char *filename, int start, PyObject *globals, PyObject *locals, PyCompilerFlags *flags)\u00b6\n- Return value: New reference.\nThis is a simplified interface to\nPyRun_FileExFlags()\nbelow, leaving closeit set to0\n.\n-\nPyObject *PyRun_FileExFlags(FILE *fp, const char *filename, int start, PyObject *globals, PyObject *locals, int closeit, PyCompilerFlags *flags)\u00b6\n- Return value: New reference.\nSimilar to\nPyRun_StringFlags()\n, but the Python source code is read from fp instead of an in-memory string. filename should be the name of the file, it is decoded from the filesystem encoding and error handler. If closeit is true, the file is closed beforePyRun_FileExFlags()\nreturns.\n-\nPyObject *Py_CompileString(const char *str, const char *filename, int start)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nThis is a simplified interface to Py_CompileStringFlags() below, leaving flags set to NULL.\n-\nPyObject *Py_CompileStringFlags(const char *str, const char *filename, int start, PyCompilerFlags *flags)\u00b6\n- Return value: New reference.\nThis is a simplified interface to Py_CompileStringExFlags() below, with optimize set to -1.\n-\nPyObject *Py_CompileStringObject(const char *str, PyObject *filename, int start, PyCompilerFlags *flags, int optimize)\u00b6\n- Return value: New reference.\nParse and compile the Python source code in str, returning the resulting code object. The start symbol is given by start; this can be used to constrain the code which can be compiled, and should be one of the available start symbols. The filename specified by filename is used to construct the code object and may appear in tracebacks or SyntaxError exception messages. This returns NULL if the code cannot be parsed or compiled.\nThe integer optimize specifies the optimization level of the compiler; a value of -1 selects the optimization level of the interpreter as given by -O options. Explicit levels are 0 (no optimization; __debug__ is true), 1 (asserts are removed, __debug__ is false) or 2 (docstrings are removed too).\nAdded in version 3.4.\n-\nPyObject *Py_CompileStringExFlags(const char *str, const char *filename, int start, PyCompilerFlags *flags, int optimize)\u00b6\n- Return value: New reference.\nLike Py_CompileStringObject(), but filename is a byte string decoded from the filesystem encoding and error handler.\nAdded in version 3.2.\n-\nPyObject *PyEval_EvalCode(PyObject *co, PyObject *globals, PyObject *locals)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nThis is a simplified interface to PyEval_EvalCodeEx(), with just the code object, and global and local variables. 
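The optimize levels of Py_CompileStringObject() and the explicit evaluation environment of PyEval_EvalCode() both have direct Python-level counterparts in the built-ins compile() and exec(); a sketch:

```python
src = 'def f():\n    """docstring"""\n    assert True\n    return 1\n'

# optimize=2 strips asserts and docstrings, mirroring
# Py_CompileStringObject(..., optimize=2) in the C API.
code = compile(src, "<example>", "exec", optimize=2)

# Evaluate the code object in an explicit globals namespace,
# the Python-level analogue of PyEval_EvalCode(co, globals, locals).
globals_ns = {}
exec(code, globals_ns)

print(globals_ns["f"].__doc__)  # None: docstrings removed at level 2
print(globals_ns["f"]())        # 1
```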
The other arguments are set toNULL\n.\n-\nPyObject *PyEval_EvalCodeEx(PyObject *co, PyObject *globals, PyObject *locals, PyObject *const *args, int argcount, PyObject *const *kws, int kwcount, PyObject *const *defs, int defcount, PyObject *kwdefs, PyObject *closure)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nEvaluate a precompiled code object, given a particular environment for its evaluation. This environment consists of a dictionary of global variables, a mapping object of local variables, arrays of arguments, keywords and defaults, a dictionary of default values for keyword-only arguments and a closure tuple of cells.\n-\nPyObject *PyEval_EvalFrame(PyFrameObject *f)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nEvaluate an execution frame. This is a simplified interface to\nPyEval_EvalFrameEx()\n, for backward compatibility.\n-\nPyObject *PyEval_EvalFrameEx(PyFrameObject *f, int throwflag)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nThis is the main, unvarnished function of Python interpretation. The code object associated with the execution frame f is executed, interpreting bytecode and executing calls as needed. The additional throwflag parameter can mostly be ignored - if true, then it causes an exception to immediately be thrown; this is used for the\nthrow()\nmethods of generator objects.Changed in version 3.4: This function now includes a debug assertion to help ensure that it does not silently discard an active exception.\n-\nint PyEval_MergeCompilerFlags(PyCompilerFlags *cf)\u00b6\nThis function changes the flags of the current evaluation frame, and returns true on success, false on failure.\n-\nstruct PyCompilerFlags\u00b6\nThis is the structure used to hold compiler flags. In cases where code is only being compiled, it is passed as\nint flags\n, and in cases where code is being executed, it is passed asPyCompilerFlags *flags\n. 
In this case, from __future__ import can modify flags.\nWhenever PyCompilerFlags *flags is NULL, cf_flags is treated as equal to 0, and any modification due to from __future__ import is discarded.\n-\nint cf_flags\u00b6\nCompiler flags.\n-\nint cf_feature_version\u00b6\ncf_feature_version is the minor Python version. It should be initialized to PY_MINOR_VERSION.\nThe field is ignored by default; it is used if and only if the PyCF_ONLY_AST flag is set in cf_flags.\nChanged in version 3.8: Added cf_feature_version field.\nThe available compiler flags are accessible as macros:\n-\nPyCF_ALLOW_TOP_LEVEL_AWAIT\u00b6\n-\nPyCF_ONLY_AST\u00b6\n-\nPyCF_OPTIMIZED_AST\u00b6\n-\nPyCF_TYPE_COMMENTS\u00b6\nSee compiler flags in documentation of the ast Python module, which exports these constants under the same names.\nThe \u201cPyCF\u201d flags above can be combined with \u201cCO_FUTURE\u201d flags such as CO_FUTURE_ANNOTATIONS to enable features normally selectable using future statements. See Code Object Flags for a complete list.\nAvailable start symbols\u00b6\n-\nint Py_eval_input\u00b6\nThe start symbol from the Python grammar for isolated expressions; for use with Py_CompileString().\n-\nint Py_file_input\u00b6\nThe start symbol from the Python grammar for sequences of statements as read from a file or other source; for use with Py_CompileString(). This is the symbol to use when compiling arbitrarily long Python source code.\n-\nint Py_single_input\u00b6\nThe start symbol from the Python grammar for a single statement; for use with Py_CompileString(). This is the symbol used for the interactive interpreter loop.\n-\nint Py_func_type_input\u00b6\nThe start symbol from the Python grammar for a function type; for use with Py_CompileString(). 
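The first three start symbols correspond to the mode strings accepted by the built-in compile(): "eval", "exec", and "single" respectively. A sketch:

```python
# Py_eval_input  -> mode "eval": a single isolated expression
expr_code = compile("2 + 3", "<s>", "eval")
print(eval(expr_code))  # 5

# Py_file_input  -> mode "exec": an arbitrary sequence of statements
ns = {}
exec(compile("a = 1\nb = a + 1", "<s>", "exec"), ns)
print(ns["b"])  # 2

# Py_single_input -> mode "single": one interactive statement
exec(compile("pass", "<s>", "single"), {})
```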
This is used to parse \u201csignature type comments\u201d from PEP 484.This requires the\nPyCF_ONLY_AST\nflag to be set.See also\nAdded in version 3.8.\nStack Effects\u00b6\nSee also\n-\nPY_INVALID_STACK_EFFECT\u00b6\nSentinel value representing an invalid stack effect.\nThis is currently equivalent to\nINT_MAX\n.Added in version 3.8.\n-\nint PyCompile_OpcodeStackEffect(int opcode, int oparg)\u00b6\nCompute the stack effect of opcode with argument oparg.\nOn success, this function returns the stack effect; on failure, this returns\nPY_INVALID_STACK_EFFECT\n.Added in version 3.4.\n-\nint PyCompile_OpcodeStackEffectWithJump(int opcode, int oparg, int jump)\u00b6\nSimilar to\nPyCompile_OpcodeStackEffect()\n, but don\u2019t include the stack effect of jumping if jump is zero.If jump is\n0\n, this will not include the stack effect of jumping, but if jump is1\nor-1\n, this will include it.On success, this function returns the stack effect; on failure, this returns\nPY_INVALID_STACK_EFFECT\n.Added in version 3.8.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3717} +{"url": "https://docs.python.org/3/c-api/number.html", "title": "Number Protocol", "content": "Number Protocol\u00b6\n-\nint PyNumber_Check(PyObject *o)\u00b6\n- Part of the Stable ABI.\nReturns\n1\nif the object o provides numeric protocols, and false otherwise. This function always succeeds.Changed in version 3.8: Returns\n1\nif o is an index integer.\n-\nPyObject *PyNumber_Add(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the result of adding o1 and o2, or\nNULL\non failure. This is the equivalent of the Python expressiono1 + o2\n.\n-\nPyObject *PyNumber_Subtract(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the result of subtracting o2 from o1, or\nNULL\non failure. 
This is the equivalent of the Python expressiono1 - o2\n.\n-\nPyObject *PyNumber_Multiply(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the result of multiplying o1 and o2, or\nNULL\non failure. This is the equivalent of the Python expressiono1 * o2\n.\n-\nPyObject *PyNumber_MatrixMultiply(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nReturns the result of matrix multiplication on o1 and o2, or\nNULL\non failure. This is the equivalent of the Python expressiono1 @ o2\n.Added in version 3.5.\n-\nPyObject *PyNumber_FloorDivide(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the floor of o1 divided by o2, or\nNULL\non failure. This is the equivalent of the Python expressiono1 // o2\n.\n-\nPyObject *PyNumber_TrueDivide(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a reasonable approximation for the mathematical value of o1 divided by o2, or\nNULL\non failure. The return value is \u201capproximate\u201d because binary floating-point numbers are approximate; it is not possible to represent all real numbers in base two. This function can return a floating-point value when passed two integers. This is the equivalent of the Python expressiono1 / o2\n.\n-\nPyObject *PyNumber_Remainder(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the remainder of dividing o1 by o2, or\nNULL\non failure. This is the equivalent of the Python expressiono1 % o2\n.\n-\nPyObject *PyNumber_Divmod(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nSee the built-in function\ndivmod()\n. ReturnsNULL\non failure. This is the equivalent of the Python expressiondivmod(o1, o2)\n.\n-\nPyObject *PyNumber_Power(PyObject *o1, PyObject *o2, PyObject *o3)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nSee the built-in function\npow()\n. ReturnsNULL\non failure. This is the equivalent of the Python expressionpow(o1, o2, o3)\n, where o3 is optional. If o3 is to be ignored, passPy_None\nin its place (passingNULL\nfor o3 would cause an illegal memory access).\n-\nPyObject *PyNumber_Negative(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the negation of o on success, or\nNULL\non failure. This is the equivalent of the Python expression-o\n.\n-\nPyObject *PyNumber_Positive(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns o on success, or\nNULL\non failure. This is the equivalent of the Python expression+o\n.\n-\nPyObject *PyNumber_Absolute(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the absolute value of o, or\nNULL\non failure. This is the equivalent of the Python expressionabs(o)\n.\n-\nPyObject *PyNumber_Invert(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the bitwise negation of o on success, or\nNULL\non failure. This is the equivalent of the Python expression~o\n.\n-\nPyObject *PyNumber_Lshift(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the result of left shifting o1 by o2 on success, or\nNULL\non failure. This is the equivalent of the Python expressiono1 << o2\n.\n-\nPyObject *PyNumber_Rshift(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the result of right shifting o1 by o2 on success, or\nNULL\non failure. This is the equivalent of the Python expressiono1 >> o2\n.\n-\nPyObject *PyNumber_And(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the \u201cbitwise and\u201d of o1 and o2 on success and\nNULL\non failure. 
This is the equivalent of the Python expressiono1 & o2\n.\n-\nPyObject *PyNumber_Xor(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the \u201cbitwise exclusive or\u201d of o1 by o2 on success, or\nNULL\non failure. This is the equivalent of the Python expressiono1 ^ o2\n.\n-\nPyObject *PyNumber_Or(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the \u201cbitwise or\u201d of o1 and o2 on success, or\nNULL\non failure. This is the equivalent of the Python expressiono1 | o2\n.\n-\nPyObject *PyNumber_InPlaceAdd(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the result of adding o1 and o2, or\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 += o2\n.\n-\nPyObject *PyNumber_InPlaceSubtract(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the result of subtracting o2 from o1, or\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 -= o2\n.\n-\nPyObject *PyNumber_InPlaceMultiply(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the result of multiplying o1 and o2, or\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 *= o2\n.\n-\nPyObject *PyNumber_InPlaceMatrixMultiply(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nReturns the result of matrix multiplication on o1 and o2, or\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 @= o2\n.Added in version 3.5.\n-\nPyObject *PyNumber_InPlaceFloorDivide(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nReturns the mathematical floor of dividing o1 by o2, or\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 //= o2\n.\n-\nPyObject *PyNumber_InPlaceTrueDivide(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a reasonable approximation for the mathematical value of o1 divided by o2, or\nNULL\non failure. The return value is \u201capproximate\u201d because binary floating-point numbers are approximate; it is not possible to represent all real numbers in base two. This function can return a floating-point value when passed two integers. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 /= o2\n.\n-\nPyObject *PyNumber_InPlaceRemainder(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the remainder of dividing o1 by o2, or\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 %= o2\n.\n-\nPyObject *PyNumber_InPlacePower(PyObject *o1, PyObject *o2, PyObject *o3)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nSee the built-in function\npow()\n. ReturnsNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 **= o2\nwhen o3 isPy_None\n, or an in-place variant ofpow(o1, o2, o3)\notherwise. If o3 is to be ignored, passPy_None\nin its place (passingNULL\nfor o3 would cause an illegal memory access).\n-\nPyObject *PyNumber_InPlaceLshift(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the result of left shifting o1 by o2 on success, or\nNULL\non failure. The operation is done in-place when o1 supports it. 
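These in-place functions mirror Python's augmented assignment, and the operator module exposes the same entry points at the Python level. A sketch of the mutate-when-supported behavior: a list goes through its in-place slot and the same object comes back, while an immutable int falls back to producing a new object.

```python
import operator

# Mutable object: the in-place slot is used, so the same object
# is returned, already mutated.
a = [1, 2]
b = operator.iadd(a, [3])
print(b is a, a)  # True [1, 2, 3]

# Immutable object: falls back to the ordinary operation,
# returning a new object.
x = 5
y = operator.iadd(x, 1)
print(y, y is x)  # 6 False
```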
This is the equivalent of the Python statemento1 <<= o2\n.\n-\nPyObject *PyNumber_InPlaceRshift(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the result of right shifting o1 by o2 on success, or\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 >>= o2\n.\n-\nPyObject *PyNumber_InPlaceAnd(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the \u201cbitwise and\u201d of o1 and o2 on success and\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 &= o2\n.\n-\nPyObject *PyNumber_InPlaceXor(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the \u201cbitwise exclusive or\u201d of o1 by o2 on success, or\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 ^= o2\n.\n-\nPyObject *PyNumber_InPlaceOr(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the \u201cbitwise or\u201d of o1 and o2 on success, or\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 |= o2\n.\n-\nPyObject *PyNumber_Long(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the o converted to an integer object on success, or\nNULL\non failure. This is the equivalent of the Python expressionint(o)\n.\n-\nPyObject *PyNumber_Float(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the o converted to a float object on success, or\nNULL\non failure. This is the equivalent of the Python expressionfloat(o)\n.\n-\nPyObject *PyNumber_Index(PyObject *o)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nReturns the o converted to a Python int on success or\nNULL\nwith aTypeError\nexception raised on failure.Changed in version 3.10: The result always has exact type\nint\n. Previously, the result could have been an instance of a subclass ofint\n.\n-\nPyObject *PyNumber_ToBase(PyObject *n, int base)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the integer n converted to base base as a string. The base argument must be one of 2, 8, 10, or 16. For base 2, 8, or 16, the returned string is prefixed with a base marker of\n'0b'\n,'0o'\n, or'0x'\n, respectively. If n is not a Python int, it is converted withPyNumber_Index()\nfirst.\n-\nPy_ssize_t PyNumber_AsSsize_t(PyObject *o, PyObject *exc)\u00b6\n- Part of the Stable ABI.\nReturns o converted to a\nPy_ssize_t\nvalue if o can be interpreted as an integer. If the call fails, an exception is raised and-1\nis returned.If o can be converted to a Python int but the attempt to convert to a\nPy_ssize_t\nvalue would raise anOverflowError\n, then the exc argument is the type of exception that will be raised (usuallyIndexError\norOverflowError\n). If exc isNULL\n, then the exception is cleared and the value is clipped toPY_SSIZE_T_MIN\nfor a negative integer orPY_SSIZE_T_MAX\nfor a positive integer.\n-\nint PyIndex_Check(PyObject *o)\u00b6\n- Part of the Stable ABI since version 3.8.\nReturns\n1\nif o is an index integer (has thenb_index\nslot of thetp_as_number\nstructure filled in), and0\notherwise. 
This function always succeeds.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2811} +{"url": "https://docs.python.org/3/library/asyncio-sync.html", "title": "Synchronization Primitives", "content": "Synchronization Primitives\u00b6\nSource code: Lib/asyncio/locks.py\nasyncio synchronization primitives are designed to be similar to\nthose of the threading\nmodule with two important caveats:\nasyncio primitives are not thread-safe, therefore they should not be used for OS thread synchronization (use\nthreading\nfor that);methods of these synchronization primitives do not accept the timeout argument; use the\nasyncio.wait_for()\nfunction to perform operations with timeouts.\nasyncio has the following basic synchronization primitives:\nLock\u00b6\n- class asyncio.Lock\u00b6\nImplements a mutex lock for asyncio tasks. Not thread-safe.\nAn asyncio lock can be used to guarantee exclusive access to a shared resource.\nThe preferred way to use a Lock is an\nasync with\nstatement:lock = asyncio.Lock() # ... later async with lock: # access shared state\nwhich is equivalent to:\nlock = asyncio.Lock() # ... later await lock.acquire() try: # access shared state finally: lock.release()\nChanged in version 3.10: Removed the loop parameter.\n- async acquire()\u00b6\nAcquire the lock.\nThis method waits until the lock is unlocked, sets it to locked and returns\nTrue\n.When more than one coroutine is blocked in\nacquire()\nwaiting for the lock to be unlocked, only one coroutine eventually proceeds.Acquiring a lock is fair: the coroutine that proceeds will be the first coroutine that started waiting on the lock.\n- release()\u00b6\nRelease the lock.\nWhen the lock is locked, reset it to unlocked and return.\nIf the lock is unlocked, a\nRuntimeError\nis raised.\n- locked()\u00b6\nReturn\nTrue\nif the lock is locked.\nEvent\u00b6\n- class asyncio.Event\u00b6\nAn event object. 
Not thread-safe.\nAn asyncio event can be used to notify multiple asyncio tasks that some event has happened.\nAn Event object manages an internal flag that can be set to true with the\nset()\nmethod and reset to false with theclear()\nmethod. Thewait()\nmethod blocks until the flag is set to true. The flag is set to false initially.Changed in version 3.10: Removed the loop parameter.\nExample:\nasync def waiter(event): print('waiting for it ...') await event.wait() print('... got it!') async def main(): # Create an Event object. event = asyncio.Event() # Spawn a Task to wait until 'event' is set. waiter_task = asyncio.create_task(waiter(event)) # Sleep for 1 second and set the event. await asyncio.sleep(1) event.set() # Wait until the waiter task is finished. await waiter_task asyncio.run(main())\n- async wait()\u00b6\nWait until the event is set.\nIf the event is set, return\nTrue\nimmediately. Otherwise block until another task callsset()\n.\n- set()\u00b6\nSet the event.\nAll tasks waiting for event to be set will be immediately awakened.\n- clear()\u00b6\nClear (unset) the event.\nSubsequent tasks awaiting on\nwait()\nwill now block until theset()\nmethod is called again.\n- is_set()\u00b6\nReturn\nTrue\nif the event is set.\nCondition\u00b6\n- class asyncio.Condition(lock=None)\u00b6\nA Condition object. Not thread-safe.\nAn asyncio condition primitive can be used by a task to wait for some event to happen and then get exclusive access to a shared resource.\nIn essence, a Condition object combines the functionality of an\nEvent\nand aLock\n. It is possible to have multiple Condition objects share one Lock, which allows coordinating exclusive access to a shared resource between different tasks interested in particular states of that shared resource.The optional lock argument must be a\nLock\nobject orNone\n. 
In the latter case a new Lock object is created automatically.Changed in version 3.10: Removed the loop parameter.\nThe preferred way to use a Condition is an\nasync with\nstatement:cond = asyncio.Condition() # ... later async with cond: await cond.wait()\nwhich is equivalent to:\ncond = asyncio.Condition() # ... later await cond.acquire() try: await cond.wait() finally: cond.release()\n- async acquire()\u00b6\nAcquire the underlying lock.\nThis method waits until the underlying lock is unlocked, sets it to locked and returns\nTrue\n.\n- notify(n=1)\u00b6\nWake up n tasks (1 by default) waiting on this condition. If fewer than n tasks are waiting they are all awakened.\nThe lock must be acquired before this method is called and released shortly after. If called with an unlocked lock a\nRuntimeError\nerror is raised.\n- locked()\u00b6\nReturn\nTrue\nif the underlying lock is acquired.\n- notify_all()\u00b6\nWake up all tasks waiting on this condition.\nThis method acts like\nnotify()\n, but wakes up all waiting tasks.The lock must be acquired before this method is called and released shortly after. If called with an unlocked lock a\nRuntimeError\nerror is raised.\n- release()\u00b6\nRelease the underlying lock.\nWhen invoked on an unlocked lock, a\nRuntimeError\nis raised.\n- async wait()\u00b6\nWait until notified.\nIf the calling task has not acquired the lock when this method is called, a\nRuntimeError\nis raised.This method releases the underlying lock, and then blocks until it is awakened by a\nnotify()\nornotify_all()\ncall. Once awakened, the Condition re-acquires its lock and this method returnsTrue\n.Note that a task may return from this call spuriously, which is why the caller should always re-check the state and be prepared to\nwait()\nagain. For this reason, you may prefer to usewait_for()\ninstead.\nSemaphore\u00b6\n- class asyncio.Semaphore(value=1)\u00b6\nA Semaphore object. 
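The advice above to re-check state after wait() returns is exactly what Condition.wait_for() packages up; a minimal producer/consumer sketch (the task structure and the "job" payload are illustrative):

```python
import asyncio

async def main():
    cond = asyncio.Condition()
    items = []

    async def consumer():
        async with cond:
            # wait_for re-checks the predicate after every wakeup,
            # guarding against spurious returns from wait().
            await cond.wait_for(lambda: items)
            return items.pop()

    task = asyncio.create_task(consumer())
    await asyncio.sleep(0)      # let the consumer start waiting

    async with cond:            # the lock must be held to notify
        items.append("job")
        cond.notify()

    return await task

print(asyncio.run(main()))  # job
```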
Not thread-safe.\nA semaphore manages an internal counter which is decremented by each\nacquire()\ncall and incremented by eachrelease()\ncall. The counter can never go below zero; whenacquire()\nfinds that it is zero, it blocks, waiting until some task callsrelease()\n.The optional value argument gives the initial value for the internal counter (\n1\nby default). If the given value is less than0\naValueError\nis raised.Changed in version 3.10: Removed the loop parameter.\nThe preferred way to use a Semaphore is an\nasync with\nstatement:sem = asyncio.Semaphore(10) # ... later async with sem: # work with shared resource\nwhich is equivalent to:\nsem = asyncio.Semaphore(10) # ... later await sem.acquire() try: # work with shared resource finally: sem.release()\n- async acquire()\u00b6\nAcquire a semaphore.\nIf the internal counter is greater than zero, decrement it by one and return\nTrue\nimmediately. If it is zero, wait until arelease()\nis called and returnTrue\n.\n- locked()\u00b6\nReturns\nTrue\nif semaphore can not be acquired immediately.\n- release()\u00b6\nRelease a semaphore, incrementing the internal counter by one. Can wake up a task waiting to acquire the semaphore.\nUnlike\nBoundedSemaphore\n,Semaphore\nallows making morerelease()\ncalls thanacquire()\ncalls.\nBoundedSemaphore\u00b6\n- class asyncio.BoundedSemaphore(value=1)\u00b6\nA bounded semaphore object. Not thread-safe.\nBounded Semaphore is a version of\nSemaphore\nthat raises aValueError\ninrelease()\nif it increases the internal counter above the initial value.Changed in version 3.10: Removed the loop parameter.\nBarrier\u00b6\n- class asyncio.Barrier(parties)\u00b6\nA barrier object. Not thread-safe.\nA barrier is a simple synchronization primitive that allows to block until parties number of tasks are waiting on it. Tasks can wait on the\nwait()\nmethod and would be blocked until the specified number of tasks end up waiting onwait()\n. 
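A typical use of a Semaphore is to cap how many tasks touch a resource at once; this sketch records the peak concurrency under a limit of 2 (the limit and the worker count are arbitrary):

```python
import asyncio

async def main():
    sem = asyncio.Semaphore(2)   # at most two workers inside at a time
    running = peak = 0

    async def worker():
        nonlocal running, peak
        async with sem:
            running += 1
            peak = max(peak, running)
            await asyncio.sleep(0.01)   # simulate work
            running -= 1

    await asyncio.gather(*(worker() for _ in range(5)))
    return peak

print(asyncio.run(main()))  # 2
```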
At that point all of the waiting tasks would unblock simultaneously.async with\ncan be used as an alternative to awaiting onwait()\n.The barrier can be reused any number of times.\nExample:\nasync def example_barrier(): # barrier with 3 parties b = asyncio.Barrier(3) # create 2 new waiting tasks asyncio.create_task(b.wait()) asyncio.create_task(b.wait()) await asyncio.sleep(0) print(b) # The third .wait() call passes the barrier await b.wait() print(b) print(\"barrier passed\") await asyncio.sleep(0) print(b) asyncio.run(example_barrier())\nResult of this example is:\n barrier passed \nAdded in version 3.11.\n- async wait()\u00b6\nPass the barrier. When all the tasks party to the barrier have called this function, they are all unblocked simultaneously.\nWhen a waiting or blocked task in the barrier is cancelled, this task exits the barrier which stays in the same state. If the state of the barrier is \u201cfilling\u201d, the number of waiting task decreases by 1.\nThe return value is an integer in the range of 0 to\nparties-1\n, different for each task. This can be used to select a task to do some special housekeeping, e.g.:... async with barrier as position: if position == 0: # Only one task prints this print('End of *draining phase*')\nThis method may raise a\nBrokenBarrierError\nexception if the barrier is broken or reset while a task is waiting. It could raise aCancelledError\nif a task is cancelled.\n- async reset()\u00b6\nReturn the barrier to the default, empty state. Any tasks waiting on it will receive the\nBrokenBarrierError\nexception.If a barrier is broken it may be better to just leave it and create a new one.\n- async abort()\u00b6\nPut the barrier into a broken state. This causes any active or future calls to\nwait()\nto fail with theBrokenBarrierError\n. 
Use this for example if one of the tasks needs to abort, to avoid infinite waiting tasks.\n- parties\u00b6\nThe number of tasks required to pass the barrier.\n- n_waiting\u00b6\nThe number of tasks currently waiting in the barrier while filling.\n- broken\u00b6\nA boolean that is\nTrue\nif the barrier is in the broken state.\n- exception asyncio.BrokenBarrierError\u00b6\nThis exception, a subclass of\nRuntimeError\n, is raised when theBarrier\nobject is reset or broken.\nChanged in version 3.9: Acquiring a lock using await lock\nor yield from lock\nand/or\nwith\nstatement (with await lock\n, with (yield from\nlock)\n) was removed. Use async with lock\ninstead.", "code_snippets": [" ", " ", "\n\n", "\n", " ", " ", "\n ", "\n", " ", " ", "\n\n", "\n", " ", "\n", "\n ", "\n", "\n ", "\n", " ", "\n ", "\n ", " ", "\n ", "\n\n", " ", "\n ", "\n ", " ", " ", "\n\n ", "\n ", " ", " ", "\n\n ", "\n ", " ", "\n ", "\n\n ", "\n ", " ", "\n\n", "\n", " ", " ", "\n\n", "\n", " ", " ", "\n ", " ", "\n", " ", " ", "\n\n", "\n", " ", "\n", "\n ", " ", "\n", "\n ", "\n", " ", " ", "\n\n", "\n", " ", " ", "\n ", "\n", " ", " ", "\n\n", "\n", " ", "\n", "\n ", "\n", "\n ", "\n", " ", "\n ", "\n ", " ", " ", "\n\n ", "\n ", "\n ", "\n\n ", " ", "\n ", "\n\n ", "\n ", " ", "\n ", "\n ", "\n\n ", " ", "\n ", "\n\n", "\n", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", " ", "\n", " ", " ", " ", " ", " ", "\n", "\n", " ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", "\n ", "\n"], "language": "Python", "source": "python.org", "token_count": 2410} +{"url": "https://docs.python.org/3/library/email.compat32-message.html", "title": ": Representing an email message using the ", "content": "email.message.Message\n: Representing an email message using the compat32\nAPI\u00b6\nThe Message\nclass is very similar to the\nEmailMessage\nclass, without the methods added by that\nclass, and with the default behavior of certain other methods being slightly\ndifferent. 
We also document here some methods that, while supported by the\nEmailMessage\nclass, are not recommended unless you are\ndealing with legacy code.\nThe philosophy and structure of the two classes is otherwise the same.\nThis document describes the behavior under the default (for Message\n)\npolicy Compat32\n. If you are going to use another policy,\nyou should be using the EmailMessage\nclass instead.\nAn email message consists of headers and a payload. Headers must be RFC 5322 style names and values, where the field name and value are separated by a colon. The colon is not part of either the field name or the field value. The payload may be a simple text message, or a binary object, or a structured sequence of sub-messages each with their own set of headers and their own payload. The latter type of payload is indicated by the message having a MIME type such as multipart/* or message/rfc822.\nThe conceptual model provided by a Message\nobject is that of an\nordered dictionary of headers with additional methods for accessing both\nspecialized information from the headers, for accessing the payload, for\ngenerating a serialized version of the message, and for recursively walking\nover the object tree. Note that duplicate headers are supported but special\nmethods must be used to access them.\nThe Message\npseudo-dictionary is indexed by the header names, which\nmust be ASCII values. The values of the dictionary are strings that are\nsupposed to contain only ASCII characters; there is some special handling for\nnon-ASCII input, but it doesn\u2019t always produce the correct results. Headers\nare stored and returned in case-preserving form, but field names are matched\ncase-insensitively. There may also be a single envelope header, also known as\nthe Unix-From header or the From_\nheader. 
The payload is either a\nstring or bytes, in the case of simple message objects, or a list of\nMessage\nobjects, for MIME container documents (e.g.\nmultipart/* and message/rfc822).\nHere are the methods of the Message\nclass:\n- class email.message.Message(policy=compat32)\u00b6\nIf policy is specified (it must be an instance of a\npolicy\nclass) use the rules it specifies to update and serialize the representation of the message. If policy is not set, use thecompat32\npolicy, which maintains backward compatibility with the Python 3.2 version of the email package. For more information see thepolicy\ndocumentation.Changed in version 3.3: The policy keyword argument was added.\n- as_string(unixfrom=False, maxheaderlen=0, policy=None)\u00b6\nReturn the entire message flattened as a string. When optional unixfrom is true, the envelope header is included in the returned string. unixfrom defaults to\nFalse\n. For backward compatibility reasons, maxheaderlen defaults to0\n, so if you want a different value you must override it explicitly (the value specified for max_line_length in the policy will be ignored by this method). The policy argument may be used to override the default policy obtained from the message instance. This can be used to control some of the formatting produced by the method, since the specified policy will be passed to theGenerator\n.Flattening the message may trigger changes to the\nMessage\nif defaults need to be filled in to complete the transformation to a string (for example, MIME boundaries may be generated or modified).Note that this method is provided as a convenience and may not always format the message the way you want. For example, by default it does not do the mangling of lines that begin with\nFrom\nthat is required by the Unix mbox format. For more flexibility, instantiate aGenerator\ninstance and use itsflatten()\nmethod directly. 
For example:from io import StringIO from email.generator import Generator fp = StringIO() g = Generator(fp, mangle_from_=True, maxheaderlen=60) g.flatten(msg) text = fp.getvalue()\nIf the message object contains binary data that is not encoded according to RFC standards, the non-compliant data will be replaced by unicode \u201cunknown character\u201d code points. (See also\nas_bytes()\nandBytesGenerator\n.)Changed in version 3.4: the policy keyword argument was added.\n- __str__()\u00b6\nEquivalent to\nas_string()\n. Allowsstr(msg)\nto produce a string containing the formatted message.\n- as_bytes(unixfrom=False, policy=None)\u00b6\nReturn the entire message flattened as a bytes object. When optional unixfrom is true, the envelope header is included in the returned string. unixfrom defaults to\nFalse\n. The policy argument may be used to override the default policy obtained from the message instance. This can be used to control some of the formatting produced by the method, since the specified policy will be passed to theBytesGenerator\n.Flattening the message may trigger changes to the\nMessage\nif defaults need to be filled in to complete the transformation to a string (for example, MIME boundaries may be generated or modified).Note that this method is provided as a convenience and may not always format the message the way you want. For example, by default it does not do the mangling of lines that begin with\nFrom\nthat is required by the Unix mbox format. For more flexibility, instantiate aBytesGenerator\ninstance and use itsflatten()\nmethod directly. For example:from io import BytesIO from email.generator import BytesGenerator fp = BytesIO() g = BytesGenerator(fp, mangle_from_=True, maxheaderlen=60) g.flatten(msg) text = fp.getvalue()\nAdded in version 3.4.\n- __bytes__()\u00b6\nEquivalent to\nas_bytes()\n. 
Allowsbytes(msg)\nto produce a bytes object containing the formatted message.Added in version 3.4.\n- is_multipart()\u00b6\nReturn\nTrue\nif the message\u2019s payload is a list of sub-Message\nobjects, otherwise returnFalse\n. Whenis_multipart()\nreturnsFalse\n, the payload should be a string object (which might be a CTE encoded binary payload). (Note thatis_multipart()\nreturningTrue\ndoes not necessarily mean that \u201cmsg.get_content_maintype() == \u2018multipart\u2019\u201d will returnTrue\n. For example,is_multipart\nwill returnTrue\nwhen theMessage\nis of typemessage/rfc822\n.)\n- set_unixfrom(unixfrom)\u00b6\nSet the message\u2019s envelope header to unixfrom, which should be a string.\n- get_unixfrom()\u00b6\nReturn the message\u2019s envelope header. Defaults to\nNone\nif the envelope header was never set.\n- attach(payload)\u00b6\nAdd the given payload to the current payload, which must be\nNone\nor a list ofMessage\nobjects before the call. After the call, the payload will always be a list ofMessage\nobjects. If you want to set the payload to a scalar object (e.g. a string), useset_payload()\ninstead.This is a legacy method. On the\nEmailMessage\nclass its functionality is replaced byset_content()\nand the relatedmake\nandadd\nmethods.\n- get_payload(i=None, decode=False)\u00b6\nReturn the current payload, which will be a list of\nMessage\nobjects whenis_multipart()\nisTrue\n, or a string whenis_multipart()\nisFalse\n. If the payload is a list and you mutate the list object, you modify the message\u2019s payload in place.With optional argument i,\nget_payload()\nwill return the i-th element of the payload, counting from zero, ifis_multipart()\nisTrue\n. AnIndexError\nwill be raised if i is less than 0 or greater than or equal to the number of items in the payload. 
If the payload is a string (i.e.is_multipart()\nisFalse\n) and i is given, aTypeError\nis raised.Optional decode is a flag indicating whether the payload should be decoded or not, according to the Content-Transfer-Encoding header. When\nTrue\nand the message is not a multipart, the payload will be decoded if this header\u2019s value isquoted-printable\norbase64\n. If some other encoding is used, or Content-Transfer-Encoding header is missing, the payload is returned as-is (undecoded). In all cases the returned value is binary data. If the message is a multipart and the decode flag isTrue\n, thenNone\nis returned. If the payload is base64 and it was not perfectly formed (missing padding, characters outside the base64 alphabet), then an appropriate defect will be added to the message\u2019s defect property (InvalidBase64PaddingDefect\norInvalidBase64CharactersDefect\n, respectively).When decode is\nFalse\n(the default) the body is returned as a string without decoding the Content-Transfer-Encoding. However, for a Content-Transfer-Encoding of 8bit, an attempt is made to decode the original bytes using thecharset\nspecified by the Content-Type header, using thereplace\nerror handler. If nocharset\nis specified, or if thecharset\ngiven is not recognized by the email package, the body is decoded using the default ASCII charset.This is a legacy method. On the\nEmailMessage\nclass its functionality is replaced byget_content()\nanditer_parts()\n.\n- set_payload(payload, charset=None)\u00b6\nSet the entire message object\u2019s payload to payload. It is the client\u2019s responsibility to ensure the payload invariants. Optional charset sets the message\u2019s default character set; see\nset_charset()\nfor details.This is a legacy method. 
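The effect of the decode flag can be seen with a small base64-encoded message. A sketch: the sample message below is made up for illustration.

```python
from email.parser import Parser

raw = (
    "Content-Type: text/plain; charset=utf-8\n"
    "Content-Transfer-Encoding: base64\n"
    "\n"
    "aGVsbG8gd29ybGQ=\n"
)
msg = Parser().parsestr(raw)

print(msg.get_payload())             # the raw base64 string, undecoded
print(msg.get_payload(decode=True))  # b'hello world'
```

With decode=False (the default) the Content-Transfer-Encoding is left alone; with decode=True the result is always binary data.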
On the\nEmailMessage\nclass its functionality is replaced byset_content()\n.\n- set_charset(charset)\u00b6\nSet the character set of the payload to charset, which can either be a\nCharset\ninstance (seeemail.charset\n), a string naming a character set, orNone\n. If it is a string, it will be converted to aCharset\ninstance. If charset isNone\n, thecharset\nparameter will be removed from the Content-Type header (the message will not be otherwise modified). Anything else will generate aTypeError\n.If there is no existing MIME-Version header one will be added. If there is no existing Content-Type header, one will be added with a value of text/plain. Whether the Content-Type header already exists or not, its\ncharset\nparameter will be set to charset.output_charset. If charset.input_charset and charset.output_charset differ, the payload will be re-encoded to the output_charset. If there is no existing Content-Transfer-Encoding header, then the payload will be transfer-encoded, if needed, using the specifiedCharset\n, and a header with the appropriate value will be added. If a Content-Transfer-Encoding header already exists, the payload is assumed to already be correctly encoded using that Content-Transfer-Encoding and is not modified.This is a legacy method. On the\nEmailMessage\nclass its functionality is replaced by the charset parameter of theemail.message.EmailMessage.set_content()\nmethod.\n- get_charset()\u00b6\nReturn the\nCharset\ninstance associated with the message\u2019s payload.This is a legacy method. On the\nEmailMessage\nclass it always returnsNone\n.\nThe following methods implement a mapping-like interface for accessing the message\u2019s RFC 2822 headers. Note that there are some semantic differences between these methods and a normal mapping (i.e. dictionary) interface. For example, in a dictionary there are no duplicate keys, but here there may be duplicate message headers. 
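The header-filling behaviour of set_charset() can be sketched like this; the us-ascii payload is an illustrative choice (us-ascii needs no transfer re-encoding, so only headers change).

```python
from email.message import Message

msg = Message()
msg.set_payload('hello')
msg.set_charset('us-ascii')   # a string is converted to a Charset instance

print(msg['MIME-Version'])               # added because it was missing
print(msg['Content-Type'])               # text/plain with a charset parameter
print(msg['Content-Transfer-Encoding'])  # 7bit for us-ascii
```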
Also, in dictionaries there is no guaranteed order to the keys returned by\nkeys()\n, but in aMessage\nobject, headers are always returned in the order they appeared in the original message, or were added to the message later. Any header deleted and then re-added are always appended to the end of the header list.These semantic differences are intentional and are biased toward maximal convenience.\nNote that in all cases, any envelope header present in the message is not included in the mapping interface.\nIn a model generated from bytes, any header values that (in contravention of the RFCs) contain non-ASCII bytes will, when retrieved through this interface, be represented as\nHeader\nobjects with a charset ofunknown-8bit\n.- __len__()\u00b6\nReturn the total number of headers, including duplicates.\n- __contains__(name)\u00b6\nReturn\nTrue\nif the message object has a field named name. Matching is done case-insensitively and name should not include the trailing colon. Used for thein\noperator, e.g.:if 'message-id' in myMessage: print('Message-ID:', myMessage['message-id'])\n- __getitem__(name)\u00b6\nReturn the value of the named header field. name should not include the colon field separator. If the header is missing,\nNone\nis returned; aKeyError\nis never raised.Note that if the named field appears more than once in the message\u2019s headers, exactly which of those field values will be returned is undefined. Use the\nget_all()\nmethod to get the values of all the extant named headers.\n- __setitem__(name, val)\u00b6\nAdd a header to the message with field name name and value val. The field is appended to the end of the message\u2019s existing fields.\nNote that this does not overwrite or delete any existing header with the same name. 
If you want to ensure that the new header is the only one present in the message with field name name, delete the field first, e.g.:\ndel msg['subject'] msg['subject'] = 'Python roolz!'\n- __delitem__(name)\u00b6\nDelete all occurrences of the field with name name from the message\u2019s headers. No exception is raised if the named field isn\u2019t present in the headers.\n- keys()\u00b6\nReturn a list of all the message\u2019s header field names.\n- values()\u00b6\nReturn a list of all the message\u2019s field values.\n- items()\u00b6\nReturn a list of 2-tuples containing all the message\u2019s field headers and values.\n- get(name, failobj=None)\u00b6\nReturn the value of the named header field. This is identical to\n__getitem__()\nexcept that optional failobj is returned if the named header is missing (defaults toNone\n).\nHere are some additional useful methods:\n- get_all(name, failobj=None)\u00b6\nReturn a list of all the values for the field named name. If there are no such named headers in the message, failobj is returned (defaults to\nNone\n).\n- add_header(_name, _value, **_params)\u00b6\nExtended header setting. This method is similar to\n__setitem__()\nexcept that additional header parameters can be provided as keyword arguments. _name is the header field to add and _value is the primary value for the header.For each item in the keyword argument dictionary _params, the key is taken as the parameter name, with underscores converted to dashes (since dashes are illegal in Python identifiers). Normally, the parameter will be added as\nkey=\"value\"\nunless the value isNone\n, in which case only the key will be added. 
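The mapping-interface quirks just described (duplicates are kept, matching is case-insensitive, deletion removes every occurrence) can be sketched in a few lines; the header names are made up.

```python
from email.message import Message

msg = Message()
msg['Subject'] = 'Hello'
msg['X-Tag'] = 'one'
msg['X-Tag'] = 'two'              # __setitem__ appends, never overwrites

n_headers = len(msg)              # duplicates are counted
has_tag = 'x-tag' in msg          # matching is case-insensitive
tags = msg.get_all('X-Tag')       # both values, in insertion order

del msg['X-Tag']                  # removes *all* X-Tag occurrences
print(n_headers, has_tag, tags, msg.keys())
```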
If the value contains non-ASCII characters, it can be specified as a three tuple in the format(CHARSET, LANGUAGE, VALUE)\n, whereCHARSET\nis a string naming the charset to be used to encode the value,LANGUAGE\ncan usually be set toNone\nor the empty string (see RFC 2231 for other possibilities), andVALUE\nis the string value containing non-ASCII code points. If a three tuple is not passed and the value contains non-ASCII characters, it is automatically encoded in RFC 2231 format using aCHARSET\nofutf-8\nand aLANGUAGE\nofNone\n.Here\u2019s an example:\nmsg.add_header('Content-Disposition', 'attachment', filename='bud.gif')\nThis will add a header that looks like\nContent-Disposition: attachment; filename=\"bud.gif\"\nAn example with non-ASCII characters:\nmsg.add_header('Content-Disposition', 'attachment', filename=('iso-8859-1', '', 'Fu\u00dfballer.ppt'))\nWhich produces\nContent-Disposition: attachment; filename*=\"iso-8859-1''Fu%DFballer.ppt\"\n- replace_header(_name, _value)\u00b6\nReplace a header. Replace the first header found in the message that matches _name, retaining header order and field name case. If no matching header was found, a\nKeyError\nis raised.\n- get_content_type()\u00b6\nReturn the message\u2019s content type. The returned string is coerced to lower case of the form maintype/subtype. If there was no Content-Type header in the message the default type as given by\nget_default_type()\nwill be returned. Since according to RFC 2045, messages always have a default type,get_content_type()\nwill always return a value.RFC 2045 defines a message\u2019s default type to be text/plain unless it appears inside a multipart/digest container, in which case it would be message/rfc822. If the Content-Type header has an invalid type specification, RFC 2045 mandates that the default type be text/plain.\n- get_content_maintype()\u00b6\nReturn the message\u2019s main content type. 
This is the maintype part of the string returned by\nget_content_type()\n.\n- get_content_subtype()\u00b6\nReturn the message\u2019s sub-content type. This is the subtype part of the string returned by\nget_content_type()\n.\n- get_default_type()\u00b6\nReturn the default content type. Most messages have a default content type of text/plain, except for messages that are subparts of multipart/digest containers. Such subparts have a default content type of message/rfc822.\n- set_default_type(ctype)\u00b6\nSet the default content type. ctype should either be text/plain or message/rfc822, although this is not enforced. The default content type is not stored in the Content-Type header.\n- get_params(failobj=None, header='content-type', unquote=True)\u00b6\nReturn the message\u2019s Content-Type parameters, as a list. The elements of the returned list are 2-tuples of key/value pairs, as split on the\n'='\nsign. The left hand side of the'='\nis the key, while the right hand side is the value. If there is no'='\nsign in the parameter the value is the empty string, otherwise the value is as described inget_param()\nand is unquoted if optional unquote isTrue\n(the default).Optional failobj is the object to return if there is no Content-Type header. Optional header is the header to search instead of Content-Type.\nThis is a legacy method. On the\nEmailMessage\nclass its functionality is replaced by the params property of the individual header objects returned by the header access methods.\n- get_param(param, failobj=None, header='content-type', unquote=True)\u00b6\nReturn the value of the Content-Type header\u2019s parameter param as a string. If the message has no Content-Type header or if there is no such parameter, then failobj is returned (defaults to\nNone\n).Optional header if given, specifies the message header to use instead of Content-Type.\nParameter keys are always compared case insensitively. 
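A short sketch of the three content-type accessors, including the RFC 2045 default when no Content-Type header is present (the sample header value is illustrative):

```python
from email.message import Message

msg = Message()
default = msg.get_content_type()   # no header, so the default type applies

msg['Content-Type'] = 'Multipart/Mixed; boundary="sep"'
print(default)                     # text/plain
print(msg.get_content_type())      # coerced to lower case
print(msg.get_content_maintype())  # part before the slash
print(msg.get_content_subtype())   # part after the slash
```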
The return value can either be a string, or a 3-tuple if the parameter was RFC 2231 encoded. When it\u2019s a 3-tuple, the elements of the value are of the form\n(CHARSET, LANGUAGE, VALUE)\n. Note that bothCHARSET\nandLANGUAGE\ncan beNone\n, in which case you should considerVALUE\nto be encoded in theus-ascii\ncharset. You can usually ignoreLANGUAGE\n.If your application doesn\u2019t care whether the parameter was encoded as in RFC 2231, you can collapse the parameter value by calling\nemail.utils.collapse_rfc2231_value()\n, passing in the return value fromget_param()\n. This will return a suitably decoded Unicode string when the value is a tuple, or the original string unquoted if it isn\u2019t. For example:rawparam = msg.get_param('foo') param = email.utils.collapse_rfc2231_value(rawparam)\nIn any case, the parameter value (either the returned string, or the\nVALUE\nitem in the 3-tuple) is always unquoted, unless unquote is set toFalse\n.This is a legacy method. On the\nEmailMessage\nclass its functionality is replaced by the params property of the individual header objects returned by the header access methods.\n- set_param(param, value, header='Content-Type', requote=True, charset=None, language='', replace=False)\u00b6\nSet a parameter in the Content-Type header. If the parameter already exists in the header, its value will be replaced with value. If the Content-Type header has not yet been defined for this message, it will be set to text/plain and the new parameter value will be appended as per RFC 2045.\nOptional header specifies an alternative header to Content-Type, and all parameters will be quoted as necessary unless optional requote is\nFalse\n(the default isTrue\n).If optional charset is specified, the parameter will be encoded according to RFC 2231. Optional language specifies the RFC 2231 language, defaulting to the empty string. 
Both charset and language should be strings.\nIf replace is\nFalse\n(the default) the header is moved to the end of the list of headers. If replace isTrue\n, the header will be updated in place.Changed in version 3.4:\nreplace\nkeyword was added.\n- del_param(param, header='content-type', requote=True)\u00b6\nRemove the given parameter completely from the Content-Type header. The header will be re-written in place without the parameter or its value. All values will be quoted as necessary unless requote is\nFalse\n(the default isTrue\n). Optional header specifies an alternative to Content-Type.\n- set_type(type, header='Content-Type', requote=True)\u00b6\nSet the main type and subtype for the Content-Type header. type must be a string in the form maintype/subtype, otherwise a\nValueError\nis raised.This method replaces the Content-Type header, keeping all the parameters in place. If requote is\nFalse\n, this leaves the existing header\u2019s quoting as is, otherwise the parameters will be quoted (the default).An alternative header can be specified in the header argument. When the Content-Type header is set a MIME-Version header is also added.\nThis is a legacy method. On the\nEmailMessage\nclass its functionality is replaced by themake_\nandadd_\nmethods.\n- get_filename(failobj=None)\u00b6\nReturn the value of the\nfilename\nparameter of the Content-Disposition header of the message. If the header does not have afilename\nparameter, this method falls back to looking for thename\nparameter on the Content-Type header. If neither is found, or the header is missing, then failobj is returned. The returned string will always be unquoted as peremail.utils.unquote()\n.\n- get_boundary(failobj=None)\u00b6\nReturn the value of the\nboundary\nparameter of the Content-Type header of the message, or failobj if either the header is missing, or has noboundary\nparameter. 
The returned string will always be unquoted as peremail.utils.unquote()\n.\n- set_boundary(boundary)\u00b6\nSet the\nboundary\nparameter of the Content-Type header to boundary.set_boundary()\nwill always quote boundary if necessary. AHeaderParseError\nis raised if the message object has no Content-Type header.Note that using this method is subtly different than deleting the old Content-Type header and adding a new one with the new boundary via\nadd_header()\n, becauseset_boundary()\npreserves the order of the Content-Type header in the list of headers. However, it does not preserve any continuation lines which may have been present in the original Content-Type header.\n- get_content_charset(failobj=None)\u00b6\nReturn the\ncharset\nparameter of the Content-Type header, coerced to lower case. If there is no Content-Type header, or if that header has nocharset\nparameter, failobj is returned.Note that this method differs from\nget_charset()\nwhich returns theCharset\ninstance for the default encoding of the message body.\n- get_charsets(failobj=None)\u00b6\nReturn a list containing the character set names in the message. If the message is a multipart, then the list will contain one element for each subpart in the payload, otherwise, it will be a list of length 1.\nEach item in the list will be a string which is the value of the\ncharset\nparameter in the Content-Type header for the represented subpart. However, if the subpart has no Content-Type header, nocharset\nparameter, or is not of the text main MIME type, then that item in the returned list will be failobj.\n- get_content_disposition()\u00b6\nReturn the lowercased value (without parameters) of the message\u2019s Content-Disposition header if it has one, or\nNone\n. 
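Lower-casing of the charset parameter and unquoting of the boundary parameter can both be seen on one (deliberately contrived) Content-Type header:

```python
from email.parser import Parser

msg = Parser().parsestr(
    'Content-Type: text/plain; charset=UTF-8; boundary="XXX"\n'
    '\n'
    'body\n'
)
print(msg.get_content_charset())   # coerced to lower case
print(msg.get_boundary())          # quotes stripped
```

Note the boundary parameter on a text/plain part is nonsensical in a real message; it is there only to exercise get_boundary().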
The possible values for this method are inline, attachment orNone\nif the message follows RFC 2183.Added in version 3.5.\n- walk()\u00b6\nThe\nwalk()\nmethod is an all-purpose generator which can be used to iterate over all the parts and subparts of a message object tree, in depth-first traversal order. You will typically usewalk()\nas the iterator in afor\nloop; each iteration returns the next subpart.Here\u2019s an example that prints the MIME type of every part of a multipart message structure:\n>>> for part in msg.walk(): ... print(part.get_content_type()) multipart/report text/plain message/delivery-status text/plain text/plain message/rfc822 text/plain\nwalk\niterates over the subparts of any part whereis_multipart()\nreturnsTrue\n, even thoughmsg.get_content_maintype() == 'multipart'\nmay returnFalse\n. We can see this in our example by making use of the_structure\ndebug helper function:>>> for part in msg.walk(): ... print(part.get_content_maintype() == 'multipart', ... part.is_multipart()) True True False False False True False False False False False True False False >>> _structure(msg) multipart/report text/plain message/delivery-status text/plain text/plain message/rfc822 text/plain\nHere the\nmessage\nparts are notmultiparts\n, but they do contain subparts.is_multipart()\nreturnsTrue\nandwalk\ndescends into the subparts.\nMessage\nobjects can also optionally contain two instance attributes, which can be used when generating the plain text of a MIME message.- preamble\u00b6\nThe format of a MIME document allows for some text between the blank line following the headers, and the first multipart boundary string. Normally, this text is never visible in a MIME-aware mail reader because it falls outside the standard MIME armor. However, when viewing the raw text of the message, or when viewing the message in a non-MIME aware reader, this text can become visible.\nThe preamble attribute contains this leading extra-armor text for MIME documents. 
When the\nParser\ndiscovers some text after the headers but before the first boundary string, it assigns this text to the message\u2019s preamble attribute. When theGenerator\nis writing out the plain text representation of a MIME message, and it finds the message has a preamble attribute, it will write this text in the area between the headers and the first boundary. Seeemail.parser\nandemail.generator\nfor details.Note that if the message object has no preamble, the preamble attribute will be\nNone\n.\n- epilogue\u00b6\nThe epilogue attribute acts the same way as the preamble attribute, except that it contains text that appears between the last boundary and the end of the message.\nYou do not need to set the epilogue to the empty string in order for the\nGenerator\nto print a newline at the end of the file.\n- defects\u00b6\nThe defects attribute contains a list of all the problems found when parsing this message. See\nemail.errors\nfor a detailed description of the possible parsing defects.", "code_snippets": [" ", "\n", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", "\n", "\n", " ", " ", "\n", " ", "\n", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", "\n", "\n", " ", " ", "\n", " ", " ", " ", "\n ", " ", "\n", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", "\n ", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 6561} +{"url": "https://docs.python.org/3/library/email.parser.html", "title": ": Parsing email messages", "content": "email.parser\n: Parsing email messages\u00b6\nSource code: Lib/email/parser.py\nMessage object structures can be created in one of two ways: they can be\ncreated from whole cloth by creating an EmailMessage\nobject, adding headers using the dictionary interface, and adding payload(s)\nusing set_content()\nand related methods, or\nthey can be created by parsing a serialized representation of the email\nmessage.\nThe email\npackage provides a 
standard parser that understands most email\ndocument structures, including MIME documents. You can pass the parser a\nbytes, string or file object, and the parser will return to you the root\nEmailMessage\ninstance of the object structure. For\nsimple, non-MIME messages the payload of this root object will likely be a\nstring containing the text of the message. For MIME messages, the root object\nwill return True\nfrom its is_multipart()\nmethod, and the subparts can be accessed via the payload manipulation methods,\nsuch as get_body()\n,\niter_parts()\n, and\nwalk()\n.\nThere are actually two parser interfaces available for use, the Parser\nAPI and the incremental FeedParser\nAPI. The Parser\nAPI is\nmost useful if you have the entire text of the message in memory, or if the\nentire message lives in a file on the file system. FeedParser\nis more\nappropriate when you are reading the message from a stream which might block\nwaiting for more input (such as reading an email message from a socket). The\nFeedParser\ncan consume and parse the message incrementally, and only\nreturns the root object when you close the parser.\nNote that the parser can be extended in limited ways, and of course you can\nimplement your own parser completely from scratch. All of the logic that\nconnects the email\npackage\u2019s bundled parser and the\nEmailMessage\nclass is embodied in the Policy\nclass, so a custom parser can create message object trees any way it finds\nnecessary by implementing custom versions of the appropriate Policy\nmethods.\nFeedParser API\u00b6\nThe BytesFeedParser\n, imported from the email.feedparser\nmodule,\nprovides an API that is conducive to incremental parsing of email messages,\nsuch as would be necessary when reading the text of an email message from a\nsource that can block (such as a socket). 
The BytesFeedParser\ncan of\ncourse be used to parse an email message fully contained in a bytes-like\nobject, string, or file, but the BytesParser\nAPI may be more\nconvenient for such use cases. The semantics and results of the two parser\nAPIs are identical.\nThe BytesFeedParser\n\u2019s API is simple; you create an instance, feed it a\nbunch of bytes until there\u2019s no more to feed it, then close the parser to\nretrieve the root message object. The BytesFeedParser\nis extremely\naccurate when parsing standards-compliant messages, and it does a very good job\nof parsing non-compliant messages, providing information about how a message\nwas deemed broken. It will populate a message object\u2019s\ndefects\nattribute with a list of any\nproblems it found in a message. See the email.errors\nmodule for the\nlist of defects that it can find.\nHere is the API for the BytesFeedParser\n:\n- class email.parser.BytesFeedParser(_factory=None, *, policy=policy.compat32)\u00b6\nCreate a\nBytesFeedParser\ninstance. Optional _factory is a no-argument callable; if not specified use themessage_factory\nfrom the policy. Call _factory whenever a new message object is needed.If policy is specified use the rules it specifies to update the representation of the message. If policy is not set, use the\ncompat32\npolicy, which maintains backward compatibility with the Python 3.2 version of the email package and providesMessage\nas the default factory. All other policies provideEmailMessage\nas the default _factory. For more information on what else policy controls, see thepolicy\ndocumentation.Note: The policy keyword should always be specified; The default will change to\nemail.policy.default\nin a future version of Python.Added in version 3.2.\nChanged in version 3.3: Added the policy keyword.\nChanged in version 3.6: _factory defaults to the policy\nmessage_factory\n.- feed(data)\u00b6\nFeed the parser some more data. data should be a bytes-like object containing one or more lines. 
The lines can be partial and the parser will stitch such partial lines together properly. The lines can have any of the three common line endings: carriage return, newline, or carriage return and newline (they can even be mixed).\n- class email.parser.FeedParser(_factory=None, *, policy=policy.compat32)\u00b6\nWorks like\nBytesFeedParser\nexcept that the input to thefeed()\nmethod must be a string. This is of limited utility, since the only way for such a message to be valid is for it to contain only ASCII text or, ifutf8\nisTrue\n, no binary attachments.Changed in version 3.3: Added the policy keyword.\nParser API\u00b6\nThe BytesParser\nclass, imported from the email.parser\nmodule,\nprovides an API that can be used to parse a message when the complete contents\nof the message are available in a bytes-like object or file. The\nemail.parser\nmodule also provides Parser\nfor parsing strings,\nand header-only parsers, BytesHeaderParser\nand\nHeaderParser\n, which can be used if you\u2019re only interested in the\nheaders of the message. BytesHeaderParser\nand HeaderParser\ncan be much faster in these situations, since they do not attempt to parse the\nmessage body, instead setting the payload to the raw body.\n- class email.parser.BytesParser(_class=None, *, policy=policy.compat32)\u00b6\nCreate a\nBytesParser\ninstance. The _class and policy arguments have the same meaning and semantics as the _factory and policy arguments ofBytesFeedParser\n.Note: The policy keyword should always be specified; The default will change to\nemail.policy.default\nin a future version of Python.Changed in version 3.3: Removed the strict argument that was deprecated in 2.4. Added the policy keyword.\nChanged in version 3.6: _class defaults to the policy\nmessage_factory\n.- parse(fp, headersonly=False)\u00b6\nRead all the data from the binary file-like object fp, parse the resulting bytes, and return the message object. 
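The feed-then-close pattern, including the stitching of partial lines, can be sketched as follows; the message content and chunk boundaries are made up, and the policy keyword is passed explicitly as the note above recommends.

```python
from email.parser import BytesFeedParser
from email import policy

parser = BytesFeedParser(policy=policy.default)
# Chunks may split a line anywhere; the parser stitches partial lines together.
parser.feed(b'Subject: incre')
parser.feed(b'mental parse\r\nFrom: a@example.com')
parser.feed(b'\r\n\r\nbody text\r\n')
msg = parser.close()     # only now is the root message object returned
print(msg['Subject'])    # incremental parse
```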
fp must support both the\nreadline()\nand the read()\nmethods. The bytes contained in fp must be formatted as a block of RFC 5322 (or, if\nutf8\nis True\n, RFC 6532) style headers and header continuation lines, optionally preceded by an envelope header. The header block is terminated either by the end of the data or by a blank line. Following the header block is the body of the message (which may contain MIME-encoded subparts, including subparts with a Content-Transfer-Encoding of 8bit\n). Optional headersonly is a flag specifying whether to stop parsing after reading the headers or not. The default is\nFalse\n, meaning it parses the entire contents of the file.\n- parsebytes(bytes, headersonly=False)\u00b6\nSimilar to the\nparse()\nmethod, except it takes a bytes-like object instead of a file-like object. Calling this method on a bytes-like object is equivalent to wrapping bytes in a BytesIO\ninstance first and calling parse()\n. Optional headersonly is as with the\nparse()\nmethod.\nAdded in version 3.2.\n- class email.parser.BytesHeaderParser(_class=None, *, policy=policy.compat32)\u00b6\nExactly like\nBytesParser\n, except that headersonly defaults to True\n. Added in version 3.3.\n- class email.parser.Parser(_class=None, *, policy=policy.compat32)\u00b6\nThis class is parallel to\nBytesParser\n, but handles string input. Changed in version 3.3: Removed the strict argument. Added the policy keyword.\nChanged in version 3.6: _class defaults to the policy\nmessage_factory\n.- parse(fp, headersonly=False)\u00b6\nRead all the data from the text-mode file-like object fp, parse the resulting text, and return the root message object. 
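The parse()/parsebytes() equivalence described above can be demonstrated with a short sketch (the message bytes and addresses are illustrative assumptions):\n\n```python\nimport io\nfrom email import policy\nfrom email.parser import BytesParser\n\n# A minimal RFC 5322 message.\nraw = (b"From: alice@example.com\r\n"\n       b"Subject: Report\r\n"\n       b"\r\n"\n       b"Quarterly numbers.\r\n")\n\n# parsebytes() is equivalent to wrapping the bytes in BytesIO and calling parse().\nmsg = BytesParser(policy=policy.default).parsebytes(raw)\nsame = BytesParser(policy=policy.default).parse(io.BytesIO(raw))\nprint(str(msg["Subject"]), str(same["Subject"]))    # Report Report\n\n# headersonly=True stops after the header block; the body is left unparsed.\nhdrs = BytesParser(policy=policy.default).parsebytes(raw, headersonly=True)\nprint(str(hdrs["From"]))                             # alice@example.com\n```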
fp must support both the\nreadline()\nand the read()\nmethods on file-like objects. Other than the text mode requirement, this method operates like\nBytesParser.parse()\n.\n- class email.parser.HeaderParser(_class=None, *, policy=policy.compat32)\u00b6\nExactly like\nParser\n, except that headersonly defaults to True\n.\nSince creating a message object structure from a string or a file object is such\na common task, four functions are provided as a convenience. They are available\nin the top-level email\npackage namespace.\n- email.message_from_bytes(s, _class=None, *, policy=policy.compat32)\u00b6\nReturn a message object structure from a bytes-like object. This is equivalent to\nBytesParser().parsebytes(s)\n. Optional _class and policy are interpreted as with the BytesParser\nclass constructor. Added in version 3.2.\nChanged in version 3.3: Removed the strict argument. Added the policy keyword.\n- email.message_from_binary_file(fp, _class=None, *, policy=policy.compat32)\u00b6\nReturn a message object structure tree from an open binary file object. This is equivalent to\nBytesParser().parse(fp)\n. _class and policy are interpreted as with the BytesParser\nclass constructor. Added in version 3.2.\nChanged in version 3.3: Removed the strict argument. Added the policy keyword.\n- email.message_from_string(s, _class=None, *, policy=policy.compat32)\u00b6\nReturn a message object structure from a string. This is equivalent to\nParser().parsestr(s)\n. _class and policy are interpreted as with the Parser\nclass constructor. Changed in version 3.3: Removed the strict argument. Added the policy keyword.\n- email.message_from_file(fp, _class=None, *, policy=policy.compat32)\u00b6\nReturn a message object structure tree from an open file object. This is equivalent to\nParser().parse(fp)\n. _class and policy are interpreted as with the Parser\nclass constructor. Changed in version 3.3: Removed the strict argument. 
Added the policy keyword.\nChanged in version 3.6: _class defaults to the policy\nmessage_factory\n.\nHere\u2019s an example of how you might use message_from_bytes()\nat an\ninteractive Python prompt:\n>>> import email\n>>> msg = email.message_from_bytes(myBytes)\nAdditional notes\u00b6\nHere are some notes on the parsing semantics:\nMost non-multipart type messages are parsed as a single message object with a string payload. These objects will return\nFalse\nfor is_multipart()\n, and iter_parts()\nwill yield an empty list. All multipart type messages will be parsed as a container message object with a list of sub-message objects for their payload. The outer container message will return\nTrue\nfor is_multipart()\n, and iter_parts()\nwill yield a list of subparts. Most messages with a content type of message/* (such as message/delivery-status and message/rfc822) will also be parsed as a container object containing a list payload of length 1. Their\nis_multipart()\nmethod will return True\n. The single element yielded by iter_parts()\nwill be a sub-message object. Some non-standards-compliant messages may not be internally consistent about their multipart-edness. Such messages may have a Content-Type header of type multipart, but their\nis_multipart()\nmethod may return False\n. If such messages were parsed with the FeedParser\n, they will have an instance of the MultipartInvariantViolationDefect\nclass in their defects attribute list. See email.errors\nfor details.", "code_snippets": ["\n", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 2730}
+{"url": "https://docs.python.org/3/using/cmdline.html", "title": "Command line and environment", "content": "1. Command line and environment\u00b6\nThe CPython interpreter scans the command line and the environment for various settings.\nCPython implementation detail: Other implementations\u2019 command line schemes may differ. See Alternate Implementations for further resources.\n1.1. 
Command line\u00b6\nWhen invoking Python, you may specify any of these options:\npython [-bBdEhiIOPqRsSuvVWx?] [-c command | -m module-name | script | - ] [args]\nThe most common use case is, of course, a simple invocation of a script:\npython myscript.py\n1.1.1. Interface options\u00b6\nThe interpreter interface resembles that of the UNIX shell, but provides some additional methods of invocation:\nWhen called with standard input connected to a tty device, it prompts for commands and executes them until an EOF (an end-of-file character, which you can produce with Ctrl-D on UNIX or Ctrl-Z, Enter on Windows) is read. For more on interactive mode, see Interactive Mode.\nWhen called with a file name argument or with a file as standard input, it reads and executes a script from that file.\nWhen called with a directory name argument, it reads and executes an appropriately named script from that directory.\nWhen called with\n-c command\n, it executes the Python statement(s) given as command. Here command may contain multiple statements separated by newlines. Leading whitespace is significant in Python statements! When called with\n-m module-name\n, the given module is located on the Python module path and executed as a script.\nIn non-interactive mode, the entire input is parsed before it is executed.\nAn interface option terminates the list of options consumed by the interpreter;\nall consecutive arguments will end up in sys.argv\n\u2013 note that the first\nelement, subscript zero (sys.argv[0]\n), is a string reflecting the program\u2019s\nsource.\n- -c \u00b6\nExecute the Python code in command. 
command can be one or more statements separated by newlines, with significant leading whitespace as in normal module code.\nIf this option is given, the first element of\nsys.argv\nwill be \"-c\"\nand the current directory will be added to the start of sys.path\n(allowing modules in that directory to be imported as top level modules). Raises an auditing event\ncpython.run_command\nwith argument command\n. Changed in version 3.14: command is automatically dedented before execution.\n- -m \u00b6\nSearch\nsys.path\nfor the named module and execute its contents as the __main__\nmodule. Since the argument is a module name, you must not give a file extension (\n.py\n). The module name should be a valid absolute Python module name, but the implementation may not always enforce this (e.g. it may allow you to use a name that includes a hyphen). Package names (including namespace packages) are also permitted. When a package name is supplied instead of a normal module, the interpreter will execute\n<pkg>.__main__\nas the main module. This behaviour is deliberately similar to the handling of directories and zipfiles that are passed to the interpreter as the script argument. Note\nThis option cannot be used with built-in modules and extension modules written in C, since they do not have Python module files. However, it can still be used for precompiled modules, even if the original source file is not available.\nIf this option is given, the first element of\nsys.argv\nwill be the full path to the module file (while the module file is being located, the first element will be set to \"-m\"\n). As with the -c\noption, the current directory will be added to the start of sys.path\n. The -I\noption can be used to run the script in isolated mode where sys.path\ncontains neither the current directory nor the user\u2019s site-packages directory. All PYTHON*\nenvironment variables are ignored, too. Many standard library modules contain code that is invoked on their execution as a script. 
An example is the\ntimeit\nmodule:\npython -m timeit -s \"setup here\" \"benchmarked code here\"\npython -m timeit -h # for details\nRaises an auditing event\ncpython.run_module\nwith argument module-name\n. See also\nrunpy.run_module()\nEquivalent functionality directly available to Python code\nPEP 338 \u2013 Executing modules as scripts\nChanged in version 3.1: Supply the package name to run a\n__main__\nsubmodule. Changed in version 3.4: namespace packages are also supported\n- -\nRead commands from standard input (\nsys.stdin\n). If standard input is a terminal, -i\nis implied. If this option is given, the first element of\nsys.argv\nwill be \"-\"\nand the current directory will be added to the start of sys.path\n. Raises an auditing event\ncpython.run_stdin\nwith no arguments.\n-
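The -c and -m interfaces described above can be exercised from Python itself by launching the current interpreter through subprocess (a minimal sketch; the one-liner and the choice of the platform module are illustrative):\n\n```python\nimport subprocess\nimport sys\n\n# With -c, sys.argv[0] is set to "-c" and the statements run as __main__.\nout = subprocess.run(\n    [sys.executable, "-c", "import sys; print(sys.argv[0]); print(2 + 3)"],\n    capture_output=True, text=True, check=True,\n).stdout.splitlines()\nprint(out)          # ['-c', '5']\n\n# With -m, a standard-library module runs as a script; python -m platform\n# prints a platform-dependent identification string.\nplat = subprocess.run(\n    [sys.executable, "-m", "platform"],\n    capture_output=True, text=True, check=True,\n).stdout.strip()\nprint(bool(plat))   # True (non-empty output; exact string varies by system)\n```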