diff --git "a/data/python/test.jsonl" "b/data/python/test.jsonl" new file mode 100644--- /dev/null +++ "b/data/python/test.jsonl" @@ -0,0 +1,27 @@ +{"url": "https://docs.python.org/3/c-api/extension-modules.html", "title": "Defining extension modules", "content": "Defining extension modules\u00b6\nA C extension for CPython is a shared library (for example, a .so file on Linux, or a .pyd DLL on Windows), which is loadable into the Python process (for example, it is compiled with compatible compiler settings), and which exports an initialization function.\nTo be importable by default (that is, by importlib.machinery.ExtensionFileLoader), the shared library must be available on sys.path, and must be named after the module name plus an extension listed in importlib.machinery.EXTENSION_SUFFIXES.\nNote\nBuilding, packaging and distributing extension modules is best done with third-party tools, and is out of scope of this document. One suitable tool is Setuptools, whose documentation can be found at https://setuptools.pypa.io/en/latest/setuptools.html.\nNormally, the initialization function returns a module definition initialized using PyModuleDef_Init().\nThis allows splitting the creation process into several phases:\nBefore any substantial code is executed, Python can determine which capabilities the module supports, and it can adjust the environment or refuse loading an incompatible extension.\nBy default, Python itself creates the module object \u2013 that is, it does the equivalent of object.__new__() for classes. It also sets initial attributes like __package__ and __loader__.\nAfterwards, the module object is initialized using extension-specific code \u2013 the equivalent of __init__() on classes.\nThis is called multi-phase initialization to distinguish it from the legacy (but still supported) single-phase initialization scheme, where the initialization function returns a fully constructed module. 
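The suffixes mentioned above can be inspected from Python itself; a minimal sketch (the exact entries are platform- and build-dependent):

```python
import importlib.machinery

# The filename suffixes the default extension-module loader accepts.
# On Linux this typically includes a versioned suffix such as
# '.cpython-312-x86_64-linux-gnu.so' plus '.so'; on Windows, '.pyd'.
suffixes = importlib.machinery.EXTENSION_SUFFIXES

for s in suffixes:
    print(s)
```

A module named spam is importable by default only if a file named spam plus one of these suffixes is found on sys.path.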
See the single-phase-initialization section below for details.\nChanged in version 3.5: Added support for multi-phase initialization (PEP 489).\nMultiple module instances\u00b6\nBy default, extension modules are not singletons.\nFor example, if the sys.modules entry is removed and the module is re-imported, a new module object is created, and typically populated with fresh method and type objects.\nThe old module is subject to normal garbage collection.\nThis mirrors the behavior of pure-Python modules.\nAdditional module instances may be created in sub-interpreters or after Python runtime reinitialization (Py_Finalize() and Py_Initialize()).\nIn these cases, sharing Python objects between module instances would likely cause crashes or undefined behavior.\nTo avoid such issues, each instance of an extension module should be isolated: changes to one instance should not implicitly affect the others, and all state owned by the module, including references to Python objects, should be specific to a particular module instance. See Isolating Extension Modules for more details and a practical guide.\nA simpler way to avoid these issues is raising an error on repeated initialization.\nAll modules are expected to support sub-interpreters, or otherwise explicitly signal a lack of support.\nThis is usually achieved by isolation or blocking repeated initialization, as above.\nA module may also be limited to the main interpreter using the Py_mod_multiple_interpreters slot.\nInitialization function\u00b6\nThe initialization function defined by an extension module has the following signature:\nPyObject *PyInit_modulename(void)\nIts name should be PyInit_<name>, with <name> replaced by the name of the module.\nFor modules with ASCII-only names, the function must be named PyInit_<modulename>, with <modulename> replaced by the name of the module.\nWhen using Multi-phase initialization, non-ASCII module names are allowed. 
In this case, the initialization function name is PyInitU_<name>, with <name> encoded using Python\u2019s punycode encoding with hyphens replaced by underscores. In Python:\ndef initfunc_name(name):\n    try:\n        suffix = b'_' + name.encode('ascii')\n    except UnicodeEncodeError:\n        suffix = b'U_' + name.encode('punycode').replace(b'-', b'_')\n    return b'PyInit' + suffix\nIt is recommended to define the initialization function using a helper macro:\n-\nPyMODINIT_FUNC\u00b6\nDeclare an extension module initialization function. This macro:\nspecifies the PyObject* return type,\nadds any special linkage declarations required by the platform, and\nfor C++, declares the function as extern \"C\".\nFor example, a module called spam would be defined like this:\nstatic struct PyModuleDef spam_module = {\n    .m_base = PyModuleDef_HEAD_INIT,\n    .m_name = \"spam\",\n    ...\n};\nPyMODINIT_FUNC\nPyInit_spam(void)\n{\n    return PyModuleDef_Init(&spam_module);\n}\nIt is possible to export multiple modules from a single shared library by defining multiple initialization functions. However, importing them requires using symbolic links or a custom importer, because by default only the function corresponding to the filename is found. See the Multiple modules in one library section in PEP 489 for details.\nThe initialization function is typically the only non-static item defined in the module\u2019s C source.\nMulti-phase initialization\u00b6\nNormally, the initialization function (PyInit_modulename) returns a PyModuleDef instance with non-NULL m_slots.\nBefore it is returned, the PyModuleDef instance must be initialized using the following function:\n-\nPyObject *PyModuleDef_Init(PyModuleDef *def)\u00b6\n- Part of the Stable ABI since version 3.5.\nEnsure a module definition is a properly initialized Python object that correctly reports its type and a reference count.\nReturn def cast to PyObject*, or NULL if an error occurred.\nCalling this function is required for Multi-phase initialization. 
It should not be used in other contexts.\nNote that Python assumes that PyModuleDef structures are statically allocated. This function may return either a new reference or a borrowed one; this reference must not be released.\nAdded in version 3.5.\nLegacy single-phase initialization\u00b6\nAttention\nSingle-phase initialization is a legacy mechanism to initialize extension modules, with known drawbacks and design flaws. Extension module authors are encouraged to use multi-phase initialization instead.\nIn single-phase initialization, the initialization function (PyInit_modulename) should create, populate and return a module object.\nThis is typically done using PyModule_Create() and functions like PyModule_AddObjectRef().\nSingle-phase initialization differs from the default in the following ways:\nSingle-phase modules are, or rather contain, \u201csingletons\u201d.\nWhen the module is first initialized, Python saves the contents of the module\u2019s __dict__ (that is, typically, the module\u2019s functions and types).\nFor subsequent imports, Python does not call the initialization function again. Instead, it creates a new module object with a new __dict__, and copies the saved contents to it. For example, given a single-phase module _testsinglephase [1] that defines a function sum and an exception class error:\n>>> import sys\n>>> import _testsinglephase as one\n>>> del sys.modules['_testsinglephase']\n>>> import _testsinglephase as two\n>>> one is two\nFalse\n>>> one.__dict__ is two.__dict__\nFalse\n>>> one.sum is two.sum\nTrue\n>>> one.error is two.error\nTrue\nThe exact behavior should be considered a CPython implementation detail.\nTo work around the fact that PyInit_modulename does not take a spec argument, some state of the import machinery is saved and applied to the first suitable module created during the PyInit_modulename call. 
Specifically, when a sub-module is imported, this mechanism prepends the parent package name to the name of the module.\nA single-phase PyInit_modulename function should create \u201cits\u201d module object as soon as possible, before any other module objects can be created.\nNon-ASCII module names (PyInitU_modulename) are not supported.\nSingle-phase modules support module lookup functions like PyState_FindModule().", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1915} +{"url": "https://docs.python.org/3/howto/annotations.html", "title": "Annotations Best Practices", "content": "Annotations Best Practices\u00b6\n- author:\nLarry Hastings\nAccessing The Annotations Dict Of An Object In Python 3.10 And Newer\u00b6\nPython 3.10 adds a new function to the standard library: inspect.get_annotations(). In Python versions 3.10 through 3.13, calling this function is the best practice for accessing the annotations dict of any object that supports annotations. This function can also \u201cun-stringize\u201d stringized annotations for you.\nIn Python 3.14, there is a new annotationlib module with functionality for working with annotations. This includes an annotationlib.get_annotations() function, which supersedes inspect.get_annotations().\nIf for some reason inspect.get_annotations() isn\u2019t viable for your use case, you may access the __annotations__ data member manually. Best practice for this changed in Python 3.10 as well: as of Python 3.10, o.__annotations__ is guaranteed to always work on Python functions, classes, and modules. If you\u2019re certain the object you\u2019re examining is one of these three specific objects, you may simply use o.__annotations__ to get at the object\u2019s annotations dict.\nHowever, other types of callables \u2013 for example, callables created by functools.partial() \u2013 may not have an __annotations__ attribute defined. 
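A quick illustration of that caveat (a sketch; the guarded attribute access tolerates objects that lack the attribute, and whether a partial carries annotations can vary across versions):

```python
import functools

def double(x: int) -> int:
    return 2 * x

doubler = functools.partial(double)

# Plain functions always expose __annotations__ ...
print(double.__annotations__)   # {'x': <class 'int'>, 'return': <class 'int'>}

# ... but a partial object may not define the attribute at all,
# so guard the access rather than reading it directly:
ann = getattr(doubler, '__annotations__', None)
print(ann)
```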
When\naccessing the __annotations__\nof a possibly unknown\nobject, best practice in Python versions 3.10 and\nnewer is to call getattr()\nwith three arguments,\nfor example getattr(o, '__annotations__', None)\n.\nBefore Python 3.10, accessing __annotations__\non a class that\ndefines no annotations but that has a parent class with\nannotations would return the parent\u2019s __annotations__\n.\nIn Python 3.10 and newer, the child class\u2019s annotations\nwill be an empty dict instead.\nAccessing The Annotations Dict Of An Object In Python 3.9 And Older\u00b6\nIn Python 3.9 and older, accessing the annotations dict of an object is much more complicated than in newer versions. The problem is a design flaw in these older versions of Python, specifically to do with class annotations.\nBest practice for accessing the annotations dict of other\nobjects\u2013functions, other callables, and modules\u2013is the same\nas best practice for 3.10, assuming you aren\u2019t calling\ninspect.get_annotations()\n: you should use three-argument\ngetattr()\nto access the object\u2019s __annotations__\nattribute.\nUnfortunately, this isn\u2019t best practice for classes. The problem\nis that, since __annotations__\nis optional on classes, and\nbecause classes can inherit attributes from their base classes,\naccessing the __annotations__\nattribute of a class may\ninadvertently return the annotations dict of a base class.\nAs an example:\nclass Base:\na: int = 3\nb: str = 'abc'\nclass Derived(Base):\npass\nprint(Derived.__annotations__)\nThis will print the annotations dict from Base\n, not\nDerived\n.\nYour code will have to have a separate code path if the object\nyou\u2019re examining is a class (isinstance(o, type)\n).\nIn that case, best practice relies on an implementation detail\nof Python 3.9 and before: if a class has annotations defined,\nthey are stored in the class\u2019s __dict__\ndictionary. 
Since the class may or may not have annotations defined, best practice is to call the get() method on the class dict.\nTo put it all together, here is some sample code that safely accesses the __annotations__ attribute on an arbitrary object in Python 3.9 and before:\nif isinstance(o, type):\n    ann = o.__dict__.get('__annotations__', None)\nelse:\n    ann = getattr(o, '__annotations__', None)\nAfter running this code, ann should be either a dictionary or None. You\u2019re encouraged to double-check the type of ann using isinstance() before further examination.\nNote that some exotic or malformed type objects may not have a __dict__ attribute, so for extra safety you may also wish to use getattr() to access __dict__.\nManually Un-Stringizing Stringized Annotations\u00b6\nIn situations where some annotations may be \u201cstringized\u201d, and you wish to evaluate those strings to produce the Python values they represent, it really is best to call inspect.get_annotations() to do this work for you.\nIf you\u2019re using Python 3.9 or older, or if for some reason you can\u2019t use inspect.get_annotations(), you\u2019ll need to duplicate its logic. 
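A rough, simplified sketch of what duplicating that logic can look like (illustrative only: it skips unwrapping wrapped callables, assumes every string annotation is evaluable, and glosses over corner cases that inspect.get_annotations() handles):

```python
import sys
import types

def eval_str_annotations(o):
    """Evaluate stringized annotations on a module, class, or plain function."""
    if isinstance(o, type):
        # Class: module globals plus the class body's names as locals.
        globs = sys.modules[o.__module__].__dict__
        locs = dict(vars(o))
        ann = o.__dict__.get('__annotations__', None)
    elif isinstance(o, types.ModuleType):
        # Module: its own __dict__ serves as the globals.
        globs, locs = o.__dict__, None
        ann = getattr(o, '__annotations__', None)
    else:
        # Plain callable: evaluate in the function's globals.
        globs, locs = getattr(o, '__globals__', {}), None
        ann = getattr(o, '__annotations__', None)
    if not isinstance(ann, dict):
        return None
    # eval() each string value; non-string values pass through untouched.
    return {name: eval(value, globs, locs) if isinstance(value, str) else value
            for name, value in ann.items()}

def f(x: "int", y: "list[str]") -> "bool":
    return True

print(eval_str_annotations(f))
```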
You\u2019re encouraged to examine the implementation of inspect.get_annotations() in the current Python version and follow a similar approach.\nIn a nutshell, if you wish to evaluate a stringized annotation on an arbitrary object o:\nIf o is a module, use o.__dict__ as the globals when calling eval().\nIf o is a class, use sys.modules[o.__module__].__dict__ as the globals, and dict(vars(o)) as the locals, when calling eval().\nIf o is a wrapped callable using functools.update_wrapper(), functools.wraps(), or functools.partial(), iteratively unwrap it by accessing either o.__wrapped__ or o.func as appropriate, until you have found the root unwrapped function.\nIf o is a callable (but not a class), use o.__globals__ as the globals when calling eval().\nHowever, not all string values used as annotations can be successfully turned into Python values by eval().\nString values could theoretically contain any valid string, and in practice there are valid use cases for type hints that require annotating with string values that specifically can\u2019t be evaluated. For example:\nPEP 604 union types using |, before support for this was added to Python 3.10.\nDefinitions that aren\u2019t needed at runtime, only imported when typing.TYPE_CHECKING is true.\nIf eval() attempts to evaluate such values, it will fail and raise an exception. So, when designing a library API that works with annotations, it\u2019s recommended to only attempt to evaluate string values when explicitly requested to by the caller.\nBest Practices For __annotations__ In Any Python Version\u00b6\nYou should avoid assigning to the __annotations__ member of objects directly. Let Python manage setting __annotations__.\nIf you do assign directly to the __annotations__ member of an object, you should always set it to a dict object.\nYou should avoid accessing __annotations__ directly on any object. 
Instead, use annotationlib.get_annotations() (Python 3.14+) or inspect.get_annotations() (Python 3.10+).\nIf you do directly access the __annotations__ member of an object, you should ensure that it\u2019s a dictionary before attempting to examine its contents.\nYou should avoid modifying __annotations__ dicts.\nYou should avoid deleting the __annotations__ attribute of an object.\n__annotations__ Quirks\u00b6\nIn all versions of Python 3, function objects lazy-create an annotations dict if no annotations are defined on that object. You can delete the __annotations__ attribute using del fn.__annotations__, but if you then access fn.__annotations__ the object will create a new empty dict that it will store and return as its annotations. Deleting the annotations on a function before it has lazily created its annotations dict will throw an AttributeError; using del fn.__annotations__ twice in a row is guaranteed to always throw an AttributeError.\nEverything in the above paragraph also applies to class and module objects in Python 3.10 and newer.\nIn all versions of Python 3, you can set __annotations__ on a function object to None. However, subsequently accessing the annotations on that object using fn.__annotations__ will lazy-create an empty dictionary as per the first paragraph of this section. This is not true of modules and classes, in any Python version; those objects permit setting __annotations__ to any Python value, and will retain whatever value is set.\nIf Python stringizes your annotations for you (using from __future__ import annotations), and you specify a string as an annotation, the string will itself be quoted. In effect the annotation is quoted twice. For example:\nfrom __future__ import annotations\ndef foo(a: \"str\"): pass\nprint(foo.__annotations__)\nThis prints {'a': \"'str'\"}. 
This shouldn\u2019t really be considered\na \u201cquirk\u201d; it\u2019s mentioned here simply because it might be surprising.\nIf you use a class with a custom metaclass and access __annotations__\non the class, you may observe unexpected behavior; see\n749 for some examples. You can avoid these\nquirks by using annotationlib.get_annotations()\non Python 3.14+ or\ninspect.get_annotations()\non Python 3.10+. On earlier versions of\nPython, you can avoid these bugs by accessing the annotations from the\nclass\u2019s __dict__\n(for example, cls.__dict__.get('__annotations__', None)\n).\nIn some versions of Python, instances of classes may have an __annotations__\nattribute. However, this is not supported functionality. If you need the\nannotations of an instance, you can use type()\nto access its class\n(for example, annotationlib.get_annotations(type(myinstance))\non Python 3.14+).", "code_snippets": ["\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n\n", "\n ", "\n\n", "\n", " ", " ", "\n ", " ", " ", " ", "\n", "\n ", " ", " ", " ", " ", "\n", " ", "\n", " ", " ", "\n\n", "\n"], "language": "Python", "source": "python.org", "token_count": 2166} +{"url": "https://docs.python.org/3/howto/pyporting.html", "title": "How to port Python 2 Code to Python 3", "content": "How to port Python 2 Code to Python 3\u00b6\n- author:\nBrett Cannon\nPython 2 reached its official end-of-life at the start of 2020. This means that no new bug reports, fixes, or changes will be made to Python 2 - it\u2019s no longer supported: see PEP 373 and status of Python versions.\nIf you are looking to port an extension module instead of pure Python code, please see Porting Extension Modules to Python 3.\nThe archived python-porting mailing list may contain some useful guidance.\nSince Python 3.11 the original porting guide was discontinued. 
You can find the old guide in the archive.\nThird-party guides\u00b6\nThere are also multiple third-party guides that might be useful:", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 167} +{"url": "https://docs.python.org/3/whatsnew/3.2.html", "title": "What\u2019s New In Python 3.2", "content": "What\u2019s New In Python 3.2\u00b6\n- Author:\nRaymond Hettinger\nThis article explains the new features in Python 3.2 as compared to 3.1. Python 3.2 was released on February 20, 2011. It focuses on a few highlights and gives a few examples. For full details, see the Misc/NEWS file.\nSee also\nPEP 392 - Python 3.2 Release Schedule\nPEP 384: Defining a Stable ABI\u00b6\nIn the past, extension modules built for one Python version were often not usable with other Python versions. Particularly on Windows, every feature release of Python required rebuilding all extension modules that one wanted to use. This requirement was the result of the free access to Python interpreter internals that extension modules could use.\nWith Python 3.2, an alternative approach becomes available: extension modules which restrict themselves to a limited API (by defining Py_LIMITED_API) cannot use many of the internals, but are constrained to a set of API functions that are promised to be stable for several releases. As a consequence, extension modules built for 3.2 in that mode will also work with 3.3, 3.4, and so on. 
Extension modules that make use of details of memory structures can still be built, but will need to be recompiled for every feature release.\nSee also\n- PEP 384 - Defining a Stable ABI\nPEP written by Martin von L\u00f6wis.\nPEP 389: Argparse Command Line Parsing Module\u00b6\nA new module for command line parsing, argparse\n, was introduced to\novercome the limitations of optparse\nwhich did not provide support for\npositional arguments (not just options), subcommands, required options and other\ncommon patterns of specifying and validating options.\nThis module has already had widespread success in the community as a\nthird-party module. Being more fully featured than its predecessor, the\nargparse\nmodule is now the preferred module for command-line processing.\nThe older module is still being kept available because of the substantial amount\nof legacy code that depends on it.\nHere\u2019s an annotated example parser showing features like limiting results to a set of choices, specifying a metavar in the help screen, validating that one or more positional arguments is present, and making a required option:\nimport argparse\nparser = argparse.ArgumentParser(\ndescription = 'Manage servers', # main description for help\nepilog = 'Tested on Solaris and Linux') # displayed after help\nparser.add_argument('action', # argument name\nchoices = ['deploy', 'start', 'stop'], # three allowed values\nhelp = 'action on each target') # help msg\nparser.add_argument('targets',\nmetavar = 'HOSTNAME', # var name used in help msg\nnargs = '+', # require one or more targets\nhelp = 'url for target machines') # help msg explanation\nparser.add_argument('-u', '--user', # -u or --user option\nrequired = True, # make it a required argument\nhelp = 'login as user')\nExample of calling the parser on a command string:\n>>> cmd = 'deploy sneezy.example.com sleepy.example.com -u skycaptain'\n>>> result = parser.parse_args(cmd.split())\n>>> result.action\n'deploy'\n>>> 
result.targets\n['sneezy.example.com', 'sleepy.example.com']\n>>> result.user\n'skycaptain'\nExample of the parser\u2019s automatically generated help:\n>>> parser.parse_args('-h'.split())\nusage: manage_cloud.py [-h] -u USER\n{deploy,start,stop} HOSTNAME [HOSTNAME ...]\nManage servers\npositional arguments:\n{deploy,start,stop} action on each target\nHOSTNAME url for target machines\noptional arguments:\n-h, --help show this help message and exit\n-u USER, --user USER login as user\nTested on Solaris and Linux\nAn especially nice argparse\nfeature is the ability to define subparsers,\neach with their own argument patterns and help displays:\nimport argparse\nparser = argparse.ArgumentParser(prog='HELM')\nsubparsers = parser.add_subparsers()\nparser_l = subparsers.add_parser('launch', help='Launch Control') # first subgroup\nparser_l.add_argument('-m', '--missiles', action='store_true')\nparser_l.add_argument('-t', '--torpedos', action='store_true')\nparser_m = subparsers.add_parser('move', help='Move Vessel', # second subgroup\naliases=('steer', 'turn')) # equivalent names\nparser_m.add_argument('-c', '--course', type=int, required=True)\nparser_m.add_argument('-s', '--speed', type=int, default=0)\n$ ./helm.py --help # top level help (launch and move)\n$ ./helm.py launch --help # help for launch options\n$ ./helm.py launch --missiles # set missiles=True and torpedos=False\n$ ./helm.py steer --course 180 --speed 5 # set movement parameters\nSee also\n- PEP 389 - New Command Line Parsing Module\nPEP written by Steven Bethard.\nMigrating optparse code to argparse for details on the differences from optparse\n.\nPEP 391: Dictionary Based Configuration for Logging\u00b6\nThe logging\nmodule provided two kinds of configuration, one style with\nfunction calls for each option or another style driven by an external file saved\nin a configparser\nformat. 
Those options did not provide the flexibility\nto create configurations from JSON or YAML files, nor did they support\nincremental configuration, which is needed for specifying logger options from a\ncommand line.\nTo support a more flexible style, the module now offers\nlogging.config.dictConfig()\nfor specifying logging configuration with\nplain Python dictionaries. The configuration options include formatters,\nhandlers, filters, and loggers. Here\u2019s a working example of a configuration\ndictionary:\n{\"version\": 1,\n\"formatters\": {\"brief\": {\"format\": \"%(levelname)-8s: %(name)-15s: %(message)s\"},\n\"full\": {\"format\": \"%(asctime)s %(name)-15s %(levelname)-8s %(message)s\"}\n},\n\"handlers\": {\"console\": {\n\"class\": \"logging.StreamHandler\",\n\"formatter\": \"brief\",\n\"level\": \"INFO\",\n\"stream\": \"ext://sys.stdout\"},\n\"console_priority\": {\n\"class\": \"logging.StreamHandler\",\n\"formatter\": \"full\",\n\"level\": \"ERROR\",\n\"stream\": \"ext://sys.stderr\"}\n},\n\"root\": {\"level\": \"DEBUG\", \"handlers\": [\"console\", \"console_priority\"]}}\nIf that dictionary is stored in a file called conf.json\n, it can be\nloaded and called with code like this:\n>>> import json, logging.config\n>>> with open('conf.json') as f:\n... conf = json.load(f)\n...\n>>> logging.config.dictConfig(conf)\n>>> logging.info(\"Transaction completed normally\")\nINFO : root : Transaction completed normally\n>>> logging.critical(\"Abnormal termination\")\n2011-02-17 11:14:36,694 root CRITICAL Abnormal termination\nSee also\n- PEP 391 - Dictionary Based Configuration for Logging\nPEP written by Vinay Sajip.\nPEP 3148: The concurrent.futures\nmodule\u00b6\nCode for creating and managing concurrency is being collected in a new top-level namespace, concurrent. 
Its first member is a futures package which provides a uniform high-level interface for managing threads and processes.\nThe design for concurrent.futures was inspired by the java.util.concurrent package. In that model, a running call and its result are represented by a Future object that abstracts features common to threads, processes, and remote procedure calls. That object supports status checks (running or done), timeouts, cancellations, adding callbacks, and access to results or exceptions.\nThe primary offering of the new module is a pair of executor classes for launching and managing calls. The goal of the executors is to make it easier to use existing tools for making parallel calls. They save the effort needed to set up a pool of resources, launch the calls, create a results queue, add time-out handling, and limit the total number of threads, processes, or remote procedure calls.\nIdeally, each application should share a single executor across multiple components so that process and thread limits can be centrally managed. This solves the design challenge that arises when each component has its own competing strategy for resource management.\nBoth classes share a common interface with three methods:\nsubmit() for scheduling a callable and returning a Future object;\nmap() for scheduling many asynchronous calls at a time, and shutdown() for freeing resources. 
The class is a context manager and can be used in a with statement to assure that resources are automatically released when currently pending futures are done executing.\nA simple example of ThreadPoolExecutor is a launch of four parallel threads for copying files:\nimport concurrent.futures, shutil\nwith concurrent.futures.ThreadPoolExecutor(max_workers=4) as e:\n    e.submit(shutil.copy, 'src1.txt', 'dest1.txt')\n    e.submit(shutil.copy, 'src2.txt', 'dest2.txt')\n    e.submit(shutil.copy, 'src3.txt', 'dest3.txt')\n    e.submit(shutil.copy, 'src3.txt', 'dest4.txt')\nSee also\n- PEP 3148 - Futures \u2013 Execute Computations Asynchronously\nPEP written by Brian Quinlan.\nCode for Threaded Parallel URL reads, an example using threads to fetch multiple web pages in parallel.\nCode for computing prime numbers in parallel, an example demonstrating ProcessPoolExecutor.\nPEP 3147: PYC Repository Directories\u00b6\nPython\u2019s scheme for caching bytecode in .pyc files did not work well in environments with multiple Python interpreters. If one interpreter encountered a cached file created by another interpreter, it would recompile the source and overwrite the cached file, thus losing the benefits of caching.\nThe issue of \u201cpyc fights\u201d has become more pronounced as it has become commonplace for Linux distributions to ship with multiple versions of Python. These conflicts also arise with CPython alternatives such as Unladen Swallow.\nTo solve this problem, Python\u2019s import machinery has been extended to use distinct filenames for each interpreter. Instead of Python 3.2 and Python 3.3 and Unladen Swallow each competing for a file called \u201cmymodule.pyc\u201d, they will now look for \u201cmymodule.cpython-32.pyc\u201d, \u201cmymodule.cpython-33.pyc\u201d, and \u201cmymodule.unladen10.pyc\u201d. 
And to prevent all of these new files from cluttering source directories, the pyc files are now collected in a \u201c__pycache__\u201d directory stored under the package directory.\nAside from the filenames and target directories, the new scheme has a few aspects that are visible to the programmer:\nImported modules now have a\n__cached__\nattribute which stores the name of the actual file that was imported:>>> import collections >>> collections.__cached__ 'c:/py32/lib/__pycache__/collections.cpython-32.pyc'\nThe tag that is unique to each interpreter is accessible from the\nimp\nmodule:>>> import imp >>> imp.get_tag() 'cpython-32'\nScripts that try to deduce source filename from the imported file now need to be smarter. It is no longer sufficient to simply strip the \u201cc\u201d from a \u201c.pyc\u201d filename. Instead, use the new functions in the\nimp\nmodule:>>> imp.source_from_cache('c:/py32/lib/__pycache__/collections.cpython-32.pyc') 'c:/py32/lib/collections.py' >>> imp.cache_from_source('c:/py32/lib/collections.py') 'c:/py32/lib/__pycache__/collections.cpython-32.pyc'\nThe\npy_compile\nandcompileall\nmodules have been updated to reflect the new naming convention and target directory. The command-line invocation of compileall has new options:-i\nfor specifying a list of files and directories to compile and-b\nwhich causes bytecode files to be written to their legacy location rather than __pycache__.The\nimportlib.abc\nmodule has been updated with new abstract base classes for loading bytecode files. The obsolete ABCs,PyLoader\nandPyPycLoader\n, have been deprecated (instructions on how to stay Python 3.1 compatible are included with the documentation).\nSee also\n- PEP 3147 - PYC Repository Directories\nPEP written by Barry Warsaw.\nPEP 3149: ABI Version Tagged .so Files\u00b6\nThe PYC repository directory allows multiple bytecode cache files to be co-located. 
This PEP implements a similar mechanism for shared object files by giving them a common directory and distinct names for each version.\nThe common directory is \u201cpyshared\u201d and the file names are made distinct by identifying the Python implementation (such as CPython, PyPy, Jython, etc.), the major and minor version numbers, and optional build flags (such as \u201cd\u201d for debug, \u201cm\u201d for pymalloc, \u201cu\u201d for wide-unicode). For an arbitrary package \u201cfoo\u201d, you may see these files when the distribution package is installed:\n/usr/share/pyshared/foo.cpython-32m.so\n/usr/share/pyshared/foo.cpython-33md.so\nIn Python itself, the tags are accessible from functions in the sysconfig\nmodule:\n>>> import sysconfig\n>>> sysconfig.get_config_var('SOABI') # find the version tag\n'cpython-32mu'\n>>> sysconfig.get_config_var('EXT_SUFFIX') # find the full filename extension\n'.cpython-32mu.so'\nSee also\n- PEP 3149 - ABI Version Tagged .so Files\nPEP written by Barry Warsaw.\nPEP 3333: Python Web Server Gateway Interface v1.0.1\u00b6\nThis informational PEP clarifies how bytes/text issues are to be handled by the\nWSGI protocol. The challenge is that string handling in Python 3 is most\nconveniently handled with the str\ntype even though the HTTP protocol\nis itself bytes oriented.\nThe PEP differentiates so-called native strings that are used for request/response headers and metadata versus byte strings which are used for the bodies of requests and responses.\nThe native strings are always of type str\nbut are restricted to code\npoints between U+0000 through U+00FF which are translatable to bytes using\nLatin-1 encoding. These strings are used for the keys and values in the\nenvironment dictionary and for response headers and statuses in the\nstart_response()\nfunction. They must follow RFC 2616 with respect to\nencoding. 
That is, they must either be ISO-8859-1 characters or use\nRFC 2047 MIME encoding.\nFor developers porting WSGI applications from Python 2, here are the salient points:\nIf the app already used strings for headers in Python 2, no change is needed.\nIf instead, the app encoded output headers or decoded input headers, then the headers will need to be re-encoded to Latin-1. For example, an output header that was encoded with h.encode('utf-8') now needs to be converted from bytes to a native string using h.encode('utf-8').decode('latin-1').\nValues yielded by an application or sent using the write() method must be byte strings. The start_response() function and environ must use native strings. The two cannot be mixed.\nFor server implementers writing CGI-to-WSGI pathways or other CGI-style protocols, users must be able to access the environment using native strings even though the underlying platform may have a different convention. To bridge this gap, the wsgiref\nmodule has a new function,\nwsgiref.handlers.read_environ()\n, for transcoding CGI variables from\nos.environ\ninto native strings and returning a new dictionary.\nSee also\n- PEP 3333 - Python Web Server Gateway Interface v1.0.1\nPEP written by Phillip Eby.\nOther Language Changes\u00b6\nSome smaller changes made to the core Python language are:\nString formatting for\nformat()\nand str.format()\ngained new capabilities for the format character #. Previously, for integers in binary, octal, or hexadecimal, it caused the output to be prefixed with \u20180b\u2019, \u20180o\u2019, or \u20180x\u2019 respectively. 
Now it can also handle floats, complex, and Decimal, causing the output to always have a decimal point even when no digits follow it.>>> format(20, '#o') '0o24' >>> format(12.34, '#5.0f') ' 12.'\n(Suggested by Mark Dickinson and implemented by Eric Smith in bpo-7094.)\nThere is also a new\nstr.format_map()\nmethod that extends the capabilities of the existingstr.format()\nmethod by accepting arbitrary mapping objects. This new method makes it possible to use string formatting with any of Python\u2019s many dictionary-like objects such asdefaultdict\n,Shelf\n,ConfigParser\n, ordbm\n. It is also useful with customdict\nsubclasses that normalize keys before look-up or that supply a__missing__()\nmethod for unknown keys:>>> import shelve >>> d = shelve.open('tmp.shl') >>> 'The {project_name} status is {status} as of {date}'.format_map(d) 'The testing project status is green as of February 15, 2011' >>> class LowerCasedDict(dict): ... def __getitem__(self, key): ... return dict.__getitem__(self, key.lower()) ... >>> lcd = LowerCasedDict(part='widgets', quantity=10) >>> 'There are {QUANTITY} {Part} in stock'.format_map(lcd) 'There are 10 widgets in stock' >>> class PlaceholderDict(dict): ... def __missing__(self, key): ... return '<{}>'.format(key) ... >>> 'Hello {name}, welcome to {location}'.format_map(PlaceholderDict()) 'Hello , welcome to '\n(Suggested by Raymond Hettinger and implemented by Eric Smith in bpo-6081.)\nThe interpreter can now be started with a quiet option,\n-q\n, to prevent the copyright and version information from being displayed in the interactive mode. 
The option can be introspected using thesys.flags\nattribute:$ python -q >>> sys.flags sys.flags(debug=0, division_warning=0, inspect=0, interactive=0, optimize=0, dont_write_bytecode=0, no_user_site=0, no_site=0, ignore_environment=0, verbose=0, bytes_warning=0, quiet=1)\n(Contributed by Marcin Wojdyr in bpo-1772833).\nThe\nhasattr()\nfunction works by callinggetattr()\nand detecting whether an exception is raised. This technique allows it to detect methods created dynamically by__getattr__()\nor__getattribute__()\nwhich would otherwise be absent from the class dictionary. Formerly, hasattr would catch any exception, possibly masking genuine errors. Now, hasattr has been tightened to only catchAttributeError\nand let other exceptions pass through:>>> class A: ... @property ... def f(self): ... return 1 // 0 ... >>> a = A() >>> hasattr(a, 'f') Traceback (most recent call last): ... ZeroDivisionError: integer division or modulo by zero\n(Discovered by Yury Selivanov and fixed by Benjamin Peterson; bpo-9666.)\nThe\nstr()\nof a float or complex number is now the same as itsrepr()\n. Previously, thestr()\nform was shorter but that just caused confusion and is no longer needed now that the shortest possiblerepr()\nis displayed by default:>>> import math >>> repr(math.pi) '3.141592653589793' >>> str(math.pi) '3.141592653589793'\n(Proposed and implemented by Mark Dickinson; bpo-9337.)\nmemoryview\nobjects now have arelease()\nmethod and they also now support the context management protocol. This allows timely release of any resources that were acquired when requesting a buffer from the original object.>>> with memoryview(b'abcdefgh') as v: ... print(v.tolist()) [97, 98, 99, 100, 101, 102, 103, 104]\n(Added by Antoine Pitrou; bpo-9757.)\nPreviously it was illegal to delete a name from the local namespace if it occurs as a free variable in a nested block:\ndef outer(x): def inner(): return x inner() del x\nThis is now allowed. 
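The now-legal pattern can be shown with a short, self-contained sketch (this example is not from the original release notes):

```python
# In Python 3.2+, deleting a local name is legal even when a nested
# function refers to it as a free variable. Python 3.1 rejected the
# "del x" below at compile time with a SyntaxError.
def outer(x):
    def inner():
        return x
    result = inner()  # the closure can still read x at this point
    del x             # now allowed; reading x afterwards raises NameError
    return result

print(outer(10))  # prints: 10
```

After the `del`, any further call to `inner()` raises a `NameError` for the unbound free variable, which is exactly the behavior the except-clause cleanup described next relies on.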
Remember that the target of an\nexcept\nclause is cleared, so this code which used to work with Python 2.6, raised aSyntaxError\nwith Python 3.1 and now works again:def f(): def print_error(): print(e) try: something except Exception as e: print_error() # implicit \"del e\" here\n(See bpo-4617.)\nStruct sequence types are now subclasses of tuple. This means that C structures like those returned by\nos.stat()\n,time.gmtime()\n, andsys.version_info\nnow work like a named tuple and now work with functions and methods that expect a tuple as an argument. This is a big step forward in making the C structures as flexible as their pure Python counterparts:>>> import sys >>> isinstance(sys.version_info, tuple) True >>> 'Version %d.%d.%d %s(%d)' % sys.version_info 'Version 3.2.0 final(0)'\n(Suggested by Arfrever Frehtes Taifersar Arahesis and implemented by Benjamin Peterson in bpo-8413.)\nWarnings are now easier to control using the\nPYTHONWARNINGS\nenvironment variable as an alternative to using-W\nat the command line:$ export PYTHONWARNINGS='ignore::RuntimeWarning::,once::UnicodeWarning::'\n(Suggested by Barry Warsaw and implemented by Philip Jenvey in bpo-7301.)\nA new warning category,\nResourceWarning\n, has been added. It is emitted when potential issues with resource consumption or cleanup are detected. It is silenced by default in normal release builds but can be enabled through the means provided by thewarnings\nmodule, or on the command line.A\nResourceWarning\nis issued at interpreter shutdown if thegc.garbage\nlist isn\u2019t empty, and ifgc.DEBUG_UNCOLLECTABLE\nis set, all uncollectable objects are printed. This is meant to make the programmer aware that their code contains object finalization issues.A\nResourceWarning\nis also issued when a file object is destroyed without having been explicitly closed. 
While the deallocator for such object ensures it closes the underlying operating system resource (usually, a file descriptor), the delay in deallocating the object could produce various issues, especially under Windows. Here is an example of enabling the warning from the command line:$ python -q -Wdefault >>> f = open(\"foo\", \"wb\") >>> del f __main__:1: ResourceWarning: unclosed file <_io.BufferedWriter name='foo'>\n(Added by Antoine Pitrou and Georg Brandl in bpo-10093 and bpo-477863.)\nrange\nobjects now support index and count methods. This is part of an effort to make more objects fully implement thecollections.Sequence\nabstract base class. As a result, the language will have a more uniform API. In addition,range\nobjects now support slicing and negative indices, even with values larger thansys.maxsize\n. This makes range more interoperable with lists:>>> range(0, 100, 2).count(10) 1 >>> range(0, 100, 2).index(10) 5 >>> range(0, 100, 2)[5] 10 >>> range(0, 100, 2)[0:5] range(0, 10, 2)\n(Contributed by Daniel Stutzbach in bpo-9213, by Alexander Belopolsky in bpo-2690, and by Nick Coghlan in bpo-10889.)\nThe\ncallable()\nbuiltin function from Py2.x was resurrected. It provides a concise, readable alternative to using an abstract base class in an expression likeisinstance(x, collections.Callable)\n:>>> callable(max) True >>> callable(20) False\n(See bpo-10518.)\nPython\u2019s import mechanism can now load modules installed in directories with non-ASCII characters in the path name. This solved an aggravating problem with home directories for users with non-ASCII characters in their usernames.\n(Required extensive work by Victor Stinner in bpo-9425.)\nNew, Improved, and Deprecated Modules\u00b6\nPython\u2019s standard library has undergone significant maintenance efforts and quality improvements.\nThe biggest news for Python 3.2 is that the email\npackage, mailbox\nmodule, and nntplib\nmodules now work correctly with the bytes/text model\nin Python 3. 
For the first time, there is correct handling of messages with\nmixed encodings.\nThroughout the standard library, there has been more careful attention to encodings and text versus bytes issues. In particular, interactions with the operating system are now better able to exchange non-ASCII data using the Windows MBCS encoding, locale-aware encodings, or UTF-8.\nAnother significant win is the addition of substantially better support for SSL connections and security certificates.\nIn addition, more classes now implement a context manager to support\nconvenient and reliable resource clean-up using a with\nstatement.\nemail\u00b6\nThe usability of the email\npackage in Python 3 has been mostly fixed by\nthe extensive efforts of R. David Murray. The problem was that emails are\ntypically read and stored in the form of bytes\nrather than str\ntext, and they may contain multiple encodings within a single email. So, the\nemail package had to be extended to parse and generate email messages in bytes\nformat.\nNew functions\nmessage_from_bytes()\nandmessage_from_binary_file()\n, and new classesBytesFeedParser\nandBytesParser\nallow binary message data to be parsed into model objects.Given bytes input to the model,\nget_payload()\nwill by default decode a message body that has a Content-Transfer-Encoding of 8bit using the charset specified in the MIME headers and return the resulting string.Given bytes input to the model,\nGenerator\nwill convert message bodies that have a Content-Transfer-Encoding of 8bit to instead have a 7bit Content-Transfer-Encoding.Headers with unencoded non-ASCII bytes are deemed to be RFC 2047-encoded using the unknown-8bit character set.\nA new class\nBytesGenerator\nproduces bytes as output, preserving any unchanged non-ASCII data that was present in the input used to build the model, including message bodies with a Content-Transfer-Encoding of 8bit.The\nsmtplib\nSMTP\nclass now accepts a byte string for the msg argument to thesendmail()\nmethod, 
and a new method,send_message()\naccepts aMessage\nobject and can optionally obtain the from_addr and to_addrs addresses directly from the object.\n(Proposed and implemented by R. David Murray, bpo-4661 and bpo-10321.)\nelementtree\u00b6\nThe xml.etree.ElementTree\npackage and its xml.etree.cElementTree\ncounterpart have been updated to version 1.3.\nSeveral new and useful functions and methods have been added:\nxml.etree.ElementTree.fromstringlist()\nwhich builds an XML document from a sequence of fragmentsxml.etree.ElementTree.register_namespace()\nfor registering a global namespace prefixxml.etree.ElementTree.tostringlist()\nfor string representation including all sublistsxml.etree.ElementTree.Element.extend()\nfor appending a sequence of zero or more elementsxml.etree.ElementTree.Element.iterfind()\nsearches an element and subelementsxml.etree.ElementTree.Element.itertext()\ncreates a text iterator over an element and its subelementsxml.etree.ElementTree.TreeBuilder.end()\ncloses the current elementxml.etree.ElementTree.TreeBuilder.doctype()\nhandles a doctype declaration\nTwo methods have been deprecated:\nxml.etree.ElementTree.getchildren()\nuselist(elem)\ninstead.xml.etree.ElementTree.getiterator()\nuseElement.iter\ninstead.\nFor details of the update, see Introducing ElementTree on Fredrik Lundh\u2019s website.\n(Contributed by Florent Xicluna and Fredrik Lundh, bpo-6472.)\nfunctools\u00b6\nThe\nfunctools\nmodule includes a new decorator for caching function calls.functools.lru_cache()\ncan save repeated queries to an external resource whenever the results are expected to be the same.For example, adding a caching decorator to a database query function can save database accesses for popular searches:\n>>> import functools >>> @functools.lru_cache(maxsize=300) ... def get_phone_number(name): ... c = conn.cursor() ... c.execute('SELECT phonenumber FROM phonelist WHERE name=?', (name,)) ... return c.fetchone()[0]\n>>> for name in user_requests: ... 
get_phone_number(name) # cached lookup\nTo help with choosing an effective cache size, the wrapped function is instrumented for tracking cache statistics:\n>>> get_phone_number.cache_info() CacheInfo(hits=4805, misses=980, maxsize=300, currsize=300)\nIf the phonelist table gets updated, the outdated contents of the cache can be cleared with:\n>>> get_phone_number.cache_clear()\n(Contributed by Raymond Hettinger and incorporating design ideas from Jim Baker, Miki Tebeka, and Nick Coghlan; see recipe 498245, recipe 577479, bpo-10586, and bpo-10593.)\nThe\nfunctools.wraps()\ndecorator now adds a__wrapped__\nattribute pointing to the original callable function. This allows wrapped functions to be introspected. It also copies__annotations__\nif defined. And now it also gracefully skips over missing attributes such as__doc__\nwhich might not be defined for the wrapped callable.In the above example, the cache can be removed by recovering the original function:\n>>> get_phone_number = get_phone_number.__wrapped__ # uncached function\n(By Nick Coghlan and Terrence Cole; bpo-9567, bpo-3445, and bpo-8814.)\nTo help write classes with rich comparison methods, a new decorator\nfunctools.total_ordering()\nwill use existing equality and inequality methods to fill in the remaining methods.For example, supplying __eq__ and __lt__ will enable\ntotal_ordering()\nto fill-in __le__, __gt__ and __ge__:@total_ordering class Student: def __eq__(self, other): return ((self.lastname.lower(), self.firstname.lower()) == (other.lastname.lower(), other.firstname.lower())) def __lt__(self, other): return ((self.lastname.lower(), self.firstname.lower()) < (other.lastname.lower(), other.firstname.lower()))\nWith the total_ordering decorator, the remaining comparison methods are filled in automatically.\n(Contributed by Raymond Hettinger.)\nTo aid in porting programs from Python 2, the\nfunctools.cmp_to_key()\nfunction converts an old-style comparison function to modern key function:>>> # 
locale-aware sort order >>> sorted(iterable, key=cmp_to_key(locale.strcoll))\nFor sorting examples and a brief sorting tutorial, see the Sorting HowTo tutorial.\n(Contributed by Raymond Hettinger.)\nitertools\u00b6\nThe\nitertools\nmodule has a newaccumulate()\nfunction modeled on APL\u2019s scan operator and Numpy\u2019s accumulate function:>>> from itertools import accumulate >>> list(accumulate([8, 2, 50])) [8, 10, 60]\n>>> prob_dist = [0.1, 0.4, 0.2, 0.3] >>> list(accumulate(prob_dist)) # cumulative probability distribution [0.1, 0.5, 0.7, 1.0]\nFor an example using\naccumulate()\n, see the examples for the random module.(Contributed by Raymond Hettinger and incorporating design suggestions from Mark Dickinson.)\ncollections\u00b6\nThe\ncollections.Counter\nclass now has two forms of in-place subtraction, the existing -= operator for saturating subtraction and the newsubtract()\nmethod for regular subtraction. The former is suitable for multisets which only have positive counts, and the latter is more suitable for use cases that allow negative counts:>>> from collections import Counter >>> tally = Counter(dogs=5, cats=3) >>> tally -= Counter(dogs=2, cats=8) # saturating subtraction >>> tally Counter({'dogs': 3})\n>>> tally = Counter(dogs=5, cats=3) >>> tally.subtract(dogs=2, cats=8) # regular subtraction >>> tally Counter({'dogs': 3, 'cats': -5})\n(Contributed by Raymond Hettinger.)\nThe\ncollections.OrderedDict\nclass has a new methodmove_to_end()\nwhich takes an existing key and moves it to either the first or last position in the ordered sequence.The default is to move an item to the last position. This is equivalent of renewing an entry with\nod[k] = od.pop(k)\n.A fast move-to-end operation is useful for resequencing entries. 
For example, an ordered dictionary can be used to track order of access by aging entries from the oldest to the most recently accessed.\n>>> from collections import OrderedDict >>> d = OrderedDict.fromkeys(['a', 'b', 'X', 'd', 'e']) >>> list(d) ['a', 'b', 'X', 'd', 'e'] >>> d.move_to_end('X') >>> list(d) ['a', 'b', 'd', 'e', 'X']\n(Contributed by Raymond Hettinger.)\nThe\ncollections.deque\nclass grew two new methodscount()\nandreverse()\nthat make them more substitutable forlist\nobjects:>>> from collections import deque >>> d = deque('simsalabim') >>> d.count('s') 2 >>> d.reverse() >>> d deque(['m', 'i', 'b', 'a', 'l', 'a', 's', 'm', 'i', 's'])\n(Contributed by Raymond Hettinger.)\nthreading\u00b6\nThe threading\nmodule has a new Barrier\nsynchronization class for making multiple threads wait until all of them have\nreached a common barrier point. Barriers are useful for making sure that a task\nwith multiple preconditions does not run until all of the predecessor tasks are\ncomplete.\nBarriers can work with an arbitrary number of threads. This is a generalization of a Rendezvous which is defined for only two threads.\nImplemented as a two-phase cyclic barrier, Barrier\nobjects\nare suitable for use in loops. The separate filling and draining phases\nassure that all threads get released (drained) before any one of them can loop\nback and re-enter the barrier. The barrier fully resets after each cycle.\nExample of using barriers:\nfrom threading import Barrier, Thread\ndef get_votes(site):\nballots = conduct_election(site)\nall_polls_closed.wait() # do not count until all polls are closed\ntotals = summarize(ballots)\npublish(site, totals)\nall_polls_closed = Barrier(len(sites))\nfor site in sites:\nThread(target=get_votes, args=(site,)).start()\nIn this example, the barrier enforces a rule that votes cannot be counted at any\npolling site until all polls are closed. 
Notice how a solution with a barrier\nis similar to one with threading.Thread.join()\n, but the threads stay alive\nand continue to do work (summarizing ballots) after the barrier point is\ncrossed.\nIf any of the predecessor tasks can hang or be delayed, a barrier can be created\nwith an optional timeout parameter. Then if the timeout period elapses before\nall the predecessor tasks reach the barrier point, all waiting threads are\nreleased and a BrokenBarrierError\nexception is raised:\ndef get_votes(site):\nballots = conduct_election(site)\ntry:\nall_polls_closed.wait(timeout=midnight - time.now())\nexcept BrokenBarrierError:\nlockbox = seal_ballots(ballots)\nqueue.put(lockbox)\nelse:\ntotals = summarize(ballots)\npublish(site, totals)\nIn this example, the barrier enforces a more robust rule. If some election sites do not finish before midnight, the barrier times-out and the ballots are sealed and deposited in a queue for later handling.\nSee Barrier Synchronization Patterns for more examples of how barriers can be used in parallel computing. Also, there is a simple but thorough explanation of barriers in The Little Book of Semaphores, section 3.6.\n(Contributed by Kristj\u00e1n Valur J\u00f3nsson with an API review by Jeffrey Yasskin in bpo-8777.)\ndatetime and time\u00b6\nThe\ndatetime\nmodule has a new typetimezone\nthat implements thetzinfo\ninterface by returning a fixed UTC offset and timezone name. This makes it easier to create timezone-aware datetime objects:>>> from datetime import datetime, timezone >>> datetime.now(timezone.utc) datetime.datetime(2010, 12, 8, 21, 4, 2, 923754, tzinfo=datetime.timezone.utc) >>> datetime.strptime(\"01/01/2000 12:00 +0000\", \"%m/%d/%Y %H:%M %z\") datetime.datetime(2000, 1, 1, 12, 0, tzinfo=datetime.timezone.utc)\nAlso,\ntimedelta\nobjects can now be multiplied byfloat\nand divided byfloat\nandint\nobjects. 
Andtimedelta\nobjects can now divide one another.The\ndatetime.date.strftime()\nmethod is no longer restricted to years after 1900. The new supported year range is from 1000 to 9999 inclusive.Whenever a two-digit year is used in a time tuple, the interpretation has been governed by\ntime.accept2dyear\n. The default isTrue\nwhich means that for a two-digit year, the century is guessed according to the POSIX rules governing the%y\nstrptime format.Starting with Py3.2, use of the century guessing heuristic will emit a\nDeprecationWarning\n. Instead, it is recommended thattime.accept2dyear\nbe set toFalse\nso that large date ranges can be used without guesswork:>>> import time, warnings >>> warnings.resetwarnings() # remove the default warning filters >>> time.accept2dyear = True # guess whether 11 means 11 or 2011 >>> time.asctime((11, 1, 1, 12, 34, 56, 4, 1, 0)) Warning (from warnings module): ... DeprecationWarning: Century info guessed for a 2-digit year. 'Fri Jan 1 12:34:56 2011' >>> time.accept2dyear = False # use the full range of allowable dates >>> time.asctime((11, 1, 1, 12, 34, 56, 4, 1, 0)) 'Fri Jan 1 12:34:56 11'\nSeveral functions now have significantly expanded date ranges. When\ntime.accept2dyear\nis false, thetime.asctime()\nfunction will accept any year that fits in a C int, while thetime.mktime()\nandtime.strftime()\nfunctions will accept the full range supported by the corresponding operating system functions.\n(Contributed by Alexander Belopolsky and Victor Stinner in bpo-1289118, bpo-5094, bpo-6641, bpo-2706, bpo-1777412, bpo-8013, and bpo-10827.)\nmath\u00b6\nThe math\nmodule has been updated with six new functions inspired by the\nC99 standard.\nThe isfinite()\nfunction provides a reliable and fast way to detect\nspecial values. 
It returns True for regular numbers and False for NaN or infinity:\n>>> from math import isfinite\n>>> [isfinite(x) for x in (123, 4.56, float('Nan'), float('Inf'))]\n[True, True, False, False]\nThe expm1()\nfunction computes e**x-1\nfor small values of x\nwithout incurring the loss of precision that usually accompanies the subtraction\nof nearly equal quantities:\n>>> from math import expm1\n>>> expm1(0.013671875) # more accurate way to compute e**x-1 for a small x\n0.013765762467652909\nThe erf()\nfunction computes a probability integral or Gaussian\nerror function. The\ncomplementary error function, erfc()\n, is 1 - erf(x)\n:\n>>> from math import erf, erfc, sqrt\n>>> erf(1.0/sqrt(2.0)) # portion of normal distribution within 1 standard deviation\n0.682689492137086\n>>> erfc(1.0/sqrt(2.0)) # portion of normal distribution outside 1 standard deviation\n0.31731050786291404\n>>> erf(1.0/sqrt(2.0)) + erfc(1.0/sqrt(2.0))\n1.0\nThe gamma()\nfunction is a continuous extension of the factorial\nfunction. See https://en.wikipedia.org/wiki/Gamma_function for details. 
Because\nthe function is related to factorials, it grows large even for small values of\nx, so there is also a lgamma()\nfunction for computing the natural\nlogarithm of the gamma function:\n>>> from math import gamma, lgamma\n>>> gamma(7.0) # six factorial\n720.0\n>>> lgamma(801.0) # log(800 factorial)\n4551.950730698041\n(Contributed by Mark Dickinson.)\nabc\u00b6\nThe abc\nmodule now supports abstractclassmethod()\nand\nabstractstaticmethod()\n.\nThese tools make it possible to define an abstract base class that\nrequires a particular classmethod()\nor staticmethod()\nto be\nimplemented:\nclass Temperature(metaclass=abc.ABCMeta):\n@abc.abstractclassmethod\ndef from_fahrenheit(cls, t):\n...\n@abc.abstractclassmethod\ndef from_celsius(cls, t):\n...\n(Patch submitted by Daniel Urban; bpo-5867.)\nio\u00b6\nThe io.BytesIO\nhas a new method, getbuffer()\n, which\nprovides functionality similar to memoryview()\n. It creates an editable\nview of the data without making a copy. The buffer\u2019s random access and support\nfor slice notation are well-suited to in-place editing:\n>>> REC_LEN, LOC_START, LOC_LEN = 34, 7, 11\n>>> def change_location(buffer, record_number, location):\n... start = record_number * REC_LEN + LOC_START\n... buffer[start: start+LOC_LEN] = location\n>>> import io\n>>> byte_stream = io.BytesIO(\n... b'G3805 storeroom Main chassis '\n... b'X7899 shipping Reserve cog '\n... b'L6988 receiving Primary sprocket'\n... 
)\n>>> buffer = byte_stream.getbuffer()\n>>> change_location(buffer, 1, b'warehouse ')\n>>> change_location(buffer, 0, b'showroom ')\n>>> print(byte_stream.getvalue())\nb'G3805 showroom Main chassis '\nb'X7899 warehouse Reserve cog '\nb'L6988 receiving Primary sprocket'\n(Contributed by Antoine Pitrou in bpo-5506.)\nreprlib\u00b6\nWhen writing a __repr__()\nmethod for a custom container, it is easy to\nforget to handle the case where a member refers back to the container itself.\nPython\u2019s builtin objects such as list\nand set\nhandle\nself-reference by displaying \u201c\u2026\u201d in the recursive part of the representation\nstring.\nTo help write such __repr__()\nmethods, the reprlib\nmodule has a new\ndecorator, recursive_repr()\n, for detecting recursive calls to\n__repr__()\nand substituting a placeholder string instead:\n>>> class MyList(list):\n... @recursive_repr()\n... def __repr__(self):\n... return '<' + '|'.join(map(repr, self)) + '>'\n...\n>>> m = MyList('abc')\n>>> m.append(m)\n>>> m.append('x')\n>>> print(m)\n<'a'|'b'|'c'|...|'x'>\n(Contributed by Raymond Hettinger in bpo-9826 and bpo-9840.)\nlogging\u00b6\nIn addition to dictionary-based configuration described above, the\nlogging\npackage has many other improvements.\nThe logging documentation has been augmented by a basic tutorial, an advanced tutorial, and a cookbook of logging recipes. These documents are the fastest way to learn about logging.\nThe logging.basicConfig()\nset-up function gained a style argument to\nsupport three different types of string formatting. It defaults to \u201c%\u201d for\ntraditional %-formatting, can be set to \u201c{\u201d for the new str.format()\nstyle, or\ncan be set to \u201c$\u201d for the shell-style formatting provided by\nstring.Template\n. 
The following three configurations are equivalent:\n>>> from logging import basicConfig\n>>> basicConfig(style='%', format=\"%(name)s -> %(levelname)s: %(message)s\")\n>>> basicConfig(style='{', format=\"{name} -> {levelname} {message}\")\n>>> basicConfig(style='$', format=\"$name -> $levelname: $message\")\nIf no configuration is set-up before a logging event occurs, there is now a\ndefault configuration using a StreamHandler\ndirected to\nsys.stderr\nfor events of WARNING\nlevel or higher. Formerly, an\nevent occurring before a configuration was set-up would either raise an\nexception or silently drop the event depending on the value of\nlogging.raiseExceptions\n. The new default handler is stored in\nlogging.lastResort\n.\nThe use of filters has been simplified. Instead of creating a\nFilter\nobject, the predicate can be any Python callable that\nreturns True\nor False\n.\nThere were a number of other improvements that add flexibility and simplify configuration. See the module documentation for a full listing of changes in Python 3.2.\ncsv\u00b6\nThe csv\nmodule now supports a new dialect, unix_dialect\n,\nwhich applies quoting for all fields and a traditional Unix style with '\\n'\nas\nthe line terminator. The registered dialect name is unix\n.\nThe csv.DictWriter\nhas a new method,\nwriteheader()\nfor writing-out an initial row to document\nthe field names:\n>>> import csv, sys\n>>> w = csv.DictWriter(sys.stdout, ['name', 'dept'], dialect='unix')\n>>> w.writeheader()\n\"name\",\"dept\"\n>>> w.writerows([\n... {'name': 'tom', 'dept': 'accounting'},\n... 
{'name': 'susan', 'dept': 'sales'}])\n\"tom\",\"accounting\"\n\"susan\",\"sales\"\n(New dialect suggested by Jay Talbot in bpo-5975, and the new method suggested by Ed Abraham in bpo-1537721.)\ncontextlib\u00b6\nThere is a new and slightly mind-blowing tool\nContextDecorator\nthat is helpful for creating a\ncontext manager that does double duty as a function decorator.\nAs a convenience, this new functionality is used by\ncontextmanager()\nso that no extra effort is needed to support\nboth roles.\nThe basic idea is that both context managers and function decorators can be used\nfor pre-action and post-action wrappers. Context managers wrap a group of\nstatements using a with\nstatement, and function decorators wrap a\ngroup of statements enclosed in a function. So, occasionally there is a need to\nwrite a pre-action or post-action wrapper that can be used in either role.\nFor example, it is sometimes useful to wrap functions or groups of statements\nwith a logger that can track the time of entry and time of exit. 
Rather than\nwriting both a function decorator and a context manager for the task, the\ncontextmanager()\nprovides both capabilities in a single\ndefinition:\nfrom contextlib import contextmanager\nimport logging\nlogging.basicConfig(level=logging.INFO)\n@contextmanager\ndef track_entry_and_exit(name):\nlogging.info('Entering: %s', name)\nyield\nlogging.info('Exiting: %s', name)\nFormerly, this would have only been usable as a context manager:\nwith track_entry_and_exit('widget loader'):\nprint('Some time consuming activity goes here')\nload_widget()\nNow, it can be used as a decorator as well:\n@track_entry_and_exit('widget loader')\ndef activity():\nprint('Some time consuming activity goes here')\nload_widget()\nTrying to fulfill two roles at once places some limitations on the technique.\nContext managers normally have the flexibility to return an argument usable by\na with\nstatement, but there is no parallel for function decorators.\nIn the above example, there is not a clean way for the track_entry_and_exit context manager to return a logging instance for use in the body of enclosed statements.\n(Contributed by Michael Foord in bpo-9110.)\ndecimal and fractions\u00b6\nMark Dickinson crafted an elegant and efficient scheme for assuring that different numeric datatypes will have the same hash value whenever their actual values are equal (bpo-8188):\nassert hash(Fraction(3, 2)) == hash(1.5) == \\\nhash(Decimal(\"1.5\")) == hash(complex(1.5, 0))\nSome of the hashing details are exposed through a new attribute,\nsys.hash_info\n, which describes the bit width of the hash value, the\nprime modulus, the hash values for infinity and nan, and the multiplier\nused for the imaginary part of a number:\n>>> sys.hash_info\nsys.hash_info(width=64, modulus=2305843009213693951, inf=314159, nan=0, imag=1000003)\nAn early decision to limit the interoperability of various numeric types has\nbeen relaxed. 
It is still unsupported (and ill-advised) to have implicit\nmixing in arithmetic expressions such as Decimal('1.1') + float('1.1')\nbecause the latter loses information in the process of constructing the binary\nfloat. However, since an existing floating-point value can be converted losslessly\nto either a decimal or rational representation, it makes sense to add them to\nthe constructor and to support mixed-type comparisons.\nThe\ndecimal.Decimal\nconstructor now accepts float\nobjects directly so there is no longer a need to use the from_float()\nmethod (bpo-8257).\nMixed type comparisons are now fully supported so that\nDecimal\nobjects can be directly compared with float\nand fractions.Fraction\n(bpo-2531 and bpo-8188).\nSimilar changes were made to fractions.Fraction\nso that the\nfrom_float()\nand from_decimal()\nmethods are no longer needed (bpo-8294):\n>>> from decimal import Decimal\n>>> from fractions import Fraction\n>>> Decimal(1.1)\nDecimal('1.100000000000000088817841970012523233890533447265625')\n>>> Fraction(1.1)\nFraction(2476979795053773, 2251799813685248)\nAnother useful change for the decimal\nmodule is that the\nContext.clamp\nattribute is now public. 
This is useful in creating\ncontexts that correspond to the decimal interchange formats specified in IEEE\n754 (see bpo-8540).\n(Contributed by Mark Dickinson and Raymond Hettinger.)\nftp\u00b6\nThe ftplib.FTP\nclass now supports the context management protocol to\nunconditionally consume socket.error\nexceptions and to close the FTP\nconnection when done:\n>>> from ftplib import FTP\n>>> with FTP(\"ftp1.at.proftpd.org\") as ftp:\nftp.login()\nftp.dir()\n'230 Anonymous login ok, restrictions apply.'\ndr-xr-xr-x 9 ftp ftp 154 May 6 10:43 .\ndr-xr-xr-x 9 ftp ftp 154 May 6 10:43 ..\ndr-xr-xr-x 5 ftp ftp 4096 May 6 10:43 CentOS\ndr-xr-xr-x 3 ftp ftp 18 Jul 10 2008 Fedora\nOther file-like objects such as mmap.mmap\nand fileinput.input()\nalso grew auto-closing context managers:\nwith fileinput.input(files=('log1.txt', 'log2.txt')) as f:\nfor line in f:\nprocess(line)\n(Contributed by Tarek Ziad\u00e9 and Giampaolo Rodol\u00e0 in bpo-4972, and by Georg Brandl in bpo-8046 and bpo-1286.)\nThe FTP_TLS\nclass now accepts a context parameter, which is a\nssl.SSLContext\nobject allowing bundling SSL configuration options,\ncertificates and private keys into a single (potentially long-lived) structure.\n(Contributed by Giampaolo Rodol\u00e0; bpo-8806.)\npopen\u00b6\nThe os.popen()\nand subprocess.Popen()\nfunctions now support\nwith\nstatements for auto-closing of the file descriptors.\n(Contributed by Antoine Pitrou and Brian Curtin in bpo-7461 and bpo-10554.)\nselect\u00b6\nThe select\nmodule now exposes a new, constant attribute,\nPIPE_BUF\n, which gives the minimum number of bytes which are\nguaranteed not to block when select.select()\nsays a pipe is ready\nfor writing.\n>>> import select\n>>> select.PIPE_BUF\n512\n(Available on Unix systems. Patch by S\u00e9bastien Sabl\u00e9 in bpo-9862)\ngzip and zipfile\u00b6\ngzip.GzipFile\nnow implements the io.BufferedIOBase\nabstract base class (except for truncate()\n). 
It also has a\npeek()\nmethod and supports unseekable as well as\nzero-padded file objects.\nThe gzip\nmodule also gains the compress()\nand\ndecompress()\nfunctions for easier in-memory compression and\ndecompression. Keep in mind that text needs to be encoded as bytes\nbefore compressing and decompressing:\n>>> import gzip\n>>> s = 'Three shall be the number thou shalt count, '\n>>> s += 'and the number of the counting shall be three'\n>>> b = s.encode() # convert to utf-8\n>>> len(b)\n89\n>>> c = gzip.compress(b)\n>>> len(c)\n77\n>>> gzip.decompress(c).decode()[:42] # decompress and convert to text\n'Three shall be the number thou shalt count'\n(Contributed by Anand B. Pillai in bpo-3488; and by Antoine Pitrou, Nir Aides and Brian Curtin in bpo-9962, bpo-1675951, bpo-7471 and bpo-2846.)\nAlso, the zipfile.ZipExtFile\nclass was reworked internally to represent\nfiles stored inside an archive. The new implementation is significantly faster\nand can be wrapped in an io.BufferedReader\nobject for more speedups. It\nalso solves an issue where interleaved calls to read and readline gave the\nwrong results.\n(Patch submitted by Nir Aides in bpo-7610.)\ntarfile\u00b6\nThe TarFile\nclass can now be used as a context manager. In\naddition, its add()\nmethod has a new option, filter,\nthat controls which files are added to the archive and allows the file metadata\nto be edited.\nThe new filter option replaces the older, less flexible exclude parameter\nwhich is now deprecated. If specified, the optional filter parameter needs to\nbe a keyword argument. The user-supplied filter function accepts a\nTarInfo\nobject and returns an updated\nTarInfo\nobject, or if it wants the file to be excluded, the\nfunction can return None\n:\n>>> import tarfile, glob\n>>> def myfilter(tarinfo):\n... if tarinfo.isfile(): # only save real files\n... tarinfo.uname = 'monty' # redact the user name\n... return tarinfo\n>>> with tarfile.open(name='myarchive.tar.gz', mode='w:gz') as tf:\n... 
for filename in glob.glob('*.txt'):\n... tf.add(filename, filter=myfilter)\n... tf.list()\n-rw-r--r-- monty/501 902 2011-01-26 17:59:11 annotations.txt\n-rw-r--r-- monty/501 123 2011-01-26 17:59:11 general_questions.txt\n-rw-r--r-- monty/501 3514 2011-01-26 17:59:11 prion.txt\n-rw-r--r-- monty/501 124 2011-01-26 17:59:11 py_todo.txt\n-rw-r--r-- monty/501 1399 2011-01-26 17:59:11 semaphore_notes.txt\n(Proposed by Tarek Ziad\u00e9 and implemented by Lars Gust\u00e4bel in bpo-6856.)\nhashlib\u00b6\nThe hashlib\nmodule has two new constant attributes listing the hashing\nalgorithms guaranteed to be present in all implementations and those available\non the current implementation:\n>>> import hashlib\n>>> hashlib.algorithms_guaranteed\n{'sha1', 'sha224', 'sha384', 'sha256', 'sha512', 'md5'}\n>>> hashlib.algorithms_available\n{'md2', 'SHA256', 'SHA512', 'dsaWithSHA', 'mdc2', 'SHA224', 'MD4', 'sha256',\n'sha512', 'ripemd160', 'SHA1', 'MDC2', 'SHA', 'SHA384', 'MD2',\n'ecdsa-with-SHA1', 'md4', 'md5', 'sha1', 'DSA-SHA', 'sha224',\n'dsaEncryption', 'DSA', 'RIPEMD160', 'sha', 'MD5', 'sha384'}\n(Suggested by Carl Chenet in bpo-7418.)\nast\u00b6\nThe ast\nmodule has a wonderful general-purpose tool for safely\nevaluating expression strings using the Python literal\nsyntax. The ast.literal_eval()\nfunction serves as a secure alternative to\nthe builtin eval()\nfunction, which is easily abused. 
Python 3.2 adds\nbytes\nand set\nliterals to the list of supported types:\nstrings, bytes, numbers, tuples, lists, dicts, sets, booleans, and None\n.\n>>> from ast import literal_eval\n>>> request = \"{'req': 3, 'func': 'pow', 'args': (2, 0.5)}\"\n>>> literal_eval(request)\n{'args': (2, 0.5), 'req': 3, 'func': 'pow'}\n>>> request = \"os.system('do something harmful')\"\n>>> literal_eval(request)\nTraceback (most recent call last):\n...\nValueError: malformed node or string: <_ast.Call object at 0x101739a10>\n(Implemented by Benjamin Peterson and Georg Brandl.)\nos\u00b6\nDifferent operating systems use various encodings for filenames and environment\nvariables. The os\nmodule provides two new functions,\nfsencode()\nand fsdecode()\n, for encoding and decoding\nfilenames:\n>>> import os\n>>> filename = 'Sehensw\u00fcrdigkeiten'\n>>> os.fsencode(filename)\nb'Sehensw\\xc3\\xbcrdigkeiten'\nSome operating systems allow direct access to encoded bytes in the\nenvironment. If so, the os.supports_bytes_environ\nconstant will be\ntrue.\nFor direct access to encoded environment variables (if available),\nuse the new os.getenvb()\nfunction or use os.environb\nwhich is a bytes version of os.environ\n.\n(Contributed by Victor Stinner.)\nshutil\u00b6\nThe shutil.copytree()\nfunction has two new options:\nignore_dangling_symlinks: when\nsymlinks=False\nso that the function copies a file pointed to by a symlink, not the symlink itself. This option will silence the error raised if the file doesn\u2019t exist.copy_function: is a callable that will be used to copy files.\nshutil.copy2()\nis used by default.\n(Contributed by Tarek Ziad\u00e9.)\nIn addition, the shutil\nmodule now supports archiving operations for zipfiles, uncompressed tarfiles, gzipped tarfiles,\nand bzipped tarfiles. And there are functions for registering additional\narchiving file formats (such as xz compressed tarfiles or custom formats).\nThe principal functions are make_archive()\nand\nunpack_archive()\n. 
By default, both operate on the current\ndirectory (which can be set by os.chdir()\n) and on any sub-directories.\nThe archive filename needs to be specified with a full pathname. The archiving\nstep is non-destructive (the original files are left unchanged).\n>>> import shutil, pprint\n>>> os.chdir('mydata') # change to the source directory\n>>> f = shutil.make_archive('/var/backup/mydata',\n... 'zip') # archive the current directory\n>>> f # show the name of archive\n'/var/backup/mydata.zip'\n>>> os.chdir('tmp') # change to an unpacking directory\n>>> shutil.unpack_archive('/var/backup/mydata.zip') # recover the data\n>>> pprint.pprint(shutil.get_archive_formats()) # display known formats\n[('bztar', \"bzip2'ed tar-file\"),\n('gztar', \"gzip'ed tar-file\"),\n('tar', 'uncompressed tar file'),\n('zip', 'ZIP file')]\n>>> shutil.register_archive_format( # register a new archive format\n... name='xz',\n... function=xz.compress, # callable archiving function\n... extra_args=[('level', 8)], # arguments to the function\n... description='xz compression'\n... )\n(Contributed by Tarek Ziad\u00e9.)\nsqlite3\u00b6\nThe sqlite3\nmodule was updated to pysqlite version 2.6.0. It has two new capabilities.\nThe\nsqlite3.Connection.in_transaction\nattribute is true if there is an active transaction for uncommitted changes.\nThe\nsqlite3.Connection.enable_load_extension()\nand sqlite3.Connection.load_extension()\nmethods allow you to load SQLite extensions from \u201c.so\u201d files. One well-known extension is the fulltext-search extension distributed with SQLite.\n(Contributed by R. 
David Murray and Shashwat Anand; bpo-8845.)\nhtml\u00b6\nA new html\nmodule was introduced with only a single function,\nescape()\n, which is used for escaping reserved characters from HTML\nmarkup:\n>>> import html\n>>> html.escape('x > 2 && x < 7')\n'x &gt; 2 &amp;&amp; x &lt; 7'\nsocket\u00b6\nThe socket\nmodule has two new improvements.\nSocket objects now have a\ndetach()\nmethod which puts the socket into a closed state without actually closing the underlying file descriptor. The latter can then be reused for other purposes. (Added by Antoine Pitrou; bpo-8524.)\nsocket.create_connection()\nnow supports the context management protocol to unconditionally consume socket.error\nexceptions and to close the socket when done. (Contributed by Giampaolo Rodol\u00e0; bpo-9794.)\nssl\u00b6\nThe ssl\nmodule added a number of features to satisfy common requirements\nfor secure (encrypted, authenticated) internet connections:\nA new class,\nSSLContext\n, serves as a container for persistent SSL data, such as protocol settings, certificates, private keys, and various other options. It includes a wrap_socket()\nmethod for creating an SSL socket from an SSL context.\nA new function,\nssl.match_hostname()\n, supports server identity verification for higher-level protocols by implementing the rules of HTTPS (from RFC 2818) which are also suitable for other protocols.\nThe\nssl.wrap_socket()\nconstructor function now takes a ciphers argument. The ciphers string lists the allowed encryption algorithms using the format described in the OpenSSL documentation.\nWhen linked against recent versions of OpenSSL, the\nssl\nmodule now supports the Server Name Indication extension to the TLS protocol, allowing multiple \u201cvirtual hosts\u201d using different certificates on a single IP port. 
This extension is only supported in client mode, and is activated by passing the server_hostname argument to ssl.SSLContext.wrap_socket()\n.\nVarious options have been added to the\nssl\nmodule, such as OP_NO_SSLv2\n, which disables the insecure and obsolete SSLv2 protocol.\nThe extension now loads all the OpenSSL ciphers and digest algorithms. If some SSL certificates cannot be verified, they are reported as an \u201cunknown algorithm\u201d error.\nThe version of OpenSSL being used is now accessible using the module attributes\nssl.OPENSSL_VERSION\n(a string), ssl.OPENSSL_VERSION_INFO\n(a 5-tuple), and ssl.OPENSSL_VERSION_NUMBER\n(an integer).\n(Contributed by Antoine Pitrou in bpo-8850, bpo-1589, bpo-8322, bpo-5639, bpo-4870, bpo-8484, and bpo-8321.)\nnntp\u00b6\nThe nntplib\nmodule has a revamped implementation with better bytes and\ntext semantics as well as more practical APIs. These improvements break\ncompatibility with the nntplib version in Python 3.1, which was partly\ndysfunctional in itself.\nSupport for secure connections through both implicit (using\nnntplib.NNTP_SSL\n) and explicit (using nntplib.NNTP.starttls()\n)\nTLS has also been added.\n(Contributed by Antoine Pitrou in bpo-9360 and Andrew Vant in bpo-1926.)\ncertificates\u00b6\nhttp.client.HTTPSConnection\n, urllib.request.HTTPSHandler\nand urllib.request.urlopen()\nnow take optional arguments to allow for\nserver certificate checking against a set of Certificate Authorities,\nas recommended in public uses of HTTPS.\n(Added by Antoine Pitrou, bpo-9003.)\nimaplib\u00b6\nSupport for explicit TLS on standard IMAP4 connections has been added through\nthe new imaplib.IMAP4.starttls\nmethod.\n(Contributed by Lorenzo M. 
Catucci and Antoine Pitrou, bpo-4471.)\nhttp.client\u00b6\nThere were a number of small API improvements in the http.client\nmodule.\nThe old-style HTTP 0.9 simple responses are no longer supported and the strict\nparameter is deprecated in all classes.\nThe HTTPConnection\nand\nHTTPSConnection\nclasses now have a source_address\nparameter for a (host, port) tuple indicating where the HTTP connection is made\nfrom.\nSupport for certificate checking and HTTPS virtual hosts were added to\nHTTPSConnection\n.\nThe request()\nmethod on connection objects\nallowed an optional body argument so that a file object could be used\nto supply the content of the request. Conveniently, the body argument now\nalso accepts an iterable object so long as it includes an explicit\nContent-Length\nheader. This extended interface is much more flexible than\nbefore.\nTo establish an HTTPS connection through a proxy server, there is a new\nset_tunnel()\nmethod that sets the host and\nport for HTTP Connect tunneling.\nTo match the behavior of http.server\n, the HTTP client library now also\nencodes headers with ISO-8859-1 (Latin-1) encoding. It was already doing that\nfor incoming headers, so now the behavior is consistent for both incoming and\noutgoing traffic. (See work by Armin Ronacher in bpo-10980.)\nunittest\u00b6\nThe unittest module has a number of improvements supporting test discovery for packages, easier experimentation at the interactive prompt, new testcase methods, improved diagnostic messages for test failures, and better method names.\nThe command-line call\npython -m unittest\ncan now accept file paths instead of module names for running specific tests (bpo-10620). The new test discovery can find tests within packages, locating any test importable from the top-level directory. 
The top-level directory can be specified with the -t\noption, a pattern for matching files with -p\n, and a directory to start discovery with -s\n:\n$ python -m unittest discover -s my_proj_dir -p _test.py\n(Contributed by Michael Foord.)\nExperimentation at the interactive prompt is now easier because the\nunittest.TestCase\nclass can now be instantiated without arguments:\n>>> from unittest import TestCase\n>>> TestCase().assertEqual(pow(2, 3), 8)\n(Contributed by Michael Foord.)\nThe\nunittest\nmodule has two new methods,\nassertWarns()\nand assertWarnsRegex()\n, to verify that a given warning type is triggered by the code under test:\nwith self.assertWarns(DeprecationWarning):\n    legacy_function('XYZ')\n(Contributed by Antoine Pitrou, bpo-9754.)\nAnother new method,\nassertCountEqual()\n, is used to compare two iterables to determine if their element counts are equal (whether the same elements are present with the same number of occurrences regardless of order):\ndef test_anagram(self):\n    self.assertCountEqual('algorithm', 'logarithm')\n(Contributed by Raymond Hettinger.)\nA principal feature of the unittest module is an effort to produce meaningful diagnostics when a test fails. When possible, the failure is recorded along with a diff of the output. This is especially helpful for analyzing log files of failed test runs. However, since diffs can sometimes be voluminous, there is a new\nmaxDiff\nattribute that sets the maximum length of diffs displayed.\nIn addition, the method names in the module have undergone a number of clean-ups.\nFor example,\nassertRegex()\nis the new name for assertRegexpMatches()\n, which was misnamed because the test uses\nre.search()\n, not re.match()\n. 
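The semantics behind that rename can be seen directly in the two re functions (a minimal sketch; the example strings are made up for illustration):

```python
import re

# re.search() finds the pattern anywhere in the string,
# which is the behavior the assertion actually uses...
assert re.search('shell', 'bombshell') is not None

# ...while re.match() succeeds only when the pattern
# matches at the very beginning of the string.
assert re.match('shell', 'bombshell') is None
assert re.match('bomb', 'bombshell') is not None
```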
Other methods using regular expressions are now named using the short form \u201cRegex\u201d in preference to \u201cRegexp\u201d \u2013 this matches the names used in other unittest implementations, matches Python\u2019s old name for the re\nmodule, and it has unambiguous camel-casing.\n(Contributed by Raymond Hettinger and implemented by Ezio Melotti.)\nTo improve consistency, some long-standing method aliases are being deprecated in favor of the preferred names:\nOld Name\nPreferred Name\nassert_()\nassertTrue()\nassertEquals()\nassertEqual()\nassertNotEquals()\nassertNotEqual()\nassertAlmostEquals()\nassertAlmostEqual()\nassertNotAlmostEquals()\nassertNotAlmostEqual()\nLikewise, the\nTestCase.fail*\nmethods deprecated in Python 3.1 are expected to be removed in Python 3.3.\n(Contributed by Ezio Melotti; bpo-9424.)\nThe\nassertDictContainsSubset()\nmethod was deprecated because it was misimplemented with the arguments in the wrong order. This created hard-to-debug optical illusions where tests like\nTestCase().assertDictContainsSubset({'a':1, 'b':2}, {'a':1})\nwould fail.\n(Contributed by Raymond Hettinger.)\nrandom\u00b6\nThe integer methods in the random\nmodule now do a better job of producing\nuniform distributions. Previously, they computed selections with\nint(n*random())\n, which had a slight bias whenever n was not a power of two.\nNow, multiple selections are made from a range up to the next power of two and a\nselection is kept only when it falls within the range 0 <= x < n\n. 
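The power-of-two rejection technique described above can be sketched in pure Python. This is an illustrative re-implementation, not the module's actual code, and the helper name randbelow_sketch is made up:

```python
import random

def randbelow_sketch(n):
    """Return a uniformly distributed integer in range(n) by rejection.

    Draw k random bits, where 2**k is a power of two at or above n,
    and reject any draw outside 0 <= x < n.  Unlike int(n * random()),
    every surviving value is exactly equally likely.
    """
    k = n.bit_length()            # 2**k >= n
    x = random.getrandbits(k)     # uniform over range(2**k)
    while x >= n:                 # reject out-of-range draws and retry
        x = random.getrandbits(k)
    return x
```

Since n is at least half of 2**k, each draw is accepted with probability of at least one half, so the expected number of draws per selection is below two.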
The\nfunctions and methods affected are randrange()\n,\nrandint()\n, choice()\n, shuffle()\nand\nsample()\n.\n(Contributed by Raymond Hettinger; bpo-9025.)\npoplib\u00b6\nThe POP3_SSL\nclass now accepts a context parameter, which is an\nssl.SSLContext\nobject allowing bundling SSL configuration options,\ncertificates and private keys into a single (potentially long-lived)\nstructure.\n(Contributed by Giampaolo Rodol\u00e0; bpo-8807.)\nasyncore\u00b6\nasyncore.dispatcher\nnow provides a\nhandle_accepted(sock, addr)\nmethod which is called when a connection has actually\nbeen established with a new remote endpoint. It is supposed to be used as a\nreplacement for the old handle_accept()\nand avoids the need for the user to call accept()\ndirectly.\n(Contributed by Giampaolo Rodol\u00e0; bpo-6706.)\ntempfile\u00b6\nThe tempfile\nmodule has a new context manager,\nTemporaryDirectory\n, which provides easy deterministic\ncleanup of temporary directories:\nwith tempfile.TemporaryDirectory() as tmpdirname:\n    print('created temporary dir:', tmpdirname)\n(Contributed by Neil Schemenauer and Nick Coghlan; bpo-5178.)\ninspect\u00b6\nThe\ninspect\nmodule has a new function,\ngetgeneratorstate()\n, to easily identify the current state of a generator-iterator:\n>>> from inspect import getgeneratorstate\n>>> def gen():\n...     yield 'demo'\n...\n>>> g = gen()\n>>> getgeneratorstate(g)\n'GEN_CREATED'\n>>> next(g)\n'demo'\n>>> getgeneratorstate(g)\n'GEN_SUSPENDED'\n>>> next(g, None)\n>>> getgeneratorstate(g)\n'GEN_CLOSED'\n(Contributed by Rodolpho Eckhardt and Nick Coghlan, bpo-10220.)\nTo support lookups without the possibility of activating a dynamic attribute, the\ninspect\nmodule has a new function,\ngetattr_static()\n. Unlike hasattr()\n, this is a true read-only search, guaranteed not to change state while it is searching:\n>>> class A: ... @property ... def f(self): ... print('Running') ... return 10 ... 
>>> a = A() >>> getattr(a, 'f') Running 10 >>> inspect.getattr_static(a, 'f') \n(Contributed by Michael Foord.)\npydoc\u00b6\nThe pydoc\nmodule now provides a much-improved web server interface, as\nwell as a new command-line option -b\nto automatically open a browser window\nto display that server:\n$ pydoc3.2 -b\n(Contributed by Ron Adam; bpo-2001.)\ndis\u00b6\nThe dis\nmodule gained two new functions for inspecting code,\ncode_info()\nand show_code()\n. Both provide detailed code\nobject information for the supplied function, method, source code string or code\nobject. The former returns a string and the latter prints it:\n>>> import dis, random\n>>> dis.show_code(random.choice)\nName: choice\nFilename: /Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/random.py\nArgument count: 2\nKw-only arguments: 0\nNumber of locals: 3\nStack size: 11\nFlags: OPTIMIZED, NEWLOCALS, NOFREE\nConstants:\n0: 'Choose a random element from a non-empty sequence.'\n1: 'Cannot choose from an empty sequence'\nNames:\n0: _randbelow\n1: len\n2: ValueError\n3: IndexError\nVariable names:\n0: self\n1: seq\n2: i\nIn addition, the dis()\nfunction now accepts string arguments\nso that the common idiom dis(compile(s, '', 'eval'))\ncan be shortened\nto dis(s)\n:\n>>> dis('3*x+1 if x%2==1 else x//2')\n1 0 LOAD_NAME 0 (x)\n3 LOAD_CONST 0 (2)\n6 BINARY_MODULO\n7 LOAD_CONST 1 (1)\n10 COMPARE_OP 2 (==)\n13 POP_JUMP_IF_FALSE 28\n16 LOAD_CONST 2 (3)\n19 LOAD_NAME 0 (x)\n22 BINARY_MULTIPLY\n23 LOAD_CONST 1 (1)\n26 BINARY_ADD\n27 RETURN_VALUE\n>> 28 LOAD_NAME 0 (x)\n31 LOAD_CONST 0 (2)\n34 BINARY_FLOOR_DIVIDE\n35 RETURN_VALUE\nTaken together, these improvements make it easier to explore how CPython is implemented and to see for yourself what the language syntax does under-the-hood.\n(Contributed by Nick Coghlan in bpo-9147.)\ndbm\u00b6\nAll database modules now support the get()\nand setdefault()\nmethods.\n(Suggested by Ray Allen in bpo-9523.)\nctypes\u00b6\nA new type, 
ctypes.c_ssize_t\n, represents the C ssize_t\ndatatype.\nsite\u00b6\nThe site\nmodule has three new functions useful for reporting on the\ndetails of a given Python installation.\ngetsitepackages()\nlists all global site-packages directories.\ngetuserbase()\nreports on the user\u2019s base directory where data can be stored.\ngetusersitepackages()\nreveals the user-specific site-packages directory path.\n>>> import site\n>>> site.getsitepackages()\n['/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/site-packages',\n'/Library/Frameworks/Python.framework/Versions/3.2/lib/site-python',\n'/Library/Python/3.2/site-packages']\n>>> site.getuserbase()\n'/Users/raymondhettinger/Library/Python/3.2'\n>>> site.getusersitepackages()\n'/Users/raymondhettinger/Library/Python/3.2/lib/python/site-packages'\nConveniently, some of site\u2019s functionality is accessible directly from the command-line:\n$ python -m site --user-base\n/Users/raymondhettinger/.local\n$ python -m site --user-site\n/Users/raymondhettinger/.local/lib/python3.2/site-packages\n(Contributed by Tarek Ziad\u00e9 in bpo-6693.)\nsysconfig\u00b6\nThe new sysconfig\nmodule makes it straightforward to discover\ninstallation paths and configuration variables that vary across platforms and\ninstallations.\nThe module offers simple access functions for platform and version information:\nget_platform()\nreturns values like linux-i586 or macosx-10.6-ppc.\nget_python_version()\nreturns a Python version string such as \u201c3.2\u201d.\nIt also provides access to the paths and variables corresponding to one of\nseven named schemes used by distutils\n. 
Those include posix_prefix,\nposix_home, posix_user, nt, nt_user, os2, os2_home:\nget_paths()\nmakes a dictionary containing installation paths for the current installation scheme.\nget_config_vars()\nreturns a dictionary of platform-specific variables.\nThere is also a convenient command-line interface:\nC:\Python32>python -m sysconfig\nPlatform: \"win32\"\nPython version: \"3.2\"\nCurrent installation scheme: \"nt\"\nPaths:\ndata = \"C:\Python32\"\ninclude = \"C:\Python32\Include\"\nplatinclude = \"C:\Python32\Include\"\nplatlib = \"C:\Python32\Lib\site-packages\"\nplatstdlib = \"C:\Python32\Lib\"\npurelib = \"C:\Python32\Lib\site-packages\"\nscripts = \"C:\Python32\Scripts\"\nstdlib = \"C:\Python32\Lib\"\nVariables:\nBINDIR = \"C:\Python32\"\nBINLIBDEST = \"C:\Python32\Lib\"\nEXE = \".exe\"\nINCLUDEPY = \"C:\Python32\Include\"\nLIBDEST = \"C:\Python32\Lib\"\nSO = \".pyd\"\nVERSION = \"32\"\nabiflags = \"\"\nbase = \"C:\Python32\"\nexec_prefix = \"C:\Python32\"\nplatbase = \"C:\Python32\"\nprefix = \"C:\Python32\"\nprojectbase = \"C:\Python32\"\npy_version = \"3.2\"\npy_version_nodot = \"32\"\npy_version_short = \"3.2\"\nsrcdir = \"C:\Python32\"\nuserbase = \"C:\Documents and Settings\Raymond\Application Data\Python\"\n(Moved out of Distutils by Tarek Ziad\u00e9.)\npdb\u00b6\nThe pdb\ndebugger module gained a number of usability improvements:\npdb.py\nnow has a -c\noption that executes commands as given in a .pdbrc\nscript file.\nA\n.pdbrc\nscript file can contain continue\nand next\ncommands that continue debugging.\nThe\nPdb\nclass constructor now accepts a nosigint argument.\nNew commands:\nl(list)\n, ll(long list)\nand source\nfor listing source code.\nNew commands:\ndisplay\nand undisplay\nfor showing or hiding the value of an expression if it has changed.\nNew command:\ninteract\nfor starting an interactive interpreter containing the global and local names found in the current scope.\nBreakpoints can be cleared by breakpoint 
number.\n(Contributed by Georg Brandl, Antonio Cuni and Ilya Sandler.)\nconfigparser\u00b6\nThe configparser\nmodule was modified to improve usability and\npredictability of the default parser and its supported INI syntax. The old\nConfigParser\nclass was removed in favor of SafeConfigParser\nwhich has in turn been renamed to ConfigParser\n. Support\nfor inline comments is now turned off by default and section or option\nduplicates are not allowed in a single configuration source.\nConfig parsers gained a new API based on the mapping protocol:\n>>> parser = ConfigParser()\n>>> parser.read_string(\"\"\"\n... [DEFAULT]\n... location = upper left\n... visible = yes\n... editable = no\n... color = blue\n...\n... [main]\n... title = Main Menu\n... color = green\n...\n... [options]\n... title = Options\n... \"\"\")\n>>> parser['main']['color']\n'green'\n>>> parser['main']['editable']\n'no'\n>>> section = parser['options']\n>>> section['title']\n'Options'\n>>> section['title'] = 'Options (editable: %(editable)s)'\n>>> section['title']\n'Options (editable: no)'\nThe new API is implemented on top of the classical API, so custom parser subclasses should be able to use it without modifications.\nThe INI file structure accepted by config parsers can now be customized. Users can specify alternative option/value delimiters and comment prefixes, change the name of the DEFAULT section or switch the interpolation syntax.\nThere is support for pluggable interpolation including an additional interpolation\nhandler ExtendedInterpolation\n:\n>>> parser = ConfigParser(interpolation=ExtendedInterpolation())\n>>> parser.read_dict({'buildout': {'directory': '/home/ambv/zope9'},\n... 'custom': {'prefix': '/usr/local'}})\n>>> parser.read_string(\"\"\"\n... [buildout]\n... parts =\n... zope9\n... instance\n... find-links =\n... ${buildout:directory}/downloads/dist\n...\n... [zope9]\n... recipe = plone.recipe.zope9install\n... location = /opt/zope\n...\n... [instance]\n... 
recipe = plone.recipe.zope9instance\n... zope9-location = ${zope9:location}\n... zope-conf = ${custom:prefix}/etc/zope.conf\n... \"\"\")\n>>> parser['buildout']['find-links']\n'\\n/home/ambv/zope9/downloads/dist'\n>>> parser['instance']['zope-conf']\n'/usr/local/etc/zope.conf'\n>>> instance = parser['instance']\n>>> instance['zope-conf']\n'/usr/local/etc/zope.conf'\n>>> instance['zope9-location']\n'/opt/zope'\nA number of smaller features were also introduced, like support for specifying encoding in read operations, specifying fallback values for get-functions, or reading directly from dictionaries and strings.\n(All changes contributed by \u0141ukasz Langa.)\nurllib.parse\u00b6\nA number of usability improvements were made for the urllib.parse\nmodule.\nThe urlparse()\nfunction now supports IPv6 addresses as described in RFC 2732:\n>>> import urllib.parse\n>>> urllib.parse.urlparse('http://[dead:beef:cafe:5417:affe:8FA3:deaf:feed]/foo/')\nParseResult(scheme='http',\nnetloc='[dead:beef:cafe:5417:affe:8FA3:deaf:feed]',\npath='/foo/',\nparams='',\nquery='',\nfragment='')\nThe urldefrag()\nfunction now returns a named tuple:\n>>> r = urllib.parse.urldefrag('http://python.org/about/#target')\n>>> r\nDefragResult(url='http://python.org/about/', fragment='target')\n>>> r[0]\n'http://python.org/about/'\n>>> r.fragment\n'target'\nAnd, the urlencode()\nfunction is now much more flexible,\naccepting either a string or bytes type for the query argument. If it is a\nstring, then the safe, encoding, and error parameters are sent to\nquote_plus()\nfor encoding:\n>>> urllib.parse.urlencode([\n... ('type', 'telenovela'),\n... ('name', '\u00bfD\u00f3nde Est\u00e1 Elisa?')],\n... encoding='latin-1')\n'type=telenovela&name=%BFD%F3nde+Est%E1+Elisa%3F'\nAs detailed in Parsing ASCII Encoded Bytes, all the urllib.parse\nfunctions now accept ASCII-encoded byte strings as input, so long as they are\nnot mixed with regular strings. 
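The bytes-in/bytes-out rule can be sketched as follows; mixing the two types in one call raises TypeError (the example URLs are made up):

```python
import urllib.parse

# All-bytes input yields all-bytes components.
parts = urllib.parse.urlparse(b'http://python.org/path;params?query#frag')
assert parts.netloc == b'python.org'
assert parts.fragment == b'frag'

# Mixing ASCII-encoded bytes with regular strings is rejected
# rather than silently coerced.
try:
    urllib.parse.urljoin(b'http://python.org/', '/about/')
except TypeError:
    pass  # bytes and str arguments cannot be mixed
else:
    raise AssertionError('expected TypeError for mixed arguments')
```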
If ASCII-encoded byte strings are given as\nparameters, the return types will also be ASCII-encoded byte strings:\n>>> urllib.parse.urlparse(b'http://www.python.org:80/about/')\nParseResultBytes(scheme=b'http', netloc=b'www.python.org:80',\npath=b'/about/', params=b'', query=b'', fragment=b'')\n(Work by Nick Coghlan, Dan Mahn, and Senthil Kumaran in bpo-2987, bpo-5468, and bpo-9873.)\nmailbox\u00b6\nThanks to a concerted effort by R. David Murray, the mailbox\nmodule has\nbeen fixed for Python 3.2. The challenge was that mailbox had been originally\ndesigned with a text interface, but email messages are best represented with\nbytes\nbecause various parts of a message may have different encodings.\nThe solution harnessed the email\npackage\u2019s binary support for parsing\narbitrary email messages. In addition, the solution required a number of API\nchanges.\nAs expected, the add()\nmethod for\nmailbox.Mailbox\nobjects now accepts binary input.\nStringIO\nand text file input are deprecated. Also, string input\nwill fail early if non-ASCII characters are used. Previously it would fail when\nthe email was processed in a later step.\nThere is also support for binary output. The get_file()\nmethod now returns a file in binary mode (where it used to incorrectly set\nthe file to text mode). There is also a new get_bytes()\nmethod that returns a bytes\nrepresentation of a message corresponding\nto a given key.\nIt is still possible to get non-binary output using the old API\u2019s\nget_string()\nmethod, but that approach\nis not very useful. Instead, it is best to extract messages from\na Message\nobject or to load them from binary input.\n(Contributed by R. David Murray, with efforts from Steffen Daode Nurpmeso and an initial patch by Victor Stinner in bpo-9124.)\nturtledemo\u00b6\nThe demonstration code for the turtle\nmodule was moved from the Demo\ndirectory to the main library. It includes over a dozen sample scripts with\nlively displays. 
Being on sys.path\n, it can now be run directly\nfrom the command-line:\n$ python -m turtledemo\n(Moved from the Demo directory by Alexander Belopolsky in bpo-10199.)\nMulti-threading\u00b6\nThe mechanism for serializing execution of concurrently running Python threads (generally known as the GIL or Global Interpreter Lock) has been rewritten. Among the objectives were more predictable switching intervals and reduced overhead due to lock contention and the number of ensuing system calls. The notion of a \u201ccheck interval\u201d to allow thread switches has been abandoned and replaced by an absolute duration expressed in seconds. This parameter is tunable through\nsys.setswitchinterval()\n. It currently defaults to 5 milliseconds.\nAdditional details about the implementation can be read from a python-dev mailing-list message (however, \u201cpriority requests\u201d as exposed in this message have not been kept for inclusion).\n(Contributed by Antoine Pitrou.)\nRegular and recursive locks now accept an optional timeout argument to their\nacquire()\nmethod. (Contributed by Antoine Pitrou; bpo-7316.)\nSimilarly,\nthreading.Semaphore.acquire()\nalso gained a timeout argument. (Contributed by Torsten Landschoff; bpo-850728.)\nRegular and recursive lock acquisitions can now be interrupted by signals on platforms using Pthreads. This means that Python programs that deadlock while acquiring locks can be successfully killed by repeatedly sending SIGINT to the process (by pressing Ctrl+C in most shells). (Contributed by Reid Kleckner; bpo-8844.)\nOptimizations\u00b6\nA number of small performance enhancements have been added:\nPython\u2019s peephole optimizer now recognizes patterns such as\nx in {1, 2, 3}\nas being a test for membership in a set of constants. The optimizer recasts the set\nas a frozenset\nand stores the pre-built constant.\nNow that the speed penalty is gone, it is practical to start writing membership tests using set-notation. 
This style is both semantically clear and operationally fast:\nextension = name.rpartition('.')[2]\nif extension in {'xml', 'html', 'xhtml', 'css'}:\n    handle(name)\n(Patch and additional tests contributed by Dave Malcolm; bpo-6690).\nSerializing and unserializing data using the\npickle\nmodule is now several times faster.\n(Contributed by Alexandre Vassalotti, Antoine Pitrou and the Unladen Swallow team in bpo-9410 and bpo-3873.)\nThe Timsort algorithm used in\nlist.sort()\nand sorted()\nnow runs faster and uses less memory when called with a key function. Previously, every element of a list was wrapped with a temporary object that remembered the key value associated with each element. Now, two arrays of keys and values are sorted in parallel. This saves the memory consumed by the sort wrappers, and it saves time lost to delegating comparisons.\n(Patch by Daniel Stutzbach in bpo-9915.)\nJSON decoding performance is improved and memory consumption is reduced whenever the same string is repeated for multiple keys. Also, JSON encoding now uses the C speedups when the\nsort_keys\nargument is true.\n(Contributed by Antoine Pitrou in bpo-7451 and by Raymond Hettinger and Antoine Pitrou in bpo-10314.)\nRecursive locks (created with the\nthreading.RLock()\nAPI) now benefit from a C implementation which makes them as fast as regular locks, and between 10x and 15x faster than their previous pure Python implementation.\n(Contributed by Antoine Pitrou; bpo-3001.)\nThe fast-search algorithm in stringlib is now used by the\nsplit()\n, rsplit()\n, splitlines()\nand replace()\nmethods on bytes\n, bytearray\nand str\nobjects. Likewise, the algorithm is also used by rfind()\n, rindex()\n, rsplit()\nand rpartition()\n.\nInteger to string conversions now work two \u201cdigits\u201d at a time, reducing the number of division and modulo operations.\n(bpo-6713 by Gawain Bolton, Mark Dickinson, and Victor Stinner.)\nThere were several other minor optimizations.
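The set-literal membership optimization described earlier can be observed directly: in CPython builds where this peephole optimization applies, the compiler stores a pre-built frozenset constant in the compiled code object. A small sketch:

```python
# Compile a membership test against a set literal and inspect its constants.
code = compile("extension in {'xml', 'html', 'xhtml', 'css'}", "<demo>", "eval")

# CPython folds the set literal into a single frozenset constant,
# so the set is built once at compile time, not on every evaluation.
folded = [c for c in code.co_consts if isinstance(c, frozenset)]
print(len(folded))  # 1
```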
Set differencing now runs faster\nwhen one operand is much larger than the other (patch by Andress Bennetts in\nbpo-8685). The array.repeat()\nmethod has a faster implementation\n(bpo-1569291 by Alexander Belopolsky). The BaseHTTPRequestHandler\nhas more efficient buffering (bpo-3709 by Andrew Schaaf). The\noperator.attrgetter()\nfunction has been sped-up (bpo-10160 by\nChristos Georgiou). And ConfigParser\nloads multi-line arguments a bit\nfaster (bpo-7113 by \u0141ukasz Langa).\nUnicode\u00b6\nPython has been updated to Unicode 6.0.0. The update to the standard adds over 2,000 new characters including emoji symbols which are important for mobile phones.\nIn addition, the updated standard has altered the character properties for two Kannada characters (U+0CF1, U+0CF2) and one New Tai Lue numeric character (U+19DA), making the former eligible for use in identifiers while disqualifying the latter. For more information, see Unicode Character Database Changes.\nCodecs\u00b6\nSupport was added for cp720 Arabic DOS encoding (bpo-1616979).\nMBCS encoding no longer ignores the error handler argument. In the default\nstrict mode, it raises an UnicodeDecodeError\nwhen it encounters an\nundecodable byte sequence and an UnicodeEncodeError\nfor an unencodable\ncharacter.\nThe MBCS codec supports 'strict'\nand 'ignore'\nerror handlers for\ndecoding, and 'strict'\nand 'replace'\nfor encoding.\nTo emulate Python3.1 MBCS encoding, select the 'ignore'\nhandler for decoding\nand the 'replace'\nhandler for encoding.\nOn Mac OS X, Python decodes command line arguments with 'utf-8'\nrather than\nthe locale encoding.\nBy default, tarfile\nuses 'utf-8'\nencoding on Windows (instead of\n'mbcs'\n) and the 'surrogateescape'\nerror handler on all operating\nsystems.\nDocumentation\u00b6\nThe documentation continues to be improved.\nA table of quick links has been added to the top of lengthy sections such as Built-in Functions. 
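The 'surrogateescape' error handler that tarfile now uses (mentioned in the Codecs section above) lets undecodable bytes round-trip through lone surrogate code points; a minimal illustration:

```python
# 0xFF is not valid UTF-8; 'surrogateescape' smuggles it through
# as the lone surrogate U+DCFF instead of raising UnicodeDecodeError.
text = b'caf\xff'.decode('utf-8', 'surrogateescape')
print(text[-1] == '\udcff')  # True

# Encoding with the same handler restores the original bytes exactly.
round_tripped = text.encode('utf-8', 'surrogateescape')
print(round_tripped)  # b'caf\xff'
```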
In the case of\nitertools\n, the links are accompanied by tables of cheatsheet-style summaries to provide an overview and memory jog without having to read all of the docs.\nIn some cases, the pure Python source code can be a helpful adjunct to the documentation, so many modules now feature quick links to the latest version of the source code. For example, the\nfunctools\nmodule documentation has a quick link at the top labeled:\nSource code Lib/functools.py.\n(Contributed by Raymond Hettinger; see rationale.)\nThe docs now contain more examples and recipes. In particular, the\nre\nmodule has an extensive section, Regular Expression Examples. Likewise, the\nitertools\nmodule continues to be updated with new Itertools Recipes.\nThe\ndatetime\nmodule now has an auxiliary implementation in pure Python. No functionality was changed. This just provides an easier-to-read alternate implementation.\n(Contributed by Alexander Belopolsky in bpo-9528.)\nThe unmaintained\nDemo\ndirectory has been removed. Some demos were integrated into the documentation, some were moved to the\nTools/demo\ndirectory, and others were removed altogether.\n(Contributed by Georg Brandl in bpo-7962.)\nIDLE\u00b6\nCode Repository\u00b6\nIn addition to the existing Subversion code repository at https://svn.python.org there is now a Mercurial repository at https://hg.python.org/.\nAfter the 3.2 release, there are plans to switch to Mercurial as the primary repository. This distributed version control system should make it easier for members of the community to create and share external changesets.
See PEP 385 for details.\nTo learn to use the new version control system, see the Quick Start or the Guide to Mercurial Workflows.\nBuild and C API Changes\u00b6\nChanges to Python\u2019s build process and to the C API include:\nThe idle, pydoc and 2to3 scripts are now installed with a version-specific suffix on\nmake altinstall\n(bpo-10679).The C functions that access the Unicode Database now accept and return characters from the full Unicode range, even on narrow unicode builds (Py_UNICODE_TOLOWER, Py_UNICODE_ISDECIMAL, and others). A visible difference in Python is that\nunicodedata.numeric()\nnow returns the correct value for large code points, andrepr()\nmay consider more characters as printable.(Reported by Bupjoe Lee and fixed by Amaury Forgeot D\u2019Arc; bpo-5127.)\nComputed gotos are now enabled by default on supported compilers (which are detected by the configure script). They can still be disabled selectively by specifying\n--without-computed-gotos\n.(Contributed by Antoine Pitrou; bpo-9203.)\nThe option\n--with-wctype-functions\nwas removed. The built-in unicode database is now used for all functions.(Contributed by Amaury Forgeot D\u2019Arc; bpo-9210.)\nHash values are now values of a new type,\nPy_hash_t\n, which is defined to be the same size as a pointer. Previously they were of type long, which on some 64-bit operating systems is still only 32 bits long. As a result of this fix,set\nanddict\ncan now hold more than2**32\nentries on builds with 64-bit pointers (previously, they could grow to that size but their performance degraded catastrophically).(Suggested by Raymond Hettinger and implemented by Benjamin Peterson; bpo-9778.)\nA new macro\nPy_VA_COPY\ncopies the state of the variable argument list. 
It is equivalent to C99 va_copy but available on all Python platforms (bpo-2443).A new C API function\nPySys_SetArgvEx()\nallows an embedded interpreter to setsys.argv\nwithout also modifyingsys.path\n(bpo-5753).PyEval_CallObject()\nis now only available in macro form. The function declaration, which was kept for backwards compatibility reasons, is now removed \u2013 the macro was introduced in 1997 (bpo-8276).There is a new function\nPyLong_AsLongLongAndOverflow()\nwhich is analogous toPyLong_AsLongAndOverflow()\n. They both serve to convert Pythonint\ninto a native fixed-width type while providing detection of cases where the conversion won\u2019t fit (bpo-7767).The\nPyUnicode_CompareWithASCIIString()\nfunction now returns not equal if the Python string is NUL terminated.There is a new function\nPyErr_NewExceptionWithDoc()\nthat is likePyErr_NewException()\nbut allows a docstring to be specified. This lets C exceptions have the same self-documenting capabilities as their pure Python counterparts (bpo-7033).When compiled with the\n--with-valgrind\noption, the pymalloc allocator will be automatically disabled when running under Valgrind. This gives improved memory leak detection when running under Valgrind, while taking advantage of pymalloc at other times (bpo-2422).Removed the\nO?\nformat from the PyArg_Parse functions. The format is no longer used and it had never been documented (bpo-8837).\nThere were a number of other small changes to the C-API. See the Misc/NEWS file for a complete list.\nAlso, there were a number of updates to the Mac OS X build, see Mac/BuildScript/README.txt for details. For users running a 32/64-bit build, there is a known problem with the default Tcl/Tk on Mac OS X 10.6. Accordingly, we recommend installing an updated alternative such as ActiveState Tcl/Tk 8.5.9. 
See https://www.python.org/download/mac/tcltk/ for additional details.\nPorting to Python 3.2\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code:\nThe\nconfigparser\nmodule has a number of clean-ups. The major change is to replace the oldConfigParser\nclass with long-standing preferred alternativeSafeConfigParser\n. In addition there are a number of smaller incompatibilities:The interpolation syntax is now validated on\nget()\nandset()\noperations. In the default interpolation scheme, only two tokens with percent signs are valid:%(name)s\nand%%\n, the latter being an escaped percent sign.The\nset()\nandadd_section()\nmethods now verify that values are actual strings. Formerly, unsupported types could be introduced unintentionally.Duplicate sections or options from a single source now raise either\nDuplicateSectionError\norDuplicateOptionError\n. Formerly, duplicates would silently overwrite a previous entry.Inline comments are now disabled by default so now the ; character can be safely used in values.\nComments now can be indented. Consequently, for ; or # to appear at the start of a line in multiline values, it has to be interpolated. This keeps comment prefix characters in values from being mistaken as comments.\n\"\"\nis now a valid value and is no longer automatically converted to an empty string. For empty strings, use\"option =\"\nin a line.\nThe\nnntplib\nmodule was reworked extensively, meaning that its APIs are often incompatible with the 3.1 APIs.bytearray\nobjects can no longer be used as filenames; instead, they should be converted tobytes\n.The\narray.tostring()\nandarray.fromstring()\nhave been renamed toarray.tobytes()\nandarray.frombytes()\nfor clarity. The old names have been deprecated. 
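The stricter configparser behavior described above (duplicate detection and validated interpolation) can be exercised directly; a short sketch:

```python
import configparser

# Duplicate options from a single source now raise instead of
# silently overwriting the earlier value.
parser = configparser.ConfigParser()
try:
    parser.read_string("[section]\nkey = 1\nkey = 2\n")
    duplicate_caught = False
except configparser.DuplicateOptionError:
    duplicate_caught = True
print(duplicate_caught)  # True

# In the default interpolation scheme, %(name)s and %% are the only
# valid percent tokens; %% is an escaped percent sign.
parser = configparser.ConfigParser()
parser.read_string("[paths]\nbase = /srv\nlog = %(base)s/log\nrate = 50%%\n")
print(parser['paths']['log'])   # /srv/log
print(parser['paths']['rate'])  # 50%
```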
(See bpo-8990.)\nPyArg_Parse*()\nfunctions:\n\u201ct#\u201d format has been removed: use \u201cs#\u201d or \u201cs*\u201d instead\n\u201cw\u201d and \u201cw#\u201d formats have been removed: use \u201cw*\u201d instead\nThe\nPyCObject\ntype, deprecated in 3.1, has been removed. To wrap opaque C pointers in Python objects, the\nPyCapsule\nAPI should be used instead; the new type has a well-defined interface for passing typing safety information and a less complicated signature for calling a destructor.\nThe\nsys.setfilesystemencoding()\nfunction was removed because it had a flawed design.\nThe\nrandom.seed()\nfunction and method now salt string seeds with an sha512 hash function. To access the previous version of seed in order to reproduce Python 3.1 sequences, set the version argument to 1,\nrandom.seed(s, version=1)\n.\nThe previously deprecated\nstring.maketrans()\nfunction has been removed in favor of the static methods\nbytes.maketrans()\nand bytearray.maketrans()\n. This change solves the confusion around which types were supported by the\nstring\nmodule. Now, str\n, bytes\n, and bytearray\neach have their own maketrans and translate methods with intermediate translation tables of the appropriate type.\n(Contributed by Georg Brandl; bpo-5675.)\nThe previously deprecated\ncontextlib.nested()\nfunction has been removed in favor of a plain\nwith\nstatement which can accept multiple context managers. The latter technique is faster (because it is built-in), and it does a better job finalizing multiple context managers when one of them raises an exception:\nwith open('mylog.txt') as infile, open('a.out', 'w') as outfile:\n    for line in infile:\n        if '<critical>' in line:\n            outfile.write(line)\n(Contributed by Georg Brandl and Mattias Br\u00e4ndstr\u00f6m; appspot issue 53094.)\nstruct.pack()\nnow only allows bytes for the\ns\nstring pack code. Formerly, it would accept text arguments and implicitly encode them to bytes using UTF-8.
This was problematic because it made assumptions about the correct encoding and because a variable-length encoding can fail when writing to a fixed-length segment of a structure.\nCode such as\nstruct.pack('<6sHHBBB', 'GIF87a', x, y)\nshould be rewritten to use bytes instead of text:\nstruct.pack('<6sHHBBB', b'GIF87a', x, y)\n.\n(Discovered by David Beazley and fixed by Victor Stinner; bpo-10783.)\nThe\nxml.etree.ElementTree\nclass now raises an xml.etree.ElementTree.ParseError\nwhen a parse fails. Previously it raised an xml.parsers.expat.ExpatError\n.\nThe new, longer\nstr()\nvalue on floats may break doctests which rely on the old output format.\nIn\nsubprocess.Popen\n, the default value for close_fds is now True\nunder Unix; under Windows, it is True\nif the three standard streams are set to None\n, False\notherwise. Previously, close_fds was always False\nby default, which produced difficult-to-solve bugs or race conditions when open file descriptors would leak into the child process.\nSupport for legacy HTTP 0.9 has been removed from\nurllib.request\nand http.client\n. Such support is still present on the server side (in http.server\n).\n(Contributed by Antoine Pitrou, bpo-10711.)\nSSL sockets in timeout mode now raise\nsocket.timeout\nwhen a timeout occurs, rather than a generic SSLError\n.\n(Contributed by Antoine Pitrou, bpo-10272.)\nThe misleading functions\nPyEval_AcquireLock()\nand PyEval_ReleaseLock()\nhave been officially deprecated.
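The struct.pack() change above means text arguments for the 's' code now fail loudly instead of being silently UTF-8-encoded; a quick check:

```python
import struct

# Bytes are accepted for the 's' pack code...
packed = struct.pack('<6s', b'GIF87a')
print(packed)  # b'GIF87a'

# ...but str arguments now raise struct.error rather than being
# implicitly encoded to bytes.
try:
    struct.pack('<6s', 'GIF87a')
    text_accepted = True
except struct.error:
    text_accepted = False
print(text_accepted)  # False
```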
The thread-state aware APIs (such as PyEval_SaveThread()\nand PyEval_RestoreThread()\n) should be used instead.\nDue to security risks,\nasyncore.handle_accept()\nhas been deprecated, and a new function,\nasyncore.handle_accepted()\n, was added to replace it.\n(Contributed by Giampaolo Rodola in bpo-6706.)\nDue to the new GIL implementation,\nPyEval_InitThreads()\ncannot be called before Py_Initialize()\nanymore.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 22777}
+{"url": "https://docs.python.org/3/deprecations/index.html", "title": "Deprecations", "content": "Deprecations\u00b6\nPending removal in Python 3.15\u00b6\nThe import system:\nSetting\n__cached__\non a module while failing to set\n__spec__.cached\nis deprecated. In Python 3.15,\n__cached__\nwill cease to be set or taken into consideration by the import system or standard library. (gh-97879)\nSetting\n__package__\non a module while failing to set\n__spec__.parent\nis deprecated. In Python 3.15,\n__package__\nwill cease to be set or taken into consideration by the import system or standard library.
(gh-97879)\n-\nThe undocumented\nctypes.SetPointerType()\nfunction has been deprecated since Python 3.13.\n-\nThe obsolete and rarely used\nCGIHTTPRequestHandler\nhas been deprecated since Python 3.13. No direct replacement exists. Anything is better than CGI to interface a web server with a request handler.The\n--cgi\nflag to the python -m http.server command-line interface has been deprecated since Python 3.13.\n-\nload_module()\nmethod: useexec_module()\ninstead.\n-\nThe\ngetdefaultlocale()\nfunction has been deprecated since Python 3.11. Its removal was originally planned for Python 3.13 (gh-90817), but has been postponed to Python 3.15. Usegetlocale()\n,setlocale()\n, andgetencoding()\ninstead. (Contributed by Hugo van Kemenade in gh-111187.)\n-\nPurePath.is_reserved()\nhas been deprecated since Python 3.13. Useos.path.isreserved()\nto detect reserved paths on Windows.\n-\njava_ver()\nhas been deprecated since Python 3.13. This function is only useful for Jython support, has a confusing API, and is largely untested.\n-\nThe check_home argument of\nsysconfig.is_python_build()\nhas been deprecated since Python 3.12.\n-\nRLock()\nwill take no arguments in Python 3.15. Passing any arguments has been deprecated since Python 3.14, as the Python version does not permit any arguments, but the C version allows any number of positional or keyword arguments, ignoring every argument.\n-\ntypes.CodeType\n: Accessingco_lnotab\nwas deprecated in PEP 626 since 3.10 and was planned to be removed in 3.12, but it only got a properDeprecationWarning\nin 3.12. May be removed in 3.15. (Contributed by Nikita Sobolev in gh-101866.)\n-\nThe undocumented keyword argument syntax for creating\nNamedTuple\nclasses (for example,Point = NamedTuple(\"Point\", x=int, y=int)\n) has been deprecated since Python 3.13. 
Use the class-based syntax or the functional syntax instead.\nWhen using the functional syntax of\nTypedDict\ns, failing to pass a value to the fields parameter (TD = TypedDict(\"TD\")\n) or passing None\n(TD = TypedDict(\"TD\", None)\n) has been deprecated since Python 3.13. Use class TD(TypedDict): pass\nor TD = TypedDict(\"TD\", {})\nto create a TypedDict with zero fields.\nThe\ntyping.no_type_check_decorator()\ndecorator function has been deprecated since Python 3.13. After eight years in the\ntyping\nmodule, it has yet to be supported by any major type checker.\nwave\n:\nThe\ngetmark()\n, setmark()\n, and getmarkers()\nmethods of the\nWave_read\nand Wave_write\nclasses have been deprecated since Python 3.13.\n-\nload_module()\nhas been deprecated since Python 3.10. Use exec_module()\ninstead. (Contributed by Jiahao Li in gh-125746.)\nPending removal in Python 3.16\u00b6\nThe import system:\nSetting\n__loader__\non a module while failing to set\n__spec__.loader\nis deprecated. In Python 3.16,\n__loader__\nwill cease to be set or taken into consideration by the import system or the standard library.\n-\nThe\n'u'\nformat code (wchar_t\n) has been deprecated in documentation since Python 3.3 and at runtime since Python 3.13. Use the\n'w'\nformat code (Py_UCS4\n) for Unicode characters instead.\n-\nasyncio.iscoroutinefunction()\nis deprecated and will be removed in Python 3.16; use\ninspect.iscoroutinefunction()\ninstead. (Contributed by Jiahao Li and Kumar Aditya in gh-122875.)\nThe asyncio\npolicy system is deprecated and will be removed in Python 3.16. In particular, the following classes and functions are deprecated:\nUsers should use\nasyncio.run()\nor asyncio.Runner\nwith loop_factory to use the desired event loop implementation.\nFor example, to use\nasyncio.SelectorEventLoop\non Windows:\nimport asyncio\n\nasync def main():\n    ...
asyncio.run(main(), loop_factory=asyncio.SelectorEventLoop)\n(Contributed by Kumar Aditya in gh-127949.)\n-\nBitwise inversion on boolean types,\n~True\nor~False\nhas been deprecated since Python 3.12, as it produces surprising and unintuitive results (-2\nand-1\n). Usenot x\ninstead for the logical negation of a Boolean. In the rare case that you need the bitwise inversion of the underlying integer, convert toint\nexplicitly (~int(x)\n).\n-\nCalling the Python implementation of\nfunctools.reduce()\nwith function or sequence as keyword arguments has been deprecated since Python 3.14.\n-\nSupport for custom logging handlers with the strm argument is deprecated and scheduled for removal in Python 3.16. Define handlers with the stream argument instead. (Contributed by Mariusz Felisiak in gh-115032.)\n-\nValid extensions start with a \u2018.\u2019 or are empty for\nmimetypes.MimeTypes.add_type()\n. Undotted extensions are deprecated and will raise aValueError\nin Python 3.16. (Contributed by Hugo van Kemenade in gh-75223.)\n-\nThe\nExecError\nexception has been deprecated since Python 3.14. It has not been used by any function inshutil\nsince Python 3.4, and is now an alias ofRuntimeError\n.\n-\nThe\nClass.get_methods\nmethod has been deprecated since Python 3.14.\nsys\n:The\n_enablelegacywindowsfsencoding()\nfunction has been deprecated since Python 3.13. Use thePYTHONLEGACYWINDOWSFSENCODING\nenvironment variable instead.\n-\nThe\nsysconfig.expand_makefile_vars()\nfunction has been deprecated since Python 3.14. Use thevars\nargument ofsysconfig.get_paths()\ninstead.\n-\nThe undocumented and unused\nTarFile.tarfile\nattribute has been deprecated since Python 3.13.\nPending removal in Python 3.17\u00b6\n-\ncollections.abc.ByteString\nis scheduled for removal in Python 3.17.Use\nisinstance(obj, collections.abc.Buffer)\nto test ifobj\nimplements the buffer protocol at runtime. 
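For the deprecated ~True / ~False inversion noted above, the recommended replacements behave as follows:

```python
# Logical negation of a Boolean: use `not`, which stays within bool.
flag = True
print(not flag)  # False

# Bitwise inversion of the *underlying integer*, in the rare case it
# is really wanted: convert to int explicitly first.
print(~int(True))   # -2
print(~int(False))  # -1
```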
For use in type annotations, either useBuffer\nor a union that explicitly specifies the types your code supports (e.g.,bytes | bytearray | memoryview\n).ByteString\nwas originally intended to be an abstract class that would serve as a supertype of bothbytes\nandbytearray\n. However, since the ABC never had any methods, knowing that an object was an instance ofByteString\nnever actually told you anything useful about the object. Other common buffer types such asmemoryview\nwere also never understood as subtypes ofByteString\n(either at runtime or by static type checkers).See PEP 688 for more details. (Contributed by Shantanu Jain in gh-91896.)\n-\nBefore Python 3.14, old-style unions were implemented using the private class\ntyping._UnionGenericAlias\n. This class is no longer needed for the implementation, but it has been retained for backward compatibility, with removal scheduled for Python 3.17. Users should use documented introspection helpers liketyping.get_origin()\nandtyping.get_args()\ninstead of relying on private implementation details.typing.ByteString\n, deprecated since Python 3.9, is scheduled for removal in Python 3.17.Use\nisinstance(obj, collections.abc.Buffer)\nto test ifobj\nimplements the buffer protocol at runtime. For use in type annotations, either useBuffer\nor a union that explicitly specifies the types your code supports (e.g.,bytes | bytearray | memoryview\n).ByteString\nwas originally intended to be an abstract class that would serve as a supertype of bothbytes\nandbytearray\n. However, since the ABC never had any methods, knowing that an object was an instance ofByteString\nnever actually told you anything useful about the object. Other common buffer types such asmemoryview\nwere also never understood as subtypes ofByteString\n(either at runtime or by static type checkers).See PEP 688 for more details. 
(Contributed by Shantanu Jain in gh-91896.)\nPending removal in Python 3.18\u00b6\nPending removal in Python 3.19\u00b6\nPending removal in future versions\u00b6\nThe following APIs will be removed in the future, although there is currently no date scheduled for their removal.\n-\nNesting argument groups and nesting mutually exclusive groups are deprecated.\nPassing the undocumented keyword argument prefix_chars to\nadd_argument_group()\nis now deprecated.The\nargparse.FileType\ntype converter is deprecated.\n-\nGenerators:\nthrow(type, exc, tb)\nandathrow(type, exc, tb)\nsignature is deprecated: usethrow(exc)\nandathrow(exc)\ninstead, the single argument signature.Currently Python accepts numeric literals immediately followed by keywords, for example\n0in x\n,1or x\n,0if 1else 2\n. It allows confusing and ambiguous expressions like[0x1for x in y]\n(which can be interpreted as[0x1 for x in y]\nor[0x1f or x in y]\n). A syntax warning is raised if the numeric literal is immediately followed by one of keywordsand\n,else\n,for\n,if\n,in\n,is\nandor\n. In a future release it will be changed to a syntax error. (gh-87999)Support for\n__index__()\nand__int__()\nmethod returning non-int type: these methods will be required to return an instance of a strict subclass ofint\n.Support for\n__float__()\nmethod returning a strict subclass offloat\n: these methods will be required to return an instance offloat\n.Support for\n__complex__()\nmethod returning a strict subclass ofcomplex\n: these methods will be required to return an instance ofcomplex\n.Passing a complex number as the real or imag argument in the\ncomplex()\nconstructor is now deprecated; it should only be passed as a single positional argument. (Contributed by Serhiy Storchaka in gh-109218.)\ncalendar\n:calendar.January\nandcalendar.February\nconstants are deprecated and replaced bycalendar.JANUARY\nandcalendar.FEBRUARY\n. (Contributed by Prince Roshan in gh-103636.)codecs\n: useopen()\ninstead ofcodecs.open()\n. 
(gh-133038)codeobject.co_lnotab\n: use thecodeobject.co_lines()\nmethod instead.-\nutcnow()\n: usedatetime.datetime.now(tz=datetime.UTC)\n.utcfromtimestamp()\n: usedatetime.datetime.fromtimestamp(timestamp, tz=datetime.UTC)\n.\ngettext\n: Plural value must be an integer.-\ncache_from_source()\ndebug_override parameter is deprecated: use the optimization parameter instead.\n-\nEntryPoints\ntuple interface.Implicit\nNone\non return values.\nlogging\n: thewarn()\nmethod has been deprecated since Python 3.3, usewarning()\ninstead.mailbox\n: Use of StringIO input and text mode is deprecated, use BytesIO and binary mode instead.os\n: Callingos.register_at_fork()\nin multi-threaded process.pydoc.ErrorDuringImport\n: A tuple value for exc_info parameter is deprecated, use an exception instance.re\n: More strict rules are now applied for numerical group references and group names in regular expressions. Only sequence of ASCII digits is now accepted as a numerical reference. The group name in bytes patterns and replacement strings can now only contain ASCII letters and digits and underscore. 
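The utcnow() / utcfromtimestamp() replacements listed above produce timezone-aware datetimes; a minimal sketch (written with datetime.timezone.utc, which the datetime.UTC alias refers to in recent versions):

```python
import datetime

# Aware "now" in UTC, replacing the naive datetime.datetime.utcnow().
now = datetime.datetime.now(tz=datetime.timezone.utc)
print(now.tzinfo)  # UTC

# Aware conversion of a POSIX timestamp, replacing utcfromtimestamp().
epoch = datetime.datetime.fromtimestamp(0, tz=datetime.timezone.utc)
print(epoch.isoformat())  # 1970-01-01T00:00:00+00:00
```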
(Contributed by Serhiy Storchaka in gh-91760.)sre_compile\n,sre_constants\nandsre_parse\nmodules.shutil\n:rmtree()\n\u2019s onerror parameter is deprecated in Python 3.12; use the onexc parameter instead.ssl\noptions and protocols:ssl.SSLContext\nwithout protocol argument is deprecated.ssl.SSLContext\n:set_npn_protocols()\nandselected_npn_protocol()\nare deprecated: use ALPN instead.ssl.OP_NO_SSL*\noptionsssl.OP_NO_TLS*\noptionsssl.PROTOCOL_SSLv3\nssl.PROTOCOL_TLS\nssl.PROTOCOL_TLSv1\nssl.PROTOCOL_TLSv1_1\nssl.PROTOCOL_TLSv1_2\nssl.TLSVersion.SSLv3\nssl.TLSVersion.TLSv1\nssl.TLSVersion.TLSv1_1\nthreading\nmethods:threading.Condition.notifyAll()\n: usenotify_all()\n.threading.Event.isSet()\n: useis_set()\n.threading.Thread.isDaemon()\n,threading.Thread.setDaemon()\n: usethreading.Thread.daemon\nattribute.threading.Thread.getName()\n,threading.Thread.setName()\n: usethreading.Thread.name\nattribute.threading.currentThread()\n: usethreading.current_thread()\n.threading.activeCount()\n: usethreading.active_count()\n.\nThe internal class\ntyping._UnionGenericAlias\nis no longer used to implementtyping.Union\n. To preserve compatibility with users using this private class, a compatibility shim will be provided until at least Python 3.17. (Contributed by Jelle Zijlstra in gh-105499.)unittest.IsolatedAsyncioTestCase\n: it is deprecated to return a value that is notNone\nfrom a test case.urllib.parse\ndeprecated functions:urlparse()\ninsteadsplitattr()\nsplithost()\nsplitnport()\nsplitpasswd()\nsplitport()\nsplitquery()\nsplittag()\nsplittype()\nsplituser()\nsplitvalue()\nto_bytes()\nwsgiref\n:SimpleHandler.stdout.write()\nshould not do partial writes.xml.etree.ElementTree\n: Testing the truth value of anElement\nis deprecated. In a future release it will always returnTrue\n. 
Prefer explicit len(elem) or elem is not None tests instead.
- sys._clear_type_cache() is deprecated: use sys._clear_internal_caches() instead.

C API deprecations¶

Pending removal in Python 3.15¶

- PyImport_ImportModuleNoBlock(): Use PyImport_ImportModule() instead.
- PyWeakref_GetObject() and PyWeakref_GET_OBJECT(): Use PyWeakref_GetRef() instead. The pythoncapi-compat project can be used to get PyWeakref_GetRef() on Python 3.12 and older.
- Py_UNICODE type and the Py_UNICODE_WIDE macro: Use wchar_t instead.
- PyUnicode_AsDecodedObject(): Use PyCodec_Decode() instead.
- PyUnicode_AsDecodedUnicode(): Use PyCodec_Decode() instead; note that some codecs (for example, "base64") may return a type other than str, such as bytes.
- PyUnicode_AsEncodedObject(): Use PyCodec_Encode() instead.
- PyUnicode_AsEncodedUnicode(): Use PyCodec_Encode() instead; note that some codecs (for example, "base64") may return a type other than bytes, such as str.
- Python initialization functions, deprecated in Python 3.13:
  - Py_GetPath(): Use PyConfig_Get("module_search_paths") (sys.path) instead.
  - Py_GetPrefix(): Use PyConfig_Get("base_prefix") (sys.base_prefix) instead. Use PyConfig_Get("prefix") (sys.prefix) if virtual environments need to be handled.
  - Py_GetExecPrefix(): Use PyConfig_Get("base_exec_prefix") (sys.base_exec_prefix) instead. Use PyConfig_Get("exec_prefix") (sys.exec_prefix) if virtual environments need to be handled.
  - Py_GetProgramFullPath(): Use PyConfig_Get("executable") (sys.executable) instead.
  - Py_GetProgramName(): Use PyConfig_Get("executable") (sys.executable) instead.
  - Py_GetPythonHome(): Use PyConfig_Get("home") or the PYTHONHOME environment variable instead.

  The pythoncapi-compat project can be used to get PyConfig_Get() on Python 3.13 and older.
- Functions to configure Python's initialization, deprecated in Python 3.11:
  - PySys_SetArgvEx(): Set PyConfig.argv instead.
  - PySys_SetArgv(): Set PyConfig.argv instead.
  - Py_SetProgramName(): Set PyConfig.program_name instead.
  - Py_SetPythonHome(): Set PyConfig.home instead.
  - PySys_ResetWarnOptions(): Clear sys.warnoptions and warnings.filters instead.

  The Py_InitializeFromConfig() API should be used with PyConfig instead.
- Global configuration variables:
  - Py_DebugFlag: Use PyConfig.parser_debug or PyConfig_Get("parser_debug") instead.
  - Py_VerboseFlag: Use PyConfig.verbose or PyConfig_Get("verbose") instead.
  - Py_QuietFlag: Use PyConfig.quiet or PyConfig_Get("quiet") instead.
  - Py_InteractiveFlag: Use PyConfig.interactive or PyConfig_Get("interactive") instead.
  - Py_InspectFlag: Use PyConfig.inspect or PyConfig_Get("inspect") instead.
  - Py_OptimizeFlag: Use PyConfig.optimization_level or PyConfig_Get("optimization_level") instead.
  - Py_NoSiteFlag: Use PyConfig.site_import or PyConfig_Get("site_import") instead.
  - Py_BytesWarningFlag: Use PyConfig.bytes_warning or PyConfig_Get("bytes_warning") instead.
  - Py_FrozenFlag: Use PyConfig.pathconfig_warnings or PyConfig_Get("pathconfig_warnings") instead.
  - Py_IgnoreEnvironmentFlag: Use PyConfig.use_environment or PyConfig_Get("use_environment") instead.
  - Py_DontWriteBytecodeFlag: Use PyConfig.write_bytecode or PyConfig_Get("write_bytecode") instead.
  - Py_NoUserSiteDirectory: Use PyConfig.user_site_directory or PyConfig_Get("user_site_directory") instead.
  - Py_UnbufferedStdioFlag: Use PyConfig.buffered_stdio or PyConfig_Get("buffered_stdio") instead.
  - Py_HashRandomizationFlag: Use PyConfig.use_hash_seed and PyConfig.hash_seed or PyConfig_Get("hash_seed") instead.
  - Py_IsolatedFlag: Use PyConfig.isolated or PyConfig_Get("isolated") instead.
  - Py_LegacyWindowsFSEncodingFlag: Use PyPreConfig.legacy_windows_fs_encoding or PyConfig_Get("legacy_windows_fs_encoding") instead.
  - Py_LegacyWindowsStdioFlag: Use PyConfig.legacy_windows_stdio or PyConfig_Get("legacy_windows_stdio") instead.
  - Py_FileSystemDefaultEncoding, Py_HasFileSystemDefaultEncoding: Use PyConfig.filesystem_encoding or PyConfig_Get("filesystem_encoding") instead.
  - Py_FileSystemDefaultEncodeErrors: Use PyConfig.filesystem_errors or PyConfig_Get("filesystem_errors") instead.
  - Py_UTF8Mode: Use PyPreConfig.utf8_mode or PyConfig_Get("utf8_mode") instead. (See Py_PreInitialize().)

  The Py_InitializeFromConfig() API should be used with PyConfig to set these options.
Or PyConfig_Get() can be used to get these options at runtime.

Pending removal in Python 3.18¶

The following private functions are deprecated and planned for removal in Python 3.18:
- _PyBytes_Join(): use PyBytes_Join().
- _PyDict_GetItemStringWithError(): use PyDict_GetItemStringRef().
- _PyDict_Pop(): use PyDict_Pop().
- _PyLong_Sign(): use PyLong_GetSign().
- _PyLong_FromDigits() and _PyLong_New(): use PyLongWriter_Create().
- _PyThreadState_UncheckedGet(): use PyThreadState_GetUnchecked().
- _PyUnicode_AsString(): use PyUnicode_AsUTF8().
- _PyUnicodeWriter_Init(): replace _PyUnicodeWriter_Init(&writer) with writer = PyUnicodeWriter_Create(0).
- _PyUnicodeWriter_Finish(): replace _PyUnicodeWriter_Finish(&writer) with PyUnicodeWriter_Finish(writer).
- _PyUnicodeWriter_Dealloc(): replace _PyUnicodeWriter_Dealloc(&writer) with PyUnicodeWriter_Discard(writer).
- _PyUnicodeWriter_WriteChar(): replace _PyUnicodeWriter_WriteChar(&writer, ch) with PyUnicodeWriter_WriteChar(writer, ch).
- _PyUnicodeWriter_WriteStr(): replace _PyUnicodeWriter_WriteStr(&writer, str) with PyUnicodeWriter_WriteStr(writer, str).
- _PyUnicodeWriter_WriteSubstring(): replace _PyUnicodeWriter_WriteSubstring(&writer, str, start, end) with PyUnicodeWriter_WriteSubstring(writer, str, start, end).
- _PyUnicodeWriter_WriteASCIIString(): replace _PyUnicodeWriter_WriteASCIIString(&writer, str) with PyUnicodeWriter_WriteASCII(writer, str).
- _PyUnicodeWriter_WriteLatin1String(): replace _PyUnicodeWriter_WriteLatin1String(&writer, str) with PyUnicodeWriter_WriteUTF8(writer, str).
- _PyUnicodeWriter_Prepare(): (no replacement).
- _PyUnicodeWriter_PrepareKind(): (no replacement).
- _Py_HashPointer(): use Py_HashPointer().
- _Py_fopen_obj(): use Py_fopen().

The pythoncapi-compat project can be used to get these new public functions on Python 3.13 and older.
(Contributed by Victor Stinner in gh-128863.)

Pending removal in future versions¶

The following APIs are deprecated and will be removed, although there is currently no date scheduled for their removal.
- Py_TPFLAGS_HAVE_FINALIZE: Unneeded since Python 3.8.
- PyErr_Fetch(): Use PyErr_GetRaisedException() instead.
- PyErr_NormalizeException(): Use PyErr_GetRaisedException() instead.
- PyErr_Restore(): Use PyErr_SetRaisedException() instead.
- PyModule_GetFilename(): Use PyModule_GetFilenameObject() instead.
- PyOS_AfterFork(): Use PyOS_AfterFork_Child() instead.
- PySlice_GetIndicesEx(): Use PySlice_Unpack() and PySlice_AdjustIndices() instead.
- PyUnicode_READY(): Unneeded since Python 3.12.
- PyErr_Display(): Use PyErr_DisplayException() instead.
- _PyErr_ChainExceptions(): Use _PyErr_ChainExceptions1() instead.
- PyBytesObject.ob_shash member: call PyObject_Hash() instead.
- Thread Local Storage (TLS) API:
  - PyThread_create_key(): Use PyThread_tss_alloc() instead.
  - PyThread_delete_key(): Use PyThread_tss_free() instead.
  - PyThread_set_key_value(): Use PyThread_tss_set() instead.
  - PyThread_get_key_value(): Use PyThread_tss_get() instead.
  - PyThread_delete_key_value(): Use PyThread_tss_delete() instead.
  - PyThread_ReInitTLS(): Unneeded since Python 3.7.

What's New in Python 2.2¶
(Source: https://docs.python.org/3/whatsnew/2.2.html)

Author: A.M. Kuchling

Introduction¶

This article explains the new features in Python 2.2.2, released on October 14, 2002. Python 2.2.2 is a bugfix release of Python 2.2, originally released on December 21, 2001.

Python 2.2 can be thought of as the "cleanup release".
There are some features such as generators and iterators that are completely new, but most of the changes, significant and far-reaching though they may be, are aimed at cleaning up irregularities and dark corners of the language design.

This article doesn't attempt to provide a complete specification of the new features, but instead provides a convenient overview. For full details, you should refer to the documentation for Python 2.2, such as the Python Library Reference and the Python Reference Manual. If you want to understand the complete implementation and design rationale for a change, refer to the PEP for a particular new feature.

PEPs 252 and 253: Type and Class Changes¶

The largest and most far-reaching changes in Python 2.2 are to Python's model of objects and classes. The changes should be backward compatible, so it's likely that your code will continue to run unchanged, but the changes provide some amazing new capabilities. Before beginning this, the longest and most complicated section of this article, I'll provide an overview of the changes and offer some comments.

A long time ago I wrote a web page listing flaws in Python's design. One of the most significant flaws was that it's impossible to subclass Python types implemented in C. In particular, it's not possible to subclass built-in types, so you can't just subclass, say, lists in order to add a single useful method to them. The UserList module provides a class that supports all of the methods of lists and that can be subclassed further, but there's lots of C code that expects a regular Python list and won't accept a UserList instance.

Python 2.2 fixes this, and in the process adds some exciting new capabilities. A brief summary:

- You can subclass built-in types such as lists and even integers, and your subclasses should work in every place that requires the original type.
- It's now possible to define static and class methods, in addition to the instance methods available in previous versions of Python.
- It's also possible to automatically call methods on accessing or setting an instance attribute by using a new mechanism called properties. Many uses of __getattr__() can be rewritten to use properties instead, making the resulting code simpler and faster. As a small side benefit, attributes can now have docstrings, too.
- The list of legal attributes for an instance can be limited to a particular set using slots, making it possible to safeguard against typos and perhaps make more optimizations possible in future versions of Python.

Some users have voiced concern about all these changes. Sure, they say, the new features are neat and lend themselves to all sorts of tricks that weren't possible in previous versions of Python, but they also make the language more complicated. Some people have said that they've always recommended Python for its simplicity, and feel that its simplicity is being lost.

Personally, I think there's no need to worry. Many of the new features are quite esoteric, and you can write a lot of Python code without ever needing to be aware of them. Writing a simple class is no more difficult than it ever was, so you don't need to bother learning or teaching them unless they're actually needed. Some very complicated tasks that were previously only possible from C will now be possible in pure Python, and to my mind that's all for the better.

I'm not going to attempt to cover every single corner case and small change that were required to make the new features work. Instead this section will paint only the broad strokes.
See section Related Links, "Related Links", for further sources of information about Python 2.2's new object model.

Old and New Classes¶

First, you should know that Python 2.2 really has two kinds of classes: classic or old-style classes, and new-style classes. The old-style class model is exactly the same as the class model in earlier versions of Python. All the new features described in this section apply only to new-style classes. This divergence isn't intended to last forever; eventually old-style classes will be dropped, possibly in Python 3.0.

So how do you define a new-style class? You do it by subclassing an existing new-style class. Most of Python's built-in types, such as integers, lists, dictionaries, and even files, are new-style classes now. A new-style class named object, the base class for all built-in types, has also been added so if no built-in type is suitable, you can just subclass object:

class C(object):
    def __init__(self):
        ...
    ...

This means that class statements that don't have any base classes are always classic classes in Python 2.2. (Actually you can also change this by setting a module-level variable named __metaclass__ — see PEP 253 for the details — but it's easier to just subclass object.)

The type objects for the built-in types are available as built-ins, named using a clever trick. Python has always had built-in functions named int(), float(), and str(). In 2.2, they aren't functions any more, but type objects that behave as factories when called.

>>> int
<type 'int'>
>>> int('123')
123

To make the set of types complete, new type objects such as dict() and file() have been added.
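Subclassing a built-in type still works exactly this way in modern Python, where every class is new-style. A minimal sketch (the DefaultList name is invented for illustration):

```python
class DefaultList(list):
    """A list whose get() returns a default instead of raising IndexError."""
    def get(self, index, default=None):
        try:
            return self[index]
        except IndexError:
            return default

d = DefaultList([1, 2, 3])
print(d.get(1))             # 2
print(d.get(10))            # None
print(isinstance(d, list))  # True: usable anywhere a plain list is expected
```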
Here's a more interesting example, adding a lock() method to file objects:

class LockableFile(file):
    def lock(self, operation, length=0, start=0, whence=0):
        import fcntl
        return fcntl.lockf(self.fileno(), operation,
                           length, start, whence)

The now-obsolete posixfile module contained a class that emulated all of a file object's methods and also added a lock() method, but this class couldn't be passed to internal functions that expected a built-in file, something which is possible with our new LockableFile.

Descriptors¶

In previous versions of Python, there was no consistent way to discover what attributes and methods were supported by an object. There were some informal conventions, such as defining __members__ and __methods__ attributes that were lists of names, but often the author of an extension type or a class wouldn't bother to define them. You could fall back on inspecting the __dict__ of an object, but when class inheritance or an arbitrary __getattr__() hook were in use this could still be inaccurate.

The one big idea underlying the new class model is that an API for describing the attributes of an object using descriptors has been formalized. Descriptors specify the value of an attribute, stating whether it's a method or a field.
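To make the descriptor idea concrete, here is a small sketch in modern Python, where the protocol methods are spelled __get__(self, obj, objtype) and __set__(self, obj, value); the Typed class is invented for illustration:

```python
class Typed:
    """Data descriptor that enforces a type on one attribute."""
    def __init__(self, name, kind):
        self.name = name    # key used in the instance's __dict__
        self.kind = kind    # required type
    def __get__(self, obj, objtype=None):
        if obj is None:     # accessed on the class itself
            return self
        return obj.__dict__[self.name]
    def __set__(self, obj, value):
        if not isinstance(value, self.kind):
            raise TypeError(f"{self.name} must be {self.kind.__name__}")
        obj.__dict__[self.name] = value

class Point:
    x = Typed("x", int)

p = Point()
p.x = 3
print(p.x)  # 3
```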
With the descriptor API, static methods and class methods become possible, as well as more exotic constructs.

Attribute descriptors are objects that live inside class objects, and have a few attributes of their own:
- __name__ is the attribute's name.
- __doc__ is the attribute's docstring.
- __get__(object) is a method that retrieves the attribute value from object.
- __set__(object, value) sets the attribute on object to value.
- __delete__(object, value) deletes the value attribute of object.

For example, when you write obj.x, the steps that Python actually performs are:

descriptor = obj.__class__.x
descriptor.__get__(obj)

For methods, descriptor.__get__ returns a temporary object that's callable, and wraps up the instance and the method to be called on it. This is also why static methods and class methods are now possible; they have descriptors that wrap up just the method, or the method and the class. As a brief explanation of these new kinds of methods, static methods aren't passed the instance, and therefore resemble regular functions. Class methods are passed the class of the object, but not the object itself. Static and class methods are defined like this:

class C(object):
    def f(arg1, arg2):
        ...
    f = staticmethod(f)

    def g(cls, arg1, arg2):
        ...
    g = classmethod(g)

The staticmethod() function takes the function f(), and returns it wrapped up in a descriptor so it can be stored in the class object. You might expect there to be special syntax for creating such methods (def static f, defstatic f(), or something like that) but no such syntax has been defined yet; that's been left for future versions of Python.

More new features, such as slots and properties, are also implemented as new kinds of descriptors, and it's not difficult to write a descriptor class that does something novel.
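The wrapper style shown above still runs today (later Python versions added the equivalent @staticmethod and @classmethod decorator syntax):

```python
class C:
    def f(arg1, arg2):          # no self: behaves like a plain function
        return arg1 + arg2
    f = staticmethod(f)

    def g(cls, arg1):           # receives the class, not the instance
        return (cls.__name__, arg1)
    g = classmethod(g)

print(C.f(1, 2))   # 3
print(C().g(5))    # ('C', 5)
```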
For example, it would be possible to write a descriptor class that made it possible to write Eiffel-style preconditions and postconditions for a method. A class that used this feature might be defined like this:

from eiffel import eiffelmethod

class C(object):
    def f(self, arg1, arg2):
        # The actual function
        ...
    def pre_f(self):
        # Check preconditions
        ...
    def post_f(self):
        # Check postconditions
        ...
    f = eiffelmethod(f, pre_f, post_f)

Note that a person using the new eiffelmethod() doesn't have to understand anything about descriptors. This is why I think the new features don't increase the basic complexity of the language. There will be a few wizards who need to know about it in order to write eiffelmethod() or the ZODB or whatever, but most users will just write code on top of the resulting libraries and ignore the implementation details.

Multiple Inheritance: The Diamond Rule¶

Multiple inheritance has also been made more useful through changing the rules under which names are resolved. Consider this set of classes (diagram taken from PEP 253 by Guido van Rossum):

      class A:
        ^ ^  def save(self): ...
       /   \
      /     \
     /       \
class B     class C:
    ^    ^   def save(self): ...
     \      /
      \    /
       \  /
      class D

The lookup rule for classic classes is simple but not very smart; the base classes are searched depth-first, going from left to right. A reference to D.save() will search the classes D, B, and then A, where save() would be found and returned. C.save() would never be found at all. This is bad, because if C's save() method is saving some internal state specific to C, not calling it will result in that state never getting saved.

New-style classes follow a different algorithm that's a bit more complicated to explain, but does the right thing in this situation.
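The diamond can be checked directly. This sketch runs in modern Python, where all classes are new-style; current interpreters use the C3 linearization, which agrees with the 2.2 rule on this example, and the order is inspectable via the __mro__ attribute:

```python
class A:
    def save(self):
        return "A"

class B(A):
    pass

class C(A):
    def save(self):
        return "C"

class D(B, C):
    pass

# Lookup order is D, B, C, A, so C.save() shadows A.save().
print(D().save())                            # C
print([cls.__name__ for cls in D.__mro__])   # ['D', 'B', 'C', 'A', 'object']
```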
(Note that Python 2.3 changes this algorithm to one that produces the same results in most cases, but produces more useful results for really complicated inheritance graphs.)

1. List all the base classes, following the classic lookup rule and include a class multiple times if it's visited repeatedly. In the above example, the list of visited classes is [D, B, A, C, A].
2. Scan the list for duplicated classes. If any are found, remove all but one occurrence, leaving the last one in the list. In the above example, the list becomes [D, B, C, A] after dropping duplicates.

Following this rule, referring to D.save() will return C.save(), which is the behaviour we're after. This lookup rule is the same as the one followed by Common Lisp. A new built-in function, super(), provides a way to get at a class's superclasses without having to reimplement Python's algorithm. The most commonly used form will be super(class, obj), which returns a bound superclass object (not the actual class object). This form will be used in methods to call a method in the superclass; for example, D's save() method would look like this:

class D(B, C):
    def save(self):
        # Call superclass .save()
        super(D, self).save()
        # Save D's private information here
        ...

super() can also return unbound superclass objects when called as super(class) or super(class1, class2), but this probably won't often be useful.

Attribute Access¶

A fair number of sophisticated Python classes define hooks for attribute access using __getattr__(); most commonly this is done for convenience, to make code more readable by automatically mapping an attribute access such as obj.parent into a method call such as obj.get_parent. Python 2.2 adds some new ways of controlling attribute access.

First, __getattr__(attr_name) is still supported by new-style classes, and nothing about it has changed.
As before, it will be called when an attempt is made to access obj.foo and no attribute named foo is found in the instance's dictionary.

New-style classes also support a new method, __getattribute__(attr_name). The difference between the two methods is that __getattribute__() is always called whenever any attribute is accessed, while the old __getattr__() is only called if foo isn't found in the instance's dictionary.

However, Python 2.2's support for properties will often be a simpler way to trap attribute references. Writing a __getattr__() method is complicated because to avoid recursion you can't use regular attribute accesses inside them, and instead have to mess around with the contents of __dict__. __getattr__() methods also end up being called by Python when it checks for other methods such as __repr__() or __coerce__(), and so have to be written with this in mind. Finally, calling a function on every attribute access results in a sizable performance loss.

property is a new built-in type that packages up three functions that get, set, or delete an attribute, and a docstring. For example, if you want to define a size attribute that's computed, but also settable, you could write:

class C(object):
    def get_size(self):
        result = ... computation ...
        return result
    def set_size(self, size):
        ... compute something based on the size
        and set internal state appropriately ...
    # Define a property.  The 'delete this attribute'
    # method is defined as None, so the attribute
    # can't be deleted.
    size = property(get_size, set_size,
                    None,
                    "Storage size of this instance")

That is certainly clearer and easier to write than a pair of __getattr__()/__setattr__() methods that check for the size attribute and handle it specially while retrieving all other attributes from the instance's __dict__.
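A runnable version of the same idea, with a concrete computation filled in (the Container name and its list-backed storage are invented for the demo):

```python
class Container:
    def __init__(self):
        self._items = []
    def get_size(self):
        return len(self._items)        # computed on every access
    def set_size(self, size):
        self._items = [None] * size    # resize the backing storage
    size = property(get_size, set_size, None,
                    "Storage size of this instance")

c = Container()
print(c.size)   # 0
c.size = 3
print(c.size)   # 3
```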
Accesses to size are also the only ones which have to perform the work of calling a function, so references to other attributes run at their usual speed.

Finally, it's possible to constrain the list of attributes that can be referenced on an object using the new __slots__ class attribute. Python objects are usually very dynamic; at any time it's possible to define a new attribute on an instance by just doing obj.new_attr=1. A new-style class can define a class attribute named __slots__ to limit the legal attributes to a particular set of names. An example will make this clear:

>>> class C(object):
...     __slots__ = ('template', 'name')
...
>>> obj = C()
>>> print obj.template
None
>>> obj.template = 'Test'
>>> print obj.template
Test
>>> obj.newattr = None
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
AttributeError: 'C' object has no attribute 'newattr'

Note how you get an AttributeError on the attempt to assign to an attribute not listed in __slots__.

PEP 234: Iterators¶

Another significant addition to 2.2 is an iteration interface at both the C and Python levels. Objects can define how they can be looped over by callers.

In Python versions up to 2.1, the usual way to make for item in obj work is to define a __getitem__() method that looks something like this:

def __getitem__(self, index):
    return <next item>

__getitem__() is more properly used to define an indexing operation on an object so that you can write obj[5] to retrieve the sixth element. It's a bit misleading when you're using this only to support for loops.

Consider some file-like object that wants to be looped over; the index parameter is essentially meaningless, as the class probably assumes that a series of __getitem__() calls will be made with index incrementing by one each time.
In other words, the presence of the __getitem__() method doesn't mean that using file[5] to randomly access the sixth element will work, though it really should.

In Python 2.2, iteration can be implemented separately, and __getitem__() methods can be limited to classes that really do support random access. The basic idea of iterators is simple. A new built-in function, iter(obj) or iter(C, sentinel), is used to get an iterator. iter(obj) returns an iterator for the object obj, while iter(C, sentinel) returns an iterator that will invoke the callable object C until it returns sentinel to signal that the iterator is done.

Python classes can define an __iter__() method, which should create and return a new iterator for the object; if the object is its own iterator, this method can just return self. In particular, iterators will usually be their own iterators. Extension types implemented in C can implement a tp_iter function in order to return an iterator, and extension types that want to behave as iterators can define a tp_iternext function.

So, after all this, what do iterators actually do? They have one required method, next(), which takes no arguments and returns the next value. When there are no more values to be returned, calling next() should raise the StopIteration exception.

>>> L = [1,2,3]
>>> i = iter(L)
>>> print i
<iterator object at 0x...>
>>> i.next()
1
>>> i.next()
2
>>> i.next()
3
>>> i.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
StopIteration
>>>

In 2.2, Python's for statement no longer expects a sequence; it expects something for which iter() will return an iterator. For backward compatibility and convenience, an iterator is automatically constructed for sequences that don't implement __iter__() or a tp_iter slot, so for i in [1,2,3] will still work. Wherever the Python interpreter loops over a sequence, it's been changed to use the iterator protocol.
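The protocol can be sketched with a small class. Note the method is spelled __next__ in modern Python; in 2.2 it was named next():

```python
class CountDown:
    """Iterator yielding n, n-1, ..., 1."""
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return self            # the object is its own iterator
    def __next__(self):        # spelled next() in Python 2
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n + 1

print(list(CountDown(3)))  # [3, 2, 1]
```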
This means you can do things like this:

>>> L = [1,2,3]
>>> i = iter(L)
>>> a,b,c = i
>>> a,b,c
(1, 2, 3)

Iterator support has been added to some of Python's basic types. Calling iter() on a dictionary will return an iterator which loops over its keys:

>>> m = {'Jan': 1, 'Feb': 2, 'Mar': 3, 'Apr': 4, 'May': 5, 'Jun': 6,
...      'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12}
>>> for key in m: print key, m[key]
...
Mar 3
Feb 2
Aug 8
Sep 9
May 5
Jun 6
Jul 7
Jan 1
Apr 4
Nov 11
Dec 12
Oct 10

That's just the default behaviour. If you want to iterate over keys, values, or key/value pairs, you can explicitly call the iterkeys(), itervalues(), or iteritems() methods to get an appropriate iterator.

In a minor related change, the in operator now works on dictionaries, so key in dict is now equivalent to dict.has_key(key).

Files also provide an iterator, which calls the readline() method until there are no more lines in the file. This means you can now read each line of a file using code like this:

for line in file:
    # do something for each line
    ...

Note that you can only go forward in an iterator; there's no way to get the previous element, reset the iterator, or make a copy of it. An iterator object could provide such additional capabilities, but the iterator protocol only requires a next() method.

See also

PEP 234 - Iterators
Written by Ka-Ping Yee and GvR; implemented by the Python Labs crew, mostly by GvR and Tim Peters.

PEP 255: Simple Generators¶

Generators are another new feature, one that interacts with the introduction of iterators.

You're doubtless familiar with how function calls work in Python or C. When you call a function, it gets a private namespace where its local variables are created. When the function reaches a return statement, the local variables are destroyed and the resulting value is returned to the caller.
A later call to the same function will get a fresh new set of local variables. But, what if the local variables weren't thrown away on exiting a function? What if you could later resume the function where it left off? This is what generators provide; they can be thought of as resumable functions.

Here's the simplest example of a generator function:

def generate_ints(N):
    for i in range(N):
        yield i

A new keyword, yield, was introduced for generators. Any function containing a yield statement is a generator function; this is detected by Python's bytecode compiler which compiles the function specially as a result. Because a new keyword was introduced, generators must be explicitly enabled in a module by including a from __future__ import generators statement near the top of the module's source code. In Python 2.3 this statement will become unnecessary.

When you call a generator function, it doesn't return a single value; instead it returns a generator object that supports the iterator protocol. On executing the yield statement, the generator outputs the value of i, similar to a return statement. The big difference between yield and a return statement is that on reaching a yield the generator's state of execution is suspended and local variables are preserved. On the next call to the generator's next() method, the function will resume executing immediately after the yield statement.
(For complicated reasons, the yield statement isn't allowed inside the try block of a try...finally statement; read PEP 255 for a full explanation of the interaction between yield and exceptions.)

Here's a sample usage of the generate_ints() generator:

>>> gen = generate_ints(3)
>>> gen
<generator object at 0x...>
>>> gen.next()
0
>>> gen.next()
1
>>> gen.next()
2
>>> gen.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 2, in generate_ints
StopIteration

You could equally write for i in generate_ints(5), or a,b,c = generate_ints(3).

Inside a generator function, the return statement can only be used without a value, and signals the end of the procession of values; afterwards the generator cannot return any further values. return with a value, such as return 5, is a syntax error inside a generator function. The end of the generator's results can also be indicated by raising StopIteration manually, or by just letting the flow of execution fall off the bottom of the function.

You could achieve the effect of generators manually by writing your own class and storing all the local variables of the generator as instance variables. For example, returning a list of integers could be done by setting self.count to 0, and having the next() method increment self.count and return it. However, for a moderately complicated generator, writing a corresponding class would be much messier. Lib/test/test_generators.py contains a number of more interesting examples.
The simplest one implements an in-order traversal of a tree using generators recursively.

# A recursive generator that generates Tree leaves in in-order.
def inorder(t):
    if t:
        for x in inorder(t.left):
            yield x
        yield t.label
        for x in inorder(t.right):
            yield x

Two other examples in Lib/test/test_generators.py produce solutions for the N-Queens problem (placing N queens on an NxN chess board so that no queen threatens another) and the Knight's Tour (a route that takes a knight to every square of an NxN chessboard without visiting any square twice).

The idea of generators comes from other programming languages, especially Icon (https://www2.cs.arizona.edu/icon/), where the idea of generators is central. In Icon, every expression and function call behaves like a generator. One example from "An Overview of the Icon Programming Language" at https://www2.cs.arizona.edu/icon/docs/ipd266.htm gives an idea of what this looks like:

sentence := "Store it in the neighboring harbor"
if (i := find("or", sentence)) > 5 then write(i)

In Icon the find() function returns the indexes at which the substring "or" is found: 3, 23, 33. In the if statement, i is first assigned a value of 3, but 3 is less than 5, so the comparison fails, and Icon retries it with the second value of 23. 23 is greater than 5, so the comparison now succeeds, and the code prints the value 23 to the screen.

Python doesn't go nearly as far as Icon in adopting generators as a central concept. Generators are considered a new part of the core Python language, but learning or using them isn't compulsory; if they don't solve any problems that you have, feel free to ignore them.
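The in-order traversal above can be exercised with a tiny tree type. This sketch uses modern conveniences (namedtuple and yield from, which postdate 2.2; the 2.2 original used explicit for/yield loops):

```python
from collections import namedtuple

# A minimal tree node, invented for the demo; None marks an empty subtree.
Tree = namedtuple("Tree", ["left", "label", "right"])

def inorder(t):
    if t:
        yield from inorder(t.left)
        yield t.label
        yield from inorder(t.right)

t = Tree(Tree(None, 1, None), 2, Tree(None, 3, None))
print(list(inorder(t)))  # [1, 2, 3]
```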
One novel feature of Python\u2019s interface as compared to Icon\u2019s is that a generator\u2019s state is represented as a concrete object (the iterator) that can be passed around to other functions or stored in a data structure.\nSee also\n- PEP 255 - Simple Generators\nWritten by Neil Schemenauer, Tim Peters, Magnus Lie Hetland. Implemented mostly by Neil Schemenauer and Tim Peters, with other fixes from the Python Labs crew.\nPEP 237: Unifying Long Integers and Integers\u00b6\nIn recent versions, the distinction between regular integers, which are 32-bit\nvalues on most machines, and long integers, which can be of arbitrary size, was\nbecoming an annoyance. For example, on platforms that support files larger than\n2**32\nbytes, the tell()\nmethod of file objects has to return a long\ninteger. However, there were various bits of Python that expected plain integers\nand would raise an error if a long integer was provided instead. For example,\nin Python 1.5, only regular integers could be used as a slice index, and\n'abc'[1L:]\nwould raise a TypeError\nexception with the message \u2018slice\nindex must be int\u2019.\nPython 2.2 will shift values from short to long integers as required. The \u2018L\u2019\nsuffix is no longer needed to indicate a long integer literal, as now the\ncompiler will choose the appropriate type. (Using the \u2018L\u2019 suffix will be\ndiscouraged in future 2.x versions of Python, triggering a warning in Python\n2.4, and probably dropped in Python 3.0.) Many operations that used to raise an\nOverflowError\nwill now return a long integer as their result. For\nexample:\n>>> 1234567890123\n1234567890123L\n>>> 2 ** 64\n18446744073709551616L\nIn most cases, integers and long integers will now be treated identically. You\ncan still distinguish them with the type()\nbuilt-in function, but that\u2019s\nrarely needed.\nSee also\n- PEP 237 - Unifying Long Integers and Integers\nWritten by Moshe Zadka and Guido van Rossum. 
Implemented mostly by Guido van Rossum.\nPEP 238: Changing the Division Operator\u00b6\nThe most controversial change in Python 2.2 heralds the start of an effort to\nfix an old design flaw that\u2019s been in Python from the beginning. Currently\nPython\u2019s division operator, /\n, behaves like C\u2019s division operator when\npresented with two integer arguments: it returns an integer result that\u2019s\ntruncated down when there would be a fractional part. For example, 3/2\nis\n1, not 1.5, and (-1)/2\nis -1, not -0.5. This means that the results of\ndivision can vary unexpectedly depending on the type of the two operands and\nbecause Python is dynamically typed, it can be difficult to determine the\npossible types of the operands.\n(The controversy is over whether this is really a design flaw, and whether it\u2019s worth breaking existing code to fix this. It\u2019s caused endless discussions on python-dev, and in July 2001 erupted into a storm of acidly sarcastic postings on comp.lang.python. I won\u2019t argue for either side here and will stick to describing what\u2019s implemented in 2.2. Read PEP 238 for a summary of arguments and counter-arguments.)\nBecause this change might break code, it\u2019s being introduced very gradually. Python 2.2 begins the transition, but the switch won\u2019t be complete until Python 3.0.\nFirst, I\u2019ll borrow some terminology from PEP 238. \u201cTrue division\u201d is the\ndivision that most non-programmers are familiar with: 3/2 is 1.5, 1/4 is 0.25,\nand so forth. \u201cFloor division\u201d is what Python\u2019s /\noperator currently does\nwhen given integer operands; the result is the floor of the value returned by\ntrue division. 
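In today's Python the transition that PEP 238 began is long complete, so the terminology above can be demonstrated directly (a sketch added here, not part of the original 2.2 text; it uses the // operator introduced below):

```python
# True division: / always yields the mathematically exact quotient.
assert 3 / 2 == 1.5
assert 1 / 4 == 0.25

# Floor division: // floors the true-division result, for any operand types.
assert 3 // 2 == 1
assert (-1) // 2 == -1        # floors toward negative infinity, not zero
assert 7.0 // 2.0 == 3.0      # still floors, but keeps the float type

# divmod() bundles floor division with the matching remainder.
assert divmod(7, 2) == (3, 1)
```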
\u201cClassic division\u201d is the current mixed behaviour of /; it returns the result of floor division when the operands are integers, and returns the result of true division when one of the operands is a floating-point number.\nHere are the changes 2.2 introduces:\n- A new operator, //, is the floor division operator. (Yes, we know it looks like C++\u2019s comment symbol.) // always performs floor division no matter what the types of its operands are, so 1 // 2 is 0 and 1.0 // 2.0 is also 0.0. // is always available in Python 2.2; you don\u2019t need to enable it using a __future__ statement.\n- By including a from __future__ import division in a module, the / operator will be changed to return the result of true division, so 1/2 is 0.5. Without the __future__ statement, / still means classic division. The default meaning of / will not change until Python 3.0.\n- Classes can define methods called __truediv__() and __floordiv__() to overload the two division operators. At the C level, there are also slots in the PyNumberMethods structure so extension types can define the two operators.\n- Python 2.2 supports some command-line arguments for testing whether code will work with the changed division semantics. Running python with -Q warn will cause a warning to be issued whenever division is applied to two integers. You can use this to find code that\u2019s affected by the change and fix it. By default, Python 2.2 will simply perform classic division without a warning; the warning will be turned on by default in Python 2.3.\nSee also\n- PEP 238 - Changing the Division Operator\nWritten by Moshe Zadka and Guido van Rossum. Implemented by Guido van Rossum.\nUnicode Changes\u00b6\nPython\u2019s Unicode support has been enhanced a bit in 2.2. Unicode strings are usually stored as UCS-2, as 16-bit unsigned integers. 
Python 2.2 can also be\ncompiled to use UCS-4, 32-bit unsigned integers, as its internal encoding by\nsupplying --enable-unicode=ucs4\nto the configure script. (It\u2019s also\npossible to specify --disable-unicode\nto completely disable Unicode\nsupport.)\nWhen built to use UCS-4 (a \u201cwide Python\u201d), the interpreter can natively handle\nUnicode characters from U+000000 to U+110000, so the range of legal values for\nthe unichr()\nfunction is expanded accordingly. Using an interpreter\ncompiled to use UCS-2 (a \u201cnarrow Python\u201d), values greater than 65535 will still\ncause unichr()\nto raise a ValueError\nexception. This is all\ndescribed in PEP 261, \u201cSupport for \u2018wide\u2019 Unicode characters\u201d; consult it for\nfurther details.\nAnother change is simpler to explain. Since their introduction, Unicode strings\nhave supported an encode()\nmethod to convert the string to a selected\nencoding such as UTF-8 or Latin-1. A symmetric decode([*encoding*])\nmethod has been added to 8-bit strings (though not to Unicode strings) in 2.2.\ndecode()\nassumes that the string is in the specified encoding and decodes\nit, returning whatever is returned by the codec.\nUsing this new feature, codecs have been added for tasks not directly related to\nUnicode. For example, codecs have been added for uu-encoding, MIME\u2019s base64\nencoding, and compression with the zlib\nmodule:\n>>> s = \"\"\"Here is a lengthy piece of redundant, overly verbose,\n... and repetitive text.\n... 
\"\"\"\n>>> data = s.encode('zlib')\n>>> data\n'x\\x9c\\r\\xc9\\xc1\\r\\x80 \\x10\\x04\\xc0?Ul...'\n>>> data.decode('zlib')\n'Here is a lengthy piece of redundant, overly verbose,\\nand repetitive text.\\n'\n>>> print s.encode('uu')\nbegin 666 \nM2&5R92!I=F5R8F]S92P*86YD(')E<&5T:71I=F4@=&5X=\"X*\nend\n>>> \"sheesh\".encode('rot-13')\n'furrfu'\nTo convert a class instance to Unicode, a __unicode__()\nmethod can be\ndefined by a class, analogous to __str__()\n.\nencode()\n, decode()\n, and __unicode__()\nwere implemented by\nMarc-Andr\u00e9 Lemburg. The changes to support using UCS-4 internally were\nimplemented by Fredrik Lundh and Martin von L\u00f6wis.\nSee also\n- PEP 261 - Support for \u2018wide\u2019 Unicode characters\nWritten by Paul Prescod.\nPEP 227: Nested Scopes\u00b6\nIn Python 2.1, statically nested scopes were added as an optional feature, to be\nenabled by a from __future__ import nested_scopes\ndirective. In 2.2 nested\nscopes no longer need to be specially enabled, and are now always present. The\nrest of this section is a copy of the description of nested scopes from my\n\u201cWhat\u2019s New in Python 2.1\u201d document; if you read it when 2.1 came out, you can\nskip the rest of this section.\nThe largest change introduced in Python 2.1, and made complete in 2.2, is to Python\u2019s scoping rules. In Python 2.0, at any given time there are at most three namespaces used to look up variable names: local, module-level, and the built-in namespace. This often surprised people because it didn\u2019t match their intuitive expectations. For example, a nested recursive function definition doesn\u2019t work:\ndef f():\n...\ndef g(value):\n...\nreturn g(value-1) + 1\n...\nThe function g()\nwill always raise a NameError\nexception, because\nthe binding of the name g\nisn\u2019t in either its local namespace or in the\nmodule-level namespace. 
This isn\u2019t much of a problem in practice (how often do you recursively define interior functions like this?), but this also made using the lambda expression clumsier, and this was a problem in practice. In code which uses lambda you can often find local variables being copied by passing them as the default values of arguments.\ndef find(self, name):\n    \"Return list of any entries equal to 'name'\"\n    L = filter(lambda x, name=name: x == name,\n               self.list_attribute)\n    return L\nThe readability of Python code written in a strongly functional style suffers greatly as a result.\nThe most significant change to Python 2.2 is that static scoping has been added to the language to fix this problem. As a first effect, the name=name default argument is now unnecessary in the above example. Put simply, when a given variable name is not assigned a value within a function (by an assignment, or the def, class, or import statements), references to the variable will be looked up in the local namespace of the enclosing scope. A more detailed explanation of the rules, and a dissection of the implementation, can be found in the PEP.\nThis change may cause some compatibility problems for code where the same variable name is used both at the module level and as a local variable within a function that contains further function definitions. This seems rather unlikely though, since such code would have been pretty confusing to read in the first place.\nOne side effect of the change is that the from module import * and exec statements have been made illegal inside a function scope under certain conditions. The Python reference manual has said all along that from module import * is only legal at the top level of a module, but the CPython interpreter has never enforced this before. As part of the implementation of nested scopes, the compiler which turns Python source into bytecodes has to generate different code to access variables in a containing scope. 
from module import * and exec make it impossible for the compiler to figure this out, because they add names to the local namespace that are unknowable at compile time. Therefore, if a function contains function definitions or lambda expressions with free variables, the compiler will flag this by raising a SyntaxError exception.\nTo make the preceding explanation a bit clearer, here\u2019s an example:\nx = 1\ndef f():\n    # The next line is a syntax error\n    exec 'x=2'\n    def g():\n        return x\nLine 4 containing the exec statement is a syntax error, since exec would define a new local variable named x whose value should be accessed by g().\nThis shouldn\u2019t be much of a limitation, since exec is rarely used in most Python code (and when it is used, it\u2019s often a sign of a poor design anyway).\nSee also\n- PEP 227 - Statically Nested Scopes\nWritten and implemented by Jeremy Hylton.\nNew and Improved Modules\u00b6\nThe xmlrpclib module was contributed to the standard library by Fredrik Lundh, providing support for writing XML-RPC clients. XML-RPC is a simple remote procedure call protocol built on top of HTTP and XML. For example, the following snippet retrieves a list of RSS channels from the O\u2019Reilly Network, and then lists the recent headlines for one channel:\nimport xmlrpclib\ns = xmlrpclib.Server(\n    'http://www.oreillynet.com/meerkat/xml-rpc/server.php')\nchannels = s.meerkat.getChannels()\n# channels is a list of dictionaries, like this:\n# [{'id': 4, 'title': 'Freshmeat Daily News'},\n#  {'id': 190, 'title': '32Bits Online'},\n#  {'id': 4549, 'title': '3DGamers'}, ... ]\n# Get the items for one channel\nitems = s.meerkat.getItems( {'channel': 4} )\n# 'items' is another list of dictionaries, like this:\n# [{'link': 'http://freshmeat.net/releases/52719/',\n#  'description': 'A utility which converts HTML to XSL FO.',\n#  'title': 'html2fo 0.3 (Default)'}, ... ]\nThe SimpleXMLRPCServer module makes it easy to create straightforward XML-RPC servers. 
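The wire format involved is easy to inspect from the module's Python 3 successor, xmlrpc.client, whose dumps() and loads() functions marshal a call to and from XML-RPC's XML representation. A self-contained sketch (no network access; the method name is taken from the example above but any string would do):

```python
import xmlrpc.client

# Marshal a call's arguments and method name into an XML-RPC request...
request = xmlrpc.client.dumps((4,), methodname='meerkat.getChannels')
assert '<methodName>meerkat.getChannels</methodName>' in request
assert '<int>4</int>' in request

# ...and unmarshal the XML back into Python objects.
params, method = xmlrpc.client.loads(request)
assert params == (4,)
assert method == 'meerkat.getChannels'
```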
See http://xmlrpc.scripting.com/ for more information about XML-RPC.\nThe new hmac module implements the HMAC algorithm described by RFC 2104. (Contributed by Gerhard H\u00e4ring.)\nSeveral functions that originally returned lengthy tuples now return pseudo-sequences that still behave like tuples but also have mnemonic attributes such as st_mtime or tm_year. The enhanced functions include stat(), fstat(), statvfs(), and fstatvfs() in the os module, and localtime(), gmtime(), and strptime() in the time module.\nFor example, to obtain a file\u2019s size using the old tuples, you\u2019d end up writing something like file_size = os.stat(filename)[stat.ST_SIZE], but now this can be written more clearly as file_size = os.stat(filename).st_size.\nThe original patch for this feature was contributed by Nick Mathewson.\nThe Python profiler has been extensively reworked and various errors in its output have been corrected. (Contributed by Fred L. Drake, Jr. and Tim Peters.)\nThe socket module can be compiled to support IPv6; specify the --enable-ipv6 option to Python\u2019s configure script. (Contributed by Jun-ichiro \u201citojun\u201d Hagino.)\nTwo new format characters were added to the struct module for 64-bit integers on platforms that support the C long long type. q is for a signed 64-bit integer, and Q is for an unsigned one. The value is returned in Python\u2019s long integer type. (Contributed by Tim Peters.)\nIn the interpreter\u2019s interactive mode, there\u2019s a new built-in function help() that uses the pydoc module introduced in Python 2.1 to provide interactive help. help(object) displays any available help text about object. help() with no argument puts you in an online help utility, where you can enter the names of functions, classes, or modules to read their help text. 
(Contributed by Guido van Rossum, using Ka-Ping Yee\u2019spydoc\nmodule.)Various bugfixes and performance improvements have been made to the SRE engine underlying the\nre\nmodule. For example, there.sub()\nandre.split()\nfunctions have been rewritten in C. Another contributed patch speeds up certain Unicode character ranges by a factor of two, and a newfinditer()\nmethod that returns an iterator over all the non-overlapping matches in a given string. (SRE is maintained by Fredrik Lundh. The BIGCHARSET patch was contributed by Martin von L\u00f6wis.)The\nsmtplib\nmodule now supports RFC 2487, \u201cSecure SMTP over TLS\u201d, so it\u2019s now possible to encrypt the SMTP traffic between a Python program and the mail transport agent being handed a message.smtplib\nalso supports SMTP authentication. (Contributed by Gerhard H\u00e4ring.)The\nimaplib\nmodule, maintained by Piers Lauder, has support for several new extensions: the NAMESPACE extension defined in RFC 2342, SORT, GETACL and SETACL. (Contributed by Anthony Baxter and Michel Pelletier.)The\nrfc822\nmodule\u2019s parsing of email addresses is now compliant with RFC 2822, an update to RFC 822. (The module\u2019s name is not going to be changed torfc2822\n.) A new package,email\n, has also been added for parsing and generating e-mail messages. (Contributed by Barry Warsaw, and arising out of his work on Mailman.)The\ndifflib\nmodule now contains a newDiffer\nclass for producing human-readable lists of changes (a \u201cdelta\u201d) between two sequences of lines of text. There are also two generator functions,ndiff()\nandrestore()\n, which respectively return a delta from two sequences, or one of the original sequences from a delta. (Grunt work contributed by David Goodger, from ndiff.py code by Tim Peters who then did the generatorization.)New constants\nascii_letters\n,ascii_lowercase\n, andascii_uppercase\nwere added to thestring\nmodule. 
There were several modules in the standard library that usedstring.letters\nto mean the ranges A-Za-z, but that assumption is incorrect when locales are in use, becausestring.letters\nvaries depending on the set of legal characters defined by the current locale. The buggy modules have all been fixed to useascii_letters\ninstead. (Reported by an unknown person; fixed by Fred L. Drake, Jr.)The\nmimetypes\nmodule now makes it easier to use alternative MIME-type databases by the addition of aMimeTypes\nclass, which takes a list of filenames to be parsed. (Contributed by Fred L. Drake, Jr.)A\nTimer\nclass was added to thethreading\nmodule that allows scheduling an activity to happen at some future time. (Contributed by Itamar Shtull-Trauring.)\nInterpreter Changes and Fixes\u00b6\nSome of the changes only affect people who deal with the Python interpreter at the C level because they\u2019re writing Python extension modules, embedding the interpreter, or just hacking on the interpreter itself. If you only write Python code, none of the changes described here will affect you very much.\nProfiling and tracing functions can now be implemented in C, which can operate at much higher speeds than Python-based functions and should reduce the overhead of profiling and tracing. This will be of interest to authors of development environments for Python. Two new C functions were added to Python\u2019s API,\nPyEval_SetProfile()\nandPyEval_SetTrace()\n. The existingsys.setprofile()\nandsys.settrace()\nfunctions still exist, and have simply been changed to use the new C-level interface. (Contributed by Fred L. Drake, Jr.)Another low-level API, primarily of interest to implementers of Python debuggers and development tools, was added.\nPyInterpreterState_Head()\nandPyInterpreterState_Next()\nlet a caller walk through all the existing interpreter objects;PyInterpreterState_ThreadHead()\nandPyThreadState_Next()\nallow looping over all the thread states for a given interpreter. 
(Contributed by David Beazley.)The C-level interface to the garbage collector has been changed to make it easier to write extension types that support garbage collection and to debug misuses of the functions. Various functions have slightly different semantics, so a bunch of functions had to be renamed. Extensions that use the old API will still compile but will not participate in garbage collection, so updating them for 2.2 should be considered fairly high priority.\nTo upgrade an extension module to the new API, perform the following steps:\nRename\nPy_TPFLAGS_GC\ntoPy_TPFLAGS_HAVE_GC\n.- Use\nPyObject_GC_New()\norPyObject_GC_NewVar()\nto allocate objects, and\nPyObject_GC_Del()\nto deallocate them.\n- Use\nRename\nPyObject_GC_Init()\ntoPyObject_GC_Track()\nandPyObject_GC_Fini()\ntoPyObject_GC_UnTrack()\n.Remove\nPyGC_HEAD_SIZE\nfrom object size calculations.Remove calls to\nPyObject_AS_GC()\nandPyObject_FROM_GC()\n.A new\net\nformat sequence was added toPyArg_ParseTuple()\n;et\ntakes both a parameter and an encoding name, and converts the parameter to the given encoding if the parameter turns out to be a Unicode string, or leaves it alone if it\u2019s an 8-bit string, assuming it to already be in the desired encoding. This differs from thees\nformat character, which assumes that 8-bit strings are in Python\u2019s default ASCII encoding and converts them to the specified new encoding. (Contributed by M.-A. Lemburg, and used for the MBCS support on Windows described in the following section.)A different argument parsing function,\nPyArg_UnpackTuple()\n, has been added that\u2019s simpler and presumably faster. 
Instead of specifying a format string, the caller simply gives the minimum and maximum number of arguments expected, and a set of pointers to PyObject* variables that will be filled in with argument values.Two new flags\nMETH_NOARGS\nandMETH_O\nare available in method definition tables to simplify implementation of methods with no arguments or a single untyped argument. Calling such methods is more efficient than calling a corresponding method that usesMETH_VARARGS\n. Also, the oldMETH_OLDARGS\nstyle of writing C methods is now officially deprecated.Two new wrapper functions,\nPyOS_snprintf()\nandPyOS_vsnprintf()\nwere added to provide cross-platform implementations for the relatively newsnprintf()\nandvsnprintf()\nC lib APIs. In contrast to the standardsprintf()\nandvsprintf()\nfunctions, the Python versions check the bounds of the buffer used to protect against buffer overruns. (Contributed by M.-A. Lemburg.)The\n_PyTuple_Resize()\nfunction has lost an unused parameter, so now it takes 2 parameters instead of 3. The third argument was never used, and can simply be discarded when porting code from earlier versions to Python 2.2.\nOther Changes and Fixes\u00b6\nAs usual there were a bunch of other improvements and bugfixes scattered throughout the source tree. A search through the CVS change logs finds there were 527 patches applied and 683 bugs fixed between Python 2.1 and 2.2; 2.2.1 applied 139 patches and fixed 143 bugs; 2.2.2 applied 106 patches and fixed 82 bugs. These figures are likely to be underestimates.\nSome of the more notable changes are:\nThe code for the MacOS port for Python, maintained by Jack Jansen, is now kept in the main Python CVS tree, and many changes have been made to support MacOS X.\nThe most significant change is the ability to build Python as a framework, enabled by supplying the\n--enable-framework\noption to the configure script when compiling Python. 
According to Jack Jansen, \u201cThis installs a self-contained Python installation plus the OS X framework \u201cglue\u201d into/Library/Frameworks/Python.framework\n(or another location of choice). For now there is little immediate added benefit to this (actually, there is the disadvantage that you have to change your PATH to be able to find Python), but it is the basis for creating a full-blown Python application, porting the MacPython IDE, possibly using Python as a standard OSA scripting language and much more.\u201dMost of the MacPython toolbox modules, which interface to MacOS APIs such as windowing, QuickTime, scripting, etc. have been ported to OS X, but they\u2019ve been left commented out in\nsetup.py\n. People who want to experiment with these modules can uncomment them manually.Keyword arguments passed to built-in functions that don\u2019t take them now cause a\nTypeError\nexception to be raised, with the message \u201cfunction takes no keyword arguments\u201d.Weak references, added in Python 2.1 as an extension module, are now part of the core because they\u2019re used in the implementation of new-style classes. The\nReferenceError\nexception has therefore moved from theweakref\nmodule to become a built-in exception.A new script,\nTools/scripts/cleanfuture.py\nby Tim Peters, automatically removes obsolete__future__\nstatements from Python source code.An additional flags argument has been added to the built-in function\ncompile()\n, so the behaviour of__future__\nstatements can now be correctly observed in simulated shells, such as those presented by IDLE and other development environments. This is described in PEP 264. (Contributed by Michael Hudson.)The new license introduced with Python 1.6 wasn\u2019t GPL-compatible. This is fixed by some minor textual changes to the 2.2 license, so it\u2019s now legal to embed Python inside a GPLed program again. 
Note that Python itself is not GPLed, but instead is under a license that\u2019s essentially equivalent to the BSD license, same as it always was. The license changes were also applied to the Python 2.0.1 and 2.1.1 releases.\nWhen presented with a Unicode filename on Windows, Python will now convert it to an MBCS encoded string, as used by the Microsoft file APIs. As MBCS is explicitly used by the file APIs, Python\u2019s choice of ASCII as the default encoding turns out to be an annoyance. On Unix, the locale\u2019s character set is used if\nlocale.nl_langinfo(CODESET)\nis available. (Windows support was contributed by Mark Hammond with assistance from Marc-Andr\u00e9 Lemburg. Unix support was added by Martin von L\u00f6wis.)Large file support is now enabled on Windows. (Contributed by Tim Peters.)\nThe\nTools/scripts/ftpmirror.py\nscript now parses a.netrc\nfile, if you have one. (Contributed by Mike Romberg.)Some features of the object returned by the\nxrange()\nfunction are now deprecated, and trigger warnings when they\u2019re accessed; they\u2019ll disappear in Python 2.3.xrange\nobjects tried to pretend they were full sequence types by supporting slicing, sequence multiplication, and thein\noperator, but these features were rarely used and therefore buggy. Thetolist()\nmethod and thestart\n,stop\n, andstep\nattributes are also being deprecated. At the C level, the fourth argument to thePyRange_New()\nfunction,repeat\n, has also been deprecated.There were a bunch of patches to the dictionary implementation, mostly to fix potential core dumps if a dictionary contains objects that sneakily changed their hash value, or mutated the dictionary they were contained in. 
For a while python-dev fell into a gentle rhythm of Michael Hudson finding a case that dumped core, Tim Peters fixing the bug, Michael finding another case, and round and round it went.\nOn Windows, Python can now be compiled with Borland C thanks to a number of patches contributed by Stephen Hansen, though the result isn\u2019t fully functional yet. (But this is progress\u2026)\nAnother Windows enhancement: Wise Solutions generously offered PythonLabs use of their InstallerMaster 8.1 system. Earlier PythonLabs Windows installers used Wise 5.0a, which was beginning to show its age. (Packaged up by Tim Peters.)\nFiles ending in\n.pyw\ncan now be imported on Windows..pyw\nis a Windows-only thing, used to indicate that a script needs to be run using PYTHONW.EXE instead of PYTHON.EXE in order to prevent a DOS console from popping up to display the output. This patch makes it possible to import such scripts, in case they\u2019re also usable as modules. (Implemented by David Bolen.)On platforms where Python uses the C\ndlopen()\nfunction to load extension modules, it\u2019s now possible to set the flags used bydlopen()\nusing thesys.getdlopenflags()\nandsys.setdlopenflags()\nfunctions. (Contributed by Bram Stolk.)The\npow()\nbuilt-in function no longer supports 3 arguments when floating-point numbers are supplied.pow(x, y, z)\nreturns(x**y) % z\n, but this is never useful for floating-point numbers, and the final result varies unpredictably depending on the platform. A call such aspow(2.0, 8.0, 7.0)\nwill now raise aTypeError\nexception.\nAcknowledgements\u00b6\nThe author would like to thank the following people for offering suggestions, corrections and assistance with various drafts of this article: Fred Bremmer, Keith Briggs, Andrew Dalke, Fred L. 
Drake, Jr., Carel Fellinger, David Goodger, Mark Hammond, Stephen Hansen, Michael Hudson, Jack Jansen, Marc-Andr\u00e9 Lemburg, Martin von L\u00f6wis, Fredrik Lundh, Michael McLay, Nick Mathewson, Paul Moore, Gustavo Niemeyer, Don O\u2019Donnell, Joonas Paalasma, Tim Peters, Jens Quade, Tom Reinhardt, Neil Schemenauer, Guido van Rossum, Greg Ward, Edward Welbourne.", "code_snippets": ["\n ", " ", "\n ", "\n ", "\n", "\n", "\n", "\n", "\n", "\n ", " ", " ", " ", " ", " ", "\n ", "\n ", " ", " ", "\n ", " ", " ", "\n", " ", " ", "\n", "\n", "\n ", " ", "\n ", "\n ", " ", " ", "\n\n ", " ", " ", "\n ", "\n ", " ", " ", "\n", " ", "\n\n", "\n ", " ", " ", "\n ", "\n ", "\n ", "\n ", "\n ", "\n ", "\n ", "\n ", "\n\n ", " ", " ", " ", " ", "\n", " ", "\n ", " ", " ", " ", "\n ", " \\\n ", " \\\n ", " \\\n ", " \\\n", " ", "\n ", " ", " ", " ", "\n \\ ", "\n \\ ", "\n \\ ", "\n \\ ", "\n ", "\n", " ", "\n ", " ", "\n ", "\n ", " ", "\n ", "\n ", "\n", "\n ", " ", "\n ", " ", " ", " ", " ", "\n ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", "\n\n ", "\n ", "\n ", "\n ", " ", " ", " ", "\n ", "\n ", "\n", "\n", " ", " ", " ", " ", "\n", "\n", " ", " ", "\n", " ", "\n", "\n", " ", " ", "\n", " ", "\n", "\n", " ", " ", "\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n", " ", "\n ", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n File ", ", line ", ", in ", "\n", "\n", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", "\n", "\n", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", " ", "\n ", "\n ", "\n", "\n ", " ", " ", " ", "\n ", " ", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n File ", ", line ", ", in ", "\n File 
", ", line ", ", in ", "\n", "\n", "\n", "\n ", " ", "\n ", " ", " ", " ", "\n ", " ", "\n ", " ", "\n ", " ", " ", " ", "\n ", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", " ", " ", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", " ", "\n", "\n", "\n", "\n\n", "\n", "\n", "\n", "\n ", "\n ", "\n ", "\n ", " ", " ", " ", "\n ", "\n", " ", "\n ", "\n ", " ", " ", " ", " ", " ", " ", " ", "\n ", "\n ", " ", "\n", " ", " ", "\n", "\n ", "\n ", " ", "\n ", "\n ", " ", "\n", "\n", " ", " ", "\n ", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n\n", "\n", " ", " ", " ", " ", " ", "\n\n", "\n", "\n", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 12920} +{"url": "https://docs.python.org/3/c-api/marshal.html", "title": "Data marshalling support", "content": "Data marshalling support\u00b6\nThese routines allow C code to work with serialized objects using the same\ndata format as the marshal\nmodule. There are functions to write data\ninto the serialization format, and additional functions that can be used to\nread the data back. Files used to store marshalled data must be opened in\nbinary mode.\nNumeric values are stored with the least significant byte first.\nThe module supports several versions of the data format; see\nthe Python module documentation\nfor details.\n-\nPy_MARSHAL_VERSION\u00b6\nThe current format version. See\nmarshal.version\n.\n-\nvoid PyMarshal_WriteLongToFile(long value, FILE *file, int version)\u00b6\nMarshal a long integer, value, to file. This will only write the least-significant 32 bits of value; regardless of the size of the native long type. version indicates the file format.\nThis function can fail, in which case it sets the error indicator. Use\nPyErr_Occurred()\nto check for that.\n-\nvoid PyMarshal_WriteObjectToFile(PyObject *value, FILE *file, int version)\u00b6\nMarshal a Python object, value, to file. 
version indicates the file format.\nThis function can fail, in which case it sets the error indicator. Use\nPyErr_Occurred()\nto check for that.\n-\nPyObject *PyMarshal_WriteObjectToString(PyObject *value, int version)\u00b6\n- Return value: New reference.\nReturn a bytes object containing the marshalled representation of value. version indicates the file format.\nThe following functions allow marshalled values to be read back in.\n-\nlong PyMarshal_ReadLongFromFile(FILE *file)\u00b6\nReturn a C long from the data stream in a FILE* opened for reading. Only a 32-bit value can be read in using this function, regardless of the native size of long.\nOn error, sets the appropriate exception (\nEOFError\n) and returns-1\n.\n-\nint PyMarshal_ReadShortFromFile(FILE *file)\u00b6\nReturn a C short from the data stream in a FILE* opened for reading. Only a 16-bit value can be read in using this function, regardless of the native size of short.\nOn error, sets the appropriate exception (\nEOFError\n) and returns-1\n.\n-\nPyObject *PyMarshal_ReadObjectFromFile(FILE *file)\u00b6\n- Return value: New reference.\nReturn a Python object from the data stream in a FILE* opened for reading.\nOn error, sets the appropriate exception (\nEOFError\n,ValueError\norTypeError\n) and returnsNULL\n.\n-\nPyObject *PyMarshal_ReadLastObjectFromFile(FILE *file)\u00b6\n- Return value: New reference.\nReturn a Python object from the data stream in a FILE* opened for reading. Unlike\nPyMarshal_ReadObjectFromFile()\n, this function assumes that no further objects will be read from the file, allowing it to aggressively load file data into memory so that the de-serialization can operate from data in memory rather than reading a byte at a time from the file. 
Only use this variant if you are certain that you won\u2019t be reading anything else from the file.On error, sets the appropriate exception (\nEOFError\n,ValueError\norTypeError\n) and returnsNULL\n.\n-\nPyObject *PyMarshal_ReadObjectFromString(const char *data, Py_ssize_t len)\u00b6\n- Return value: New reference.\nReturn a Python object from the data stream in a byte buffer containing len bytes pointed to by data.\nOn error, sets the appropriate exception (\nEOFError\n,ValueError\norTypeError\n) and returnsNULL\n.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 810} +{"url": "https://docs.python.org/3/c-api/weakref.html", "title": "Weak Reference Objects", "content": "Weak Reference Objects\u00b6\nPython supports weak references as first-class objects. There are two specific object types which directly implement weak references. The first is a simple reference object, and the second acts as a proxy for the original object as much as it can.\n-\nint PyWeakref_Check(PyObject *ob)\u00b6\nReturn non-zero if ob is either a reference or proxy object. This function always succeeds.\n-\nint PyWeakref_CheckRef(PyObject *ob)\u00b6\nReturn non-zero if ob is a reference object or a subclass of the reference type. This function always succeeds.\n-\nint PyWeakref_CheckRefExact(PyObject *ob)\u00b6\nReturn non-zero if ob is a reference object, but not a subclass of the reference type. This function always succeeds.\n-\nint PyWeakref_CheckProxy(PyObject *ob)\u00b6\nReturn non-zero if ob is a proxy object. This function always succeeds.\n-\nPyObject *PyWeakref_NewRef(PyObject *ob, PyObject *callback)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a weak reference object for the object ob. This will always return a new reference, but is not guaranteed to create a new object; an existing reference object may be returned. 
The second parameter, callback, can be a callable object that receives notification when ob is garbage collected; it should accept a single parameter, which will be the weak reference object itself. callback may also be\nNone\norNULL\n. If ob is not a weakly referenceable object, or if callback is not callable,None\n, orNULL\n, this will returnNULL\nand raiseTypeError\n.See also\nPyType_SUPPORTS_WEAKREFS()\nfor checking if ob is weakly referenceable.\n-\nPyObject *PyWeakref_NewProxy(PyObject *ob, PyObject *callback)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a weak reference proxy object for the object ob. This will always return a new reference, but is not guaranteed to create a new object; an existing proxy object may be returned. The second parameter, callback, can be a callable object that receives notification when ob is garbage collected; it should accept a single parameter, which will be the weak reference object itself. callback may also be\nNone\norNULL\n. If ob is not a weakly referenceable object, or if callback is not callable,None\n, orNULL\n, this will returnNULL\nand raiseTypeError\n.See also\nPyType_SUPPORTS_WEAKREFS()\nfor checking if ob is weakly referenceable.\n-\nint PyWeakref_GetRef(PyObject *ref, PyObject **pobj)\u00b6\n- Part of the Stable ABI since version 3.13.\nGet a strong reference to the referenced object from a weak reference, ref, into *pobj.\nOn success, set *pobj to a new strong reference to the referenced object and return 1.\nIf the reference is dead, set *pobj to\nNULL\nand return 0.On error, raise an exception and return -1.\nAdded in version 3.13.\n-\nPyObject *PyWeakref_GetObject(PyObject *ref)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nReturn a borrowed reference to the referenced object from a weak reference, ref. If the referent is no longer live, returns\nPy_None\n.Note\nThis function returns a borrowed reference to the referenced object. 
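The lifecycle these functions manage mirrors the Python-level weakref.ref; a sketch of the equivalent behavior (collection is immediate here because CPython uses reference counting):

```python
import weakref

class Target:
    pass

collected = []
obj = Target()
# The callback receives the weak reference object itself, mirroring the
# callback semantics described for PyWeakref_NewRef.
ref = weakref.ref(obj, collected.append)

assert ref() is obj   # live referent: calling the ref yields a strong reference
del obj               # refcount drops to zero; CPython collects immediately
assert ref() is None  # dead reference, cf. PyWeakref_GetRef setting *pobj to NULL
assert collected and collected[0] is ref
```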
This means that you should always call\nPy_INCREF()\non the object except when it cannot be destroyed before the last usage of the borrowed reference.Deprecated since version 3.13, will be removed in version 3.15: Use\nPyWeakref_GetRef()\ninstead.\n-\nPyObject *PyWeakref_GET_OBJECT(PyObject *ref)\u00b6\n- Return value: Borrowed reference.\nSimilar to\nPyWeakref_GetObject()\n, but does no error checking.Deprecated since version 3.13, will be removed in version 3.15: Use\nPyWeakref_GetRef()\ninstead.\n-\nint PyWeakref_IsDead(PyObject *ref)\u00b6\nTest if the weak reference ref is dead. Returns 1 if the reference is dead, 0 if it is alive, and -1 with an error set if ref is not a weak reference object.\nAdded in version 3.14.\n-\nvoid PyObject_ClearWeakRefs(PyObject *object)\u00b6\n- Part of the Stable ABI.\nThis function is called by the\ntp_dealloc\nhandler to clear weak references.This iterates through the weak references for object and calls callbacks for those references which have one. It returns when all callbacks have been attempted.\n-\nvoid PyUnstable_Object_ClearWeakRefsNoCallbacks(PyObject *object)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nClears the weakrefs for object without calling the callbacks.\nThis function is called by the\ntp_dealloc\nhandler for types with finalizers (i.e.,__del__()\n). 
The handler for those objects first callsPyObject_ClearWeakRefs()\nto clear weakrefs and call their callbacks, then the finalizer, and finally this function to clear any weakrefs that may have been created by the finalizer.In most circumstances, it\u2019s more appropriate to use\nPyObject_ClearWeakRefs()\nto clear weakrefs instead of this function.Added in version 3.13.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1181} +{"url": "https://docs.python.org/3/howto/remote_debugging.html", "title": "Remote debugging attachment protocol", "content": "Remote debugging attachment protocol\u00b6\nThis protocol enables external tools to attach to a running CPython process and execute Python code remotely.\nMost platforms require elevated privileges to attach to another Python process.\nDisabling remote debugging\u00b6\nTo disable remote debugging support, use any of the following:\nSet the\nPYTHON_DISABLE_REMOTE_DEBUG\nenvironment variable to1\nbefore starting the interpreter.Use the\n-X disable_remote_debug\ncommand-line option.Compile Python with the\n--without-remote-debug\nbuild flag.\nPermission requirements\u00b6\nAttaching to a running Python process for remote debugging requires elevated privileges on most platforms. The specific requirements and troubleshooting steps depend on your operating system:\nLinux\nThe tracer process must have the CAP_SYS_PTRACE\ncapability or equivalent\nprivileges. You can only trace processes you own and can signal. Tracing may\nfail if the process is already being traced, or if it is running with\nset-user-ID or set-group-ID. 
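Whether the kernel will allow attaching can be checked up front by reading Yama's ptrace_scope setting (a Linux-only helper sketch; the file is absent on other platforms or when Yama is not enabled):

```python
from pathlib import Path

def yama_ptrace_scope():
    """Return the current Yama ptrace_scope value (0-3), or None if the
    setting is not present (non-Linux, or Yama disabled)."""
    path = Path("/proc/sys/kernel/yama/ptrace_scope")
    try:
        return int(path.read_text())
    except (OSError, ValueError):
        return None
```

A value of 0 means unrestricted same-user tracing; higher values progressively restrict who may attach.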
Security modules like Yama may further restrict\ntracing.\nTo temporarily relax ptrace restrictions (until reboot), run:\necho 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope\nNote\nDisabling ptrace_scope\nreduces system hardening and should only be done\nin trusted environments.\nIf running inside a container, use --cap-add=SYS_PTRACE\nor\n--privileged\n, and run as root if needed.\nTry re-running the command with elevated privileges:\nsudo -E !!\nmacOS\nTo attach to another process, you typically need to run your debugging tool\nwith elevated privileges. This can be done by using sudo\nor running as\nroot.\nEven when attaching to processes you own, macOS may block debugging unless the debugger is run with root privileges due to system security restrictions.\nWindows\nTo attach to another process, you usually need to run your debugging tool with administrative privileges. Start the command prompt or terminal as Administrator.\nSome processes may still be inaccessible even with Administrator rights,\nunless you have the SeDebugPrivilege\nprivilege enabled.\nTo resolve file or folder access issues, adjust the security permissions:\nRight-click the file or folder and select Properties.\nGo to the Security tab to view users and groups with access.\nClick Edit to modify permissions.\nSelect your user account.\nIn Permissions, check Read or Full control as needed.\nClick Apply, then OK to confirm.\nNote\nEnsure you\u2019ve satisfied all Permission requirements before proceeding.\nThis section describes the low-level protocol that enables external tools to inject and execute a Python script within a running CPython process.\nThis mechanism forms the basis of the sys.remote_exec()\nfunction, which\ninstructs a remote Python process to execute a .py\nfile. However, this\nsection does not document the usage of that function. 
Instead, it provides a\ndetailed explanation of the underlying protocol, which takes as input the\npid\nof a target Python process and the path to a Python source file to be\nexecuted. This information supports independent reimplementation of the\nprotocol, regardless of programming language.\nWarning\nThe execution of the injected script depends on the interpreter reaching a safe evaluation point. As a result, execution may be delayed depending on the runtime state of the target process.\nOnce injected, the script is executed by the interpreter within the target process the next time a safe evaluation point is reached. This approach enables remote execution capabilities without modifying the behavior or structure of the running Python application.\nSubsequent sections provide a step-by-step description of the protocol, including techniques for locating interpreter structures in memory, safely accessing internal fields, and triggering code execution. Platform-specific variations are noted where applicable, and example implementations are included to clarify each operation.\nLocating the PyRuntime structure\u00b6\nCPython places the PyRuntime\nstructure in a dedicated binary section to\nhelp external tools find it at runtime. The name and format of this section\nvary by platform. For example, .PyRuntime\nis used on ELF systems, and\n__DATA,__PyRuntime\nis used on macOS. Tools can find the offset of this\nstructure by examining the binary on disk.\nThe PyRuntime\nstructure contains CPython\u2019s global interpreter state and\nprovides access to other internal data, including the list of interpreters,\nthread states, and debugger support fields.\nTo work with a remote Python process, a debugger must first find the memory\naddress of the PyRuntime\nstructure in the target process. 
This address can\u2019t be hardcoded or calculated from a symbol name, because it depends on where the operating system loaded the binary.\nThe method for finding PyRuntime depends on the platform, but the steps are the same in general:\nFind the base address where the Python binary or shared library was loaded in the target process.\nUse the on-disk binary to locate the offset of the .PyRuntime section.\nAdd the section offset to the base address to compute the address in memory.\nThe sections below explain how to do this on each supported platform and include example code.\nLinux (ELF)\nTo find the PyRuntime structure on Linux:\nRead the process\u2019s memory map (for example, /proc/<pid>/maps) to find the address where the Python executable or libpython was loaded.\nParse the ELF section headers in the binary to get the offset of the .PyRuntime section.\nAdd that offset to the base address from step 1 to get the memory address of PyRuntime.\nThe following is an example implementation:\ndef find_py_runtime_linux(pid: int) -> int:\n    # Step 1: Try to find the Python executable in memory\n    binary_path, base_address = find_mapped_binary(\n        pid, name_contains=\"python\"\n    )\n    # Step 2: Fall back to the shared library if the executable is not found\n    if binary_path is None:\n        binary_path, base_address = find_mapped_binary(\n            pid, name_contains=\"libpython\"\n        )\n    # Step 3: Parse ELF headers to get the .PyRuntime section offset\n    section_offset = parse_elf_section_offset(\n        binary_path, \".PyRuntime\"\n    )\n    # Step 4: Compute PyRuntime address in memory\n    return base_address + section_offset\nOn Linux systems, there are two main approaches to read memory from another process. The first is through the /proc filesystem, specifically by reading from /proc/[pid]/mem, which provides direct access to the process\u2019s memory. This requires appropriate permissions: either being the same user as the target process or having root access.
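A read_process_memory primitive like the one used in later examples can be sketched for Linux via the /proc filesystem (a minimal sketch, assuming you are permitted to trace the target):

```python
def read_process_memory(pid: int, address: int, size: int) -> bytes:
    # /proc/[pid]/mem gives direct access to the target's address space;
    # opening it requires the same permissions as ptrace-attaching.
    with open(f"/proc/{pid}/mem", "rb", buffering=0) as mem:
        mem.seek(address)
        data = mem.read(size)
    if data is None or len(data) != size:
        raise RuntimeError(f"short read at {address:#x}")
    return data
```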
The second approach is using the\nprocess_vm_readv()\nsystem call which provides a more efficient way to copy\nmemory between processes. While ptrace\u2019s PTRACE_PEEKTEXT\noperation can also be\nused to read memory, it is significantly slower as it only reads one word at a\ntime and requires multiple context switches between the tracer and tracee\nprocesses.\nFor parsing ELF sections, the process involves reading and interpreting the ELF file format structures from the binary file on disk. The ELF header contains a pointer to the section header table. Each section header contains metadata about a section including its name (stored in a separate string table), offset, and size. To find a specific section like .PyRuntime, you need to walk through these headers and match the section name. The section header then provides the offset where that section exists in the file, which can be used to calculate its runtime address when the binary is loaded into memory.\nYou can read more about the ELF file format in the ELF specification.\nmacOS (Mach-O)\nTo find the PyRuntime\nstructure on macOS:\nCall\ntask_for_pid()\nto get themach_port_t\ntask port for the target process. This handle is needed to read memory using APIs likemach_vm_read_overwrite\nandmach_vm_region\n.Scan the memory regions to find the one containing the Python executable or\nlibpython\n.Load the binary file from disk and parse the Mach-O headers to find the section named\nPyRuntime\nin the__DATA\nsegment. 
On macOS, symbol names are automatically prefixed with an underscore, so thePyRuntime\nsymbol appears as_PyRuntime\nin the symbol table, but the section name is not affected.\nThe following is an example implementation:\ndef find_py_runtime_macos(pid: int) -> int:\n# Step 1: Get access to the process's memory\nhandle = get_memory_access_handle(pid)\n# Step 2: Try to find the Python executable in memory\nbinary_path, base_address = find_mapped_binary(\nhandle, name_contains=\"python\"\n)\n# Step 3: Fallback to libpython if the executable is not found\nif binary_path is None:\nbinary_path, base_address = find_mapped_binary(\nhandle, name_contains=\"libpython\"\n)\n# Step 4: Parse Mach-O headers to get __DATA,__PyRuntime section offset\nsection_offset = parse_macho_section_offset(\nbinary_path, \"__DATA\", \"__PyRuntime\"\n)\n# Step 5: Compute the PyRuntime address in memory\nreturn base_address + section_offset\nOn macOS, accessing another process\u2019s memory requires using Mach-O specific APIs\nand file formats. The first step is obtaining a task_port\nhandle via\ntask_for_pid()\n, which provides access to the target process\u2019s memory space.\nThis handle enables memory operations through APIs like\nmach_vm_read_overwrite()\n.\nThe process memory can be examined using mach_vm_region()\nto scan through the\nvirtual memory space, while proc_regionfilename()\nhelps identify which binary\nfiles are loaded at each memory region. When the Python binary or library is\nfound, its Mach-O headers need to be parsed to locate the PyRuntime\nstructure.\nThe Mach-O format organizes code and data into segments and sections. The\nPyRuntime\nstructure lives in a section named __PyRuntime\nwithin the\n__DATA\nsegment. The actual runtime address calculation involves finding the\n__TEXT\nsegment which serves as the binary\u2019s base address, then locating the\n__DATA\nsegment containing our target section. 
The final address is computed by\ncombining the base address with the appropriate section offsets from the Mach-O\nheaders.\nNote that accessing another process\u2019s memory on macOS typically requires elevated privileges - either root access or special security entitlements granted to the debugging process.\nWindows (PE)\nTo find the PyRuntime\nstructure on Windows:\nUse the ToolHelp API to enumerate all modules loaded in the target process. This is done using functions such as CreateToolhelp32Snapshot, Module32First, and Module32Next.\nIdentify the module corresponding to\npython.exe\norpythonXY.dll\n, whereX\nandY\nare the major and minor version numbers of the Python version, and record its base address.Locate the\nPyRuntim\nsection. Due to the PE format\u2019s 8-character limit on section names (defined asIMAGE_SIZEOF_SHORT_NAME\n), the original namePyRuntime\nis truncated. This section contains thePyRuntime\nstructure.Retrieve the section\u2019s relative virtual address (RVA) and add it to the base address of the module.\nThe following is an example implementation:\ndef find_py_runtime_windows(pid: int) -> int:\n# Step 1: Try to find the Python executable in memory\nbinary_path, base_address = find_loaded_module(\npid, name_contains=\"python\"\n)\n# Step 2: Fallback to shared pythonXY.dll if the executable is not\n# found\nif binary_path is None:\nbinary_path, base_address = find_loaded_module(\npid, name_contains=\"python3\"\n)\n# Step 3: Parse PE section headers to get the RVA of the PyRuntime\n# section. 
The section name appears as \"PyRuntim\" due to the\n# 8-character limit defined by the PE format (IMAGE_SIZEOF_SHORT_NAME).\nsection_rva = parse_pe_section_offset(binary_path, \"PyRuntim\")\n# Step 4: Compute PyRuntime address in memory\nreturn base_address + section_rva\nOn Windows, accessing another process\u2019s memory requires using the Windows API\nfunctions like CreateToolhelp32Snapshot()\nand Module32First()/Module32Next()\nto enumerate loaded modules. The OpenProcess()\nfunction provides a handle to\naccess the target process\u2019s memory space, enabling memory operations through\nReadProcessMemory()\n.\nThe process memory can be examined by enumerating loaded modules to find the\nPython binary or DLL. When found, its PE headers need to be parsed to locate the\nPyRuntime\nstructure.\nThe PE format organizes code and data into sections. The PyRuntime\nstructure\nlives in a section named \u201cPyRuntim\u201d (truncated from \u201cPyRuntime\u201d due to PE\u2019s\n8-character name limit). The actual runtime address calculation involves finding\nthe module\u2019s base address from the module entry, then locating our target\nsection in the PE headers. The final address is computed by combining the base\naddress with the section\u2019s virtual address from the PE section headers.\nNote that accessing another process\u2019s memory on Windows typically requires\nappropriate privileges - either administrative access or the SeDebugPrivilege\nprivilege granted to the debugging process.\nReading _Py_DebugOffsets\u00b6\nOnce the address of the PyRuntime\nstructure has been determined, the next\nstep is to read the _Py_DebugOffsets\nstructure located at the beginning of\nthe PyRuntime\nblock.\nThis structure provides version-specific field offsets that are needed to safely read interpreter and thread state memory. 
These offsets vary between CPython versions and must be checked before use to ensure they are compatible.\nTo read and check the debug offsets, follow these steps:\nRead memory from the target process starting at the\nPyRuntime\naddress, covering the same number of bytes as the_Py_DebugOffsets\nstructure. This structure is located at the very start of thePyRuntime\nmemory block. Its layout is defined in CPython\u2019s internal headers and stays the same within a given minor version, but may change in major versions.Check that the structure contains valid data:\nThe\ncookie\nfield must match the expected debug marker.The\nversion\nfield must match the version of the Python interpreter used by the debugger.If either the debugger or the target process is using a pre-release version (for example, an alpha, beta, or release candidate), the versions must match exactly.\nThe\nfree_threaded\nfield must have the same value in both the debugger and the target process.\nIf the structure is valid, the offsets it contains can be used to locate fields in memory. 
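Deserializing the raw bytes is plain fixed-layout unpacking with struct; in this sketch the field order and sizes are illustrative assumptions, since the real _Py_DebugOffsets layout comes from CPython's internal headers:

```python
import struct
from dataclasses import dataclass

@dataclass
class DebugOffsets:
    cookie: bytes
    version: int
    free_threaded: int

def parse_debug_offsets(data: bytes) -> DebugOffsets:
    # Assumed layout for illustration only: an 8-byte cookie followed by
    # two unsigned 64-bit fields. The real field order and sizes are
    # defined in CPython's internal headers and vary between versions.
    cookie, version, free_threaded = struct.unpack_from("<8sQQ", data)
    return DebugOffsets(cookie, version, free_threaded)
```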
If any check fails, the debugger should stop the operation to avoid reading memory in the wrong format.\nThe following is an example implementation that reads and checks\n_Py_DebugOffsets\n:\ndef read_debug_offsets(pid: int, py_runtime_addr: int) -> DebugOffsets:\n# Step 1: Read memory from the target process at the PyRuntime address\ndata = read_process_memory(\npid, address=py_runtime_addr, size=DEBUG_OFFSETS_SIZE\n)\n# Step 2: Deserialize the raw bytes into a _Py_DebugOffsets structure\ndebug_offsets = parse_debug_offsets(data)\n# Step 3: Validate the contents of the structure\nif debug_offsets.cookie != EXPECTED_COOKIE:\nraise RuntimeError(\"Invalid or missing debug cookie\")\nif debug_offsets.version != LOCAL_PYTHON_VERSION:\nraise RuntimeError(\n\"Mismatch between caller and target Python versions\"\n)\nif debug_offsets.free_threaded != LOCAL_FREE_THREADED:\nraise RuntimeError(\"Mismatch in free-threaded configuration\")\nreturn debug_offsets\nWarning\nProcess suspension recommended\nTo avoid race conditions and ensure memory consistency, it is strongly recommended that the target process be suspended before performing any operations that read or write internal interpreter state. The Python runtime may concurrently mutate interpreter data structures\u2014such as creating or destroying threads\u2014during normal execution. This can result in invalid memory reads or writes.\nA debugger may suspend execution by attaching to the process with ptrace\nor by sending a SIGSTOP\nsignal. Execution should only be resumed after\ndebugger-side memory operations are complete.\nNote\nSome tools, such as profilers or sampling-based debuggers, may operate on a running process without suspension. In such cases, tools must be explicitly designed to handle partially updated or inconsistent memory. 
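The suspend-and-resume discipline recommended in the warning above can be wrapped in a context manager; a POSIX-only sketch using SIGSTOP and SIGCONT:

```python
import os
import signal
from contextlib import contextmanager

@contextmanager
def suspended(pid: int):
    """Stop the target process while reading or writing its memory,
    then let it resume and reach the next safe evaluation point."""
    os.kill(pid, signal.SIGSTOP)
    try:
        yield
    finally:
        os.kill(pid, signal.SIGCONT)
```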
For most debugger implementations, suspending the process remains the safest and most robust approach.\nLocating the interpreter and thread state\u00b6\nBefore code can be injected and executed in a remote Python process, the\ndebugger must choose a thread in which to schedule execution. This is necessary\nbecause the control fields used to perform remote code injection are located in\nthe _PyRemoteDebuggerSupport\nstructure, which is embedded in a\nPyThreadState\nobject. These fields are modified by the debugger to request\nexecution of injected scripts.\nThe PyThreadState\nstructure represents a thread running inside a Python\ninterpreter. It maintains the thread\u2019s evaluation context and contains the\nfields required for debugger coordination. Locating a valid PyThreadState\nis therefore a key prerequisite for triggering execution remotely.\nA thread is typically selected based on its role or ID. In most cases, the main thread is used, but some tools may target a specific thread by its native thread ID. Once the target thread is chosen, the debugger must locate both the interpreter and the associated thread state structures in memory.\nThe relevant internal structures are defined as follows:\nPyInterpreterState\nrepresents an isolated Python interpreter instance. Each interpreter maintains its own set of imported modules, built-in state, and thread state list. Although most Python applications use a single interpreter, CPython supports multiple interpreters in the same process.PyThreadState\nrepresents a thread running within an interpreter. It contains execution state and the control fields used by the debugger.\nTo locate a thread:\nUse the offset\nruntime_state.interpreters_head\nto obtain the address of the first interpreter in thePyRuntime\nstructure. This is the entry point to the linked list of active interpreters.Use the offset\ninterpreter_state.threads_main\nto access the main thread state associated with the selected interpreter. 
This is typically the most reliable thread to target.Optionally, use the offset\ninterpreter_state.threads_head\nto iterate through the linked list of all thread states. EachPyThreadState\nstructure contains anative_thread_id\nfield, which may be compared to a target thread ID to find a specific thread.Once a valid\nPyThreadState\nhas been found, its address can be used in later steps of the protocol, such as writing debugger control fields and scheduling execution.\nThe following is an example implementation that locates the main thread state:\ndef find_main_thread_state(\npid: int, py_runtime_addr: int, debug_offsets: DebugOffsets,\n) -> int:\n# Step 1: Read interpreters_head from PyRuntime\ninterp_head_ptr = (\npy_runtime_addr + debug_offsets.runtime_state.interpreters_head\n)\ninterp_addr = read_pointer(pid, interp_head_ptr)\nif interp_addr == 0:\nraise RuntimeError(\"No interpreter found in the target process\")\n# Step 2: Read the threads_main pointer from the interpreter\nthreads_main_ptr = (\ninterp_addr + debug_offsets.interpreter_state.threads_main\n)\nthread_state_addr = read_pointer(pid, threads_main_ptr)\nif thread_state_addr == 0:\nraise RuntimeError(\"Main thread state is not available\")\nreturn thread_state_addr\nThe following example demonstrates how to locate a thread by its native thread ID:\ndef find_thread_by_id(\npid: int,\ninterp_addr: int,\ndebug_offsets: DebugOffsets,\ntarget_tid: int,\n) -> int:\n# Start at threads_head and walk the linked list\nthread_ptr = read_pointer(\npid,\ninterp_addr + debug_offsets.interpreter_state.threads_head\n)\nwhile thread_ptr:\nnative_tid_ptr = (\nthread_ptr + debug_offsets.thread_state.native_thread_id\n)\nnative_tid = read_int(pid, native_tid_ptr)\nif native_tid == target_tid:\nreturn thread_ptr\nthread_ptr = read_pointer(\npid,\nthread_ptr + debug_offsets.thread_state.next\n)\nraise RuntimeError(\"Thread with the given ID was not found\")\nOnce a valid thread state has been located, the debugger can 
proceed with modifying its control fields and scheduling execution, as described in the next section.\nWriting control information\u00b6\nOnce a valid PyThreadState\nstructure has been identified, the debugger may\nmodify control fields within it to schedule the execution of a specified Python\nscript. These control fields are checked periodically by the interpreter, and\nwhen set correctly, they trigger the execution of remote code at a safe point\nin the evaluation loop.\nEach PyThreadState\ncontains a _PyRemoteDebuggerSupport\nstructure used\nfor communication between the debugger and the interpreter. The locations of\nits fields are defined by the _Py_DebugOffsets\nstructure and include the\nfollowing:\ndebugger_script_path\n: A fixed-size buffer that holds the full path to a Python source file (.py\n). This file must be accessible and readable by the target process when execution is triggered.debugger_pending_call\n: An integer flag. Setting this to1\ntells the interpreter that a script is ready to be executed.eval_breaker\n: A field checked by the interpreter during execution. Setting bit 5 (_PY_EVAL_PLEASE_STOP_BIT\n, value1U << 5\n) in this field causes the interpreter to pause and check for debugger activity.\nTo complete the injection, the debugger must perform the following steps:\nWrite the full script path into the\ndebugger_script_path\nbuffer.Set\ndebugger_pending_call\nto1\n.Read the current value of\neval_breaker\n, set bit 5 (_PY_EVAL_PLEASE_STOP_BIT\n), and write the updated value back. 
This signals the interpreter to check for debugger activity.\nThe following is an example implementation:\ndef inject_script(\npid: int,\nthread_state_addr: int,\ndebug_offsets: DebugOffsets,\nscript_path: str\n) -> None:\n# Compute the base offset of _PyRemoteDebuggerSupport\nsupport_base = (\nthread_state_addr +\ndebug_offsets.debugger_support.remote_debugger_support\n)\n# Step 1: Write the script path into debugger_script_path\nscript_path_ptr = (\nsupport_base +\ndebug_offsets.debugger_support.debugger_script_path\n)\nwrite_string(pid, script_path_ptr, script_path)\n# Step 2: Set debugger_pending_call to 1\npending_ptr = (\nsupport_base +\ndebug_offsets.debugger_support.debugger_pending_call\n)\nwrite_int(pid, pending_ptr, 1)\n# Step 3: Set _PY_EVAL_PLEASE_STOP_BIT (bit 5, value 1 << 5) in\n# eval_breaker\neval_breaker_ptr = (\nthread_state_addr +\ndebug_offsets.debugger_support.eval_breaker\n)\nbreaker = read_int(pid, eval_breaker_ptr)\nbreaker |= (1 << 5)\nwrite_int(pid, eval_breaker_ptr, breaker)\nOnce these fields are set, the debugger may resume the process (if it was suspended). The interpreter will process the request at the next safe evaluation point, load the script from disk, and execute it.\nIt is the responsibility of the debugger to ensure that the script file remains present and accessible to the target process during execution.\nNote\nScript execution is asynchronous. The script file cannot be deleted immediately after injection. The debugger should wait until the injected script has produced an observable effect before removing the file. This effect depends on what the script is designed to do. For example, a debugger might wait until the remote process connects back to a socket before removing the script. 
Once such an effect is observed, it is safe to assume the file is no longer needed.\nSummary\u00b6\nTo inject and execute a Python script in a remote process:\nLocate the PyRuntime structure in the target process\u2019s memory.\nRead and validate the _Py_DebugOffsets structure at the beginning of PyRuntime.\nUse the offsets to locate a valid PyThreadState.\nWrite the path to a Python script into debugger_script_path.\nSet the debugger_pending_call flag to 1.\nSet _PY_EVAL_PLEASE_STOP_BIT in the eval_breaker field.\nResume the process (if suspended). The script will execute at the next safe evaluation point.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 5877}
{"url": "https://docs.python.org/3/library/asyncio-task.html", "title": "Coroutines and Tasks", "content": "Coroutines and Tasks\u00b6\nThis section outlines high-level asyncio APIs to work with coroutines and Tasks.\nCoroutines\u00b6\nSource code: Lib/asyncio/coroutines.py\nCoroutines declared with the async/await syntax are the preferred way of writing asyncio applications. For example, the following snippet of code prints \u201chello\u201d, waits 1 second, and then prints \u201cworld\u201d:\n>>> import asyncio\n>>> async def main():\n...     print('hello')\n...     await asyncio.sleep(1)\n...     print('world')\n>>> asyncio.run(main())\nhello\nworld\nNote that simply calling a coroutine will not schedule it to be executed:\n>>> main()\n<coroutine object main at 0x...>\nTo actually run a coroutine, asyncio provides the following mechanisms:\nThe asyncio.run() function to run the top-level entry point \u201cmain()\u201d function (see the above example).\nAwaiting on a coroutine.
The following snippet of code will print \u201chello\u201d after waiting for 1 second, and then print \u201cworld\u201d after waiting for another 2 seconds:\nimport asyncio import time async def say_after(delay, what): await asyncio.sleep(delay) print(what) async def main(): print(f\"started at {time.strftime('%X')}\") await say_after(1, 'hello') await say_after(2, 'world') print(f\"finished at {time.strftime('%X')}\") asyncio.run(main())\nExpected output:\nstarted at 17:13:52 hello world finished at 17:13:55\nThe\nasyncio.create_task()\nfunction to run coroutines concurrently as asyncioTasks\n.Let\u2019s modify the above example and run two\nsay_after\ncoroutines concurrently:async def main(): task1 = asyncio.create_task( say_after(1, 'hello')) task2 = asyncio.create_task( say_after(2, 'world')) print(f\"started at {time.strftime('%X')}\") # Wait until both tasks are completed (should take # around 2 seconds.) await task1 await task2 print(f\"finished at {time.strftime('%X')}\")\nNote that expected output now shows that the snippet runs 1 second faster than before:\nstarted at 17:14:32 hello world finished at 17:14:34\nThe\nasyncio.TaskGroup\nclass provides a more modern alternative tocreate_task()\n. Using this API, the last example becomes:async def main(): async with asyncio.TaskGroup() as tg: task1 = tg.create_task( say_after(1, 'hello')) task2 = tg.create_task( say_after(2, 'world')) print(f\"started at {time.strftime('%X')}\") # The await is implicit when the context manager exits. print(f\"finished at {time.strftime('%X')}\")\nThe timing and output should be the same as for the previous version.\nAdded in version 3.11:\nasyncio.TaskGroup\n.\nAwaitables\u00b6\nWe say that an object is an awaitable object if it can be used\nin an await\nexpression. 
Many asyncio APIs are designed to\naccept awaitables.\nThere are three main types of awaitable objects: coroutines, Tasks, and Futures.\nCoroutines\nPython coroutines are awaitables and therefore can be awaited from other coroutines:\nimport asyncio\nasync def nested():\nreturn 42\nasync def main():\n# Nothing happens if we just call \"nested()\".\n# A coroutine object is created but not awaited,\n# so it *won't run at all*.\nnested() # will raise a \"RuntimeWarning\".\n# Let's do it differently now and await it:\nprint(await nested()) # will print \"42\".\nasyncio.run(main())\nImportant\nIn this documentation the term \u201ccoroutine\u201d can be used for two closely related concepts:\na coroutine function: an\nasync def\nfunction;a coroutine object: an object returned by calling a coroutine function.\nTasks\nTasks are used to schedule coroutines concurrently.\nWhen a coroutine is wrapped into a Task with functions like\nasyncio.create_task()\nthe coroutine is automatically\nscheduled to run soon:\nimport asyncio\nasync def nested():\nreturn 42\nasync def main():\n# Schedule nested() to run soon concurrently\n# with \"main()\".\ntask = asyncio.create_task(nested())\n# \"task\" can now be used to cancel \"nested()\", or\n# can simply be awaited to wait until it is complete:\nawait task\nasyncio.run(main())\nFutures\nA Future\nis a special low-level awaitable object that\nrepresents an eventual result of an asynchronous operation.\nWhen a Future object is awaited it means that the coroutine will wait until the Future is resolved in some other place.\nFuture objects in asyncio are needed to allow callback-based code to be used with async/await.\nNormally there is no need to create Future objects at the application level code.\nFuture objects, sometimes exposed by libraries and some asyncio APIs, can be awaited:\nasync def main():\nawait function_that_returns_a_future_object()\n# this is also valid:\nawait 
asyncio.gather(\nfunction_that_returns_a_future_object(),\nsome_python_coroutine()\n)\nA good example of a low-level function that returns a Future object\nis loop.run_in_executor()\n.\nCreating Tasks\u00b6\nSource code: Lib/asyncio/tasks.py\n- asyncio.create_task(coro, *, name=None, context=None, eager_start=None, **kwargs)\u00b6\nWrap the coro coroutine into a\nTask\nand schedule its execution. Return the Task object.The full function signature is largely the same as that of the\nTask\nconstructor (or factory) - all of the keyword arguments to this function are passed through to that interface.An optional keyword-only context argument allows specifying a custom\ncontextvars.Context\nfor the coro to run in. The current context copy is created when no context is provided.An optional keyword-only eager_start argument allows specifying if the task should execute eagerly during the call to create_task, or be scheduled later. If eager_start is not passed the mode set by\nloop.set_task_factory()\nwill be used.The task is executed in the loop returned by\nget_running_loop()\n,RuntimeError\nis raised if there is no running loop in current thread.Note\nasyncio.TaskGroup.create_task()\nis a new alternative leveraging structural concurrency; it allows for waiting for a group of related tasks with strong safety guarantees.Important\nSave a reference to the result of this function, to avoid a task disappearing mid-execution. The event loop only keeps weak references to tasks. A task that isn\u2019t referenced elsewhere may get garbage collected at any time, even before it\u2019s done. For reliable \u201cfire-and-forget\u201d background tasks, gather them in a collection:\nbackground_tasks = set() for i in range(10): task = asyncio.create_task(some_coro(param=i)) # Add task to the set. This creates a strong reference. 
background_tasks.add(task) # To prevent keeping references to finished tasks forever, # make each task remove its own reference from the set after # completion: task.add_done_callback(background_tasks.discard)\nAdded in version 3.7.\nChanged in version 3.8: Added the name parameter.\nChanged in version 3.11: Added the context parameter.\nChanged in version 3.14: Added the eager_start parameter by passing on all kwargs.\nTask Cancellation\u00b6\nTasks can easily and safely be cancelled.\nWhen a task is cancelled, asyncio.CancelledError\nwill be raised\nin the task at the next opportunity.\nIt is recommended that coroutines use try/finally\nblocks to robustly\nperform clean-up logic. In case asyncio.CancelledError\nis explicitly caught, it should generally be propagated when\nclean-up is complete. asyncio.CancelledError\ndirectly subclasses\nBaseException\nso most code will not need to be aware of it.\nThe asyncio components that enable structured concurrency, like\nasyncio.TaskGroup\nand asyncio.timeout()\n,\nare implemented using cancellation internally and might misbehave if\na coroutine swallows asyncio.CancelledError\n. Similarly, user code\nshould not generally call uncancel\n.\nHowever, in cases when suppressing asyncio.CancelledError\nis\ntruly desired, it is necessary to also call uncancel()\nto completely\nremove the cancellation state.\nTask Groups\u00b6\nTask groups combine a task creation API with a convenient and reliable way to wait for all tasks in the group to finish.\n- class asyncio.TaskGroup\u00b6\nAn asynchronous context manager holding a group of tasks. Tasks can be added to the group using\ncreate_task()\n. All tasks are awaited when the context manager exits.Added in version 3.11.\n- create_task(coro, *, name=None, context=None, eager_start=None, **kwargs)\u00b6\nCreate a task in this task group. The signature matches that of\nasyncio.create_task()\n. If the task group is inactive (e.g. 
not yet entered, already finished, or in the process of shutting down), we will close the given coro.\nChanged in version 3.13: Close the given coroutine if the task group is not active.\nChanged in version 3.14: Passes on all kwargs to loop.create_task().\nExample:\nasync def main():\nasync with asyncio.TaskGroup() as tg:\ntask1 = tg.create_task(some_coro(...))\ntask2 = tg.create_task(another_coro(...))\nprint(f\"Both tasks have completed now: {task1.result()}, {task2.result()}\")\nThe async with statement will wait for all tasks in the group to finish.\nWhile waiting, new tasks may still be added to the group (for example, by passing tg into one of the coroutines and calling tg.create_task() in that coroutine).\nOnce the last task has finished and the async with block is exited, no new tasks may be added to the group.\nThe first time any of the tasks belonging to the group fails with an exception other than asyncio.CancelledError, the remaining tasks in the group are cancelled.\nNo further tasks can then be added to the group.\nAt this point, if the body of the async with statement is still active (i.e., __aexit__() hasn\u2019t been called yet), the task directly containing the async with statement is also cancelled.\nThe resulting asyncio.CancelledError will interrupt an await, but it will not bubble out of the containing async with statement.\nOnce all tasks have finished, if any tasks have failed with an exception other than asyncio.CancelledError, those exceptions are combined in an ExceptionGroup or BaseExceptionGroup (as appropriate; see their documentation) which is then raised.\nTwo base exceptions are treated specially:\nIf any task fails with KeyboardInterrupt or SystemExit, the task group still cancels the remaining tasks and waits for them, but then the initial KeyboardInterrupt or SystemExit is re-raised instead of ExceptionGroup or BaseExceptionGroup.\nIf the body of the async with statement exits with an
exception\n(so __aexit__()\nis called with an exception set),\nthis is treated the same as if one of the tasks failed:\nthe remaining tasks are cancelled and then waited for,\nand non-cancellation exceptions are grouped into an\nexception group and raised.\nThe exception passed into __aexit__()\n,\nunless it is asyncio.CancelledError\n,\nis also included in the exception group.\nThe same special case is made for\nKeyboardInterrupt\nand SystemExit\nas in the previous paragraph.\nTask groups are careful not to mix up the internal cancellation used to\n\u201cwake up\u201d their __aexit__()\nwith cancellation requests\nfor the task in which they are running made by other parties.\nIn particular, when one task group is syntactically nested in another,\nand both experience an exception in one of their child tasks simultaneously,\nthe inner task group will process its exceptions, and then the outer task group\nwill receive another cancellation and process its own exceptions.\nIn the case where a task group is cancelled externally and also must\nraise an ExceptionGroup\n, it will call the parent task\u2019s\ncancel()\nmethod. 
This ensures that an asyncio.CancelledError will be raised at the next await, so the cancellation is not lost.\nTask groups preserve the cancellation count reported by asyncio.Task.cancelling().\nChanged in version 3.13: Improved handling of simultaneous internal and external cancellations and correct preservation of cancellation counts.\nTerminating a Task Group\u00b6\nWhile terminating a task group is not natively supported by the standard library, termination can be achieved by adding an exception-raising task to the task group and ignoring the raised exception:\nimport asyncio\nfrom asyncio import TaskGroup\nclass TerminateTaskGroup(Exception):\n\"\"\"Exception raised to terminate a task group.\"\"\"\nasync def force_terminate_task_group():\n\"\"\"Used to force termination of a task group.\"\"\"\nraise TerminateTaskGroup()\nasync def job(task_id, sleep_time):\nprint(f'Task {task_id}: start')\nawait asyncio.sleep(sleep_time)\nprint(f'Task {task_id}: done')\nasync def main():\ntry:\nasync with TaskGroup() as group:\n# spawn some tasks\ngroup.create_task(job(1, 0.5))\ngroup.create_task(job(2, 1.5))\n# sleep for 1 second\nawait asyncio.sleep(1)\n# add an exception-raising task to force the group to terminate\ngroup.create_task(force_terminate_task_group())\nexcept* TerminateTaskGroup:\npass\nasyncio.run(main())\nExpected output:\nTask 1: start\nTask 2: start\nTask 1: done\nSleeping\u00b6\n- async asyncio.sleep(delay, result=None)\u00b6\nBlock for delay seconds.\nIf result is provided, it is returned to the caller when the coroutine completes.\nsleep() always suspends the current task, allowing other tasks to run.\nSetting the delay to 0 provides an optimized path to allow other tasks to run.
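As a sketch of that zero-delay yield, the loop below hands control back to the event loop every few iterations so a concurrent task can make progress; the function names and the chunk size of 1000 are arbitrary choices for illustration:

```python
import asyncio

async def crunch(n):
    """Sum 0..n-1, yielding to the event loop periodically."""
    total = 0
    for i in range(n):
        if i % 1000 == 0:
            # asyncio.sleep(0) suspends just long enough for any
            # other ready task to run, then resumes immediately.
            await asyncio.sleep(0)
        total += i
    return total

async def main():
    ticks = []

    async def ticker():
        # Interleaves with crunch() because crunch() yields.
        for _ in range(3):
            ticks.append("tick")
            await asyncio.sleep(0)

    total, _ = await asyncio.gather(crunch(10_000), ticker())
    return total, ticks

print(asyncio.run(main()))  # (49995000, ['tick', 'tick', 'tick'])
```

Without the `await asyncio.sleep(0)`, `ticker()` would not run until `crunch()` had finished its entire loop.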
This can be used by long-running functions to avoid blocking the event loop for the full duration of the function call.\nExample of coroutine displaying the current date every second for 5 seconds:\nimport asyncio import datetime async def display_date(): loop = asyncio.get_running_loop() end_time = loop.time() + 5.0 while True: print(datetime.datetime.now()) if (loop.time() + 1.0) >= end_time: break await asyncio.sleep(1) asyncio.run(display_date())\nChanged in version 3.10: Removed the loop parameter.\nChanged in version 3.13: Raises\nValueError\nif delay isnan\n.\nRunning Tasks Concurrently\u00b6\n- awaitable asyncio.gather(*aws, return_exceptions=False)\u00b6\nRun awaitable objects in the aws sequence concurrently.\nIf any awaitable in aws is a coroutine, it is automatically scheduled as a Task.\nIf all awaitables are completed successfully, the result is an aggregate list of returned values. The order of result values corresponds to the order of awaitables in aws.\nIf return_exceptions is\nFalse\n(default), the first raised exception is immediately propagated to the task that awaits ongather()\n. Other awaitables in the aws sequence won\u2019t be cancelled and will continue to run.If return_exceptions is\nTrue\n, exceptions are treated the same as successful results, and aggregated in the result list.If\ngather()\nis cancelled, all submitted awaitables (that have not completed yet) are also cancelled.If any Task or Future from the aws sequence is cancelled, it is treated as if it raised\nCancelledError\n\u2013 thegather()\ncall is not cancelled in this case. This is to prevent the cancellation of one submitted Task/Future to cause other Tasks/Futures to be cancelled.Note\nA new alternative to create and run tasks concurrently and wait for their completion is\nasyncio.TaskGroup\n. 
TaskGroup provides stronger safety guarantees than gather for scheduling a nesting of subtasks: if a task (or a subtask, a task scheduled by a task) raises an exception, TaskGroup will, while gather will not, cancel the remaining scheduled tasks).Example:\nimport asyncio async def factorial(name, number): f = 1 for i in range(2, number + 1): print(f\"Task {name}: Compute factorial({number}), currently i={i}...\") await asyncio.sleep(1) f *= i print(f\"Task {name}: factorial({number}) = {f}\") return f async def main(): # Schedule three calls *concurrently*: L = await asyncio.gather( factorial(\"A\", 2), factorial(\"B\", 3), factorial(\"C\", 4), ) print(L) asyncio.run(main()) # Expected output: # # Task A: Compute factorial(2), currently i=2... # Task B: Compute factorial(3), currently i=2... # Task C: Compute factorial(4), currently i=2... # Task A: factorial(2) = 2 # Task B: Compute factorial(3), currently i=3... # Task C: Compute factorial(4), currently i=3... # Task B: factorial(3) = 6 # Task C: Compute factorial(4), currently i=4... # Task C: factorial(4) = 24 # [2, 6, 24]\nNote\nIf return_exceptions is false, cancelling gather() after it has been marked done won\u2019t cancel any submitted awaitables. 
For instance, gather can be marked done after propagating an exception to the caller, therefore, calling\ngather.cancel()\nafter catching an exception (raised by one of the awaitables) from gather won\u2019t cancel any other awaitables.Changed in version 3.7: If the gather itself is cancelled, the cancellation is propagated regardless of return_exceptions.\nChanged in version 3.10: Removed the loop parameter.\nDeprecated since version 3.10: Deprecation warning is emitted if no positional arguments are provided or not all positional arguments are Future-like objects and there is no running event loop.\nEager Task Factory\u00b6\n- asyncio.eager_task_factory(loop, coro, *, name=None, context=None)\u00b6\nA task factory for eager task execution.\nWhen using this factory (via\nloop.set_task_factory(asyncio.eager_task_factory)\n), coroutines begin execution synchronously duringTask\nconstruction. Tasks are only scheduled on the event loop if they block. This can be a performance improvement as the overhead of loop scheduling is avoided for coroutines that complete synchronously.A common example where this is beneficial is coroutines which employ caching or memoization to avoid actual I/O when possible.\nNote\nImmediate execution of the coroutine is a semantic change. If the coroutine returns or raises, the task is never scheduled to the event loop. If the coroutine execution blocks, the task is scheduled to the event loop. This change may introduce behavior changes to existing applications. For example, the application\u2019s task execution order is likely to change.\nAdded in version 3.12.\n- asyncio.create_eager_task_factory(custom_task_constructor)\u00b6\nCreate an eager task factory, similar to\neager_task_factory()\n, using the provided custom_task_constructor when creating a new task instead of the defaultTask\n.custom_task_constructor must be a callable with the signature matching the signature of\nTask.__init__\n. 
The callable must return an asyncio.Task-compatible object.\nThis function returns a callable intended to be used as a task factory of an event loop via loop.set_task_factory(factory).\nAdded in version 3.12.\nShielding From Cancellation\u00b6\n- awaitable asyncio.shield(aw)\u00b6\nProtect an awaitable object from being cancelled.\nIf aw is a coroutine it is automatically scheduled as a Task.\nThe statement:\ntask = asyncio.create_task(something()) res = await shield(task)\nis equivalent to:\nres = await something()\nexcept that if the coroutine containing it is cancelled, the Task running in something() is not cancelled. From the point of view of something(), the cancellation did not happen. Its caller is still cancelled, however, so the \u201cawait\u201d expression still raises a CancelledError.\nIf something() is cancelled by other means (i.e. from within itself) that would also cancel shield().\nIf it is desired to completely ignore cancellation (not recommended) the shield() function should be combined with a try/except clause, as follows:\ntask = asyncio.create_task(something()) try: res = await shield(task) except CancelledError: res = None\nImportant\nSave a reference to tasks passed to this function, to avoid a task disappearing mid-execution. The event loop only keeps weak references to tasks. A task that isn\u2019t referenced elsewhere may get garbage collected at any time, even before it\u2019s done.\nChanged in version 3.10: Removed the loop parameter.\nDeprecated since version 3.10: Deprecation warning is emitted if aw is not a Future-like object and there is no running event loop.\nTimeouts\u00b6\n- asyncio.timeout(delay)\u00b6\nReturn an asynchronous context manager that can be used to limit the amount of time spent waiting on something.\ndelay can either be None, or a float/int number of seconds to wait.
If delay isNone\n, no time limit will be applied; this can be useful if the delay is unknown when the context manager is created.In either case, the context manager can be rescheduled after creation using\nTimeout.reschedule()\n.Example:\nasync def main(): async with asyncio.timeout(10): await long_running_task()\nIf\nlong_running_task\ntakes more than 10 seconds to complete, the context manager will cancel the current task and handle the resultingasyncio.CancelledError\ninternally, transforming it into aTimeoutError\nwhich can be caught and handled.Note\nThe\nasyncio.timeout()\ncontext manager is what transforms theasyncio.CancelledError\ninto aTimeoutError\n, which means theTimeoutError\ncan only be caught outside of the context manager.Example of catching\nTimeoutError\n:async def main(): try: async with asyncio.timeout(10): await long_running_task() except TimeoutError: print(\"The long operation timed out, but we've handled it.\") print(\"This statement will run regardless.\")\nThe context manager produced by\nasyncio.timeout()\ncan be rescheduled to a different deadline and inspected.- class asyncio.Timeout(when)\u00b6\nAn asynchronous context manager for cancelling overdue coroutines.\nPrefer using\nasyncio.timeout()\norasyncio.timeout_at()\nrather than instantiatingTimeout\ndirectly.when\nshould be an absolute time at which the context should time out, as measured by the event loop\u2019s clock:If\nwhen\nisNone\n, the timeout will never trigger.If\nwhen < loop.time()\n, the timeout will trigger on the next iteration of the event loop.\nExample:\nasync def main(): try: # We do not know the timeout when starting, so we pass ``None``. async with asyncio.timeout(None) as cm: # We know the timeout now, so we reschedule it. 
new_deadline = get_running_loop().time() + 10 cm.reschedule(new_deadline) await long_running_task() except TimeoutError: pass if cm.expired(): print(\"Looks like we haven't finished on time.\")\nTimeout context managers can be safely nested.\nAdded in version 3.11.\n- asyncio.timeout_at(when)\u00b6\nSimilar to\nasyncio.timeout()\n, except when is the absolute time to stop waiting, orNone\n.Example:\nasync def main(): loop = get_running_loop() deadline = loop.time() + 20 try: async with asyncio.timeout_at(deadline): await long_running_task() except TimeoutError: print(\"The long operation timed out, but we've handled it.\") print(\"This statement will run regardless.\")\nAdded in version 3.11.\n- async asyncio.wait_for(aw, timeout)\u00b6\nWait for the aw awaitable to complete with a timeout.\nIf aw is a coroutine it is automatically scheduled as a Task.\ntimeout can either be\nNone\nor a float or int number of seconds to wait for. If timeout isNone\n, block until the future completes.If a timeout occurs, it cancels the task and raises\nTimeoutError\n.To avoid the task\ncancellation\n, wrap it inshield()\n.The function will wait until the future is actually cancelled, so the total wait time may exceed the timeout. If an exception happens during cancellation, it is propagated.\nIf the wait is cancelled, the future aw is also cancelled.\nExample:\nasync def eternity(): # Sleep for one hour await asyncio.sleep(3600) print('yay!') async def main(): # Wait for at most 1 second try: await asyncio.wait_for(eternity(), timeout=1.0) except TimeoutError: print('timeout!') asyncio.run(main()) # Expected output: # # timeout!\nChanged in version 3.7: When aw is cancelled due to a timeout,\nwait_for\nwaits for aw to be cancelled. 
Previously, it raisedTimeoutError\nimmediately.Changed in version 3.10: Removed the loop parameter.\nChanged in version 3.11: Raises\nTimeoutError\ninstead ofasyncio.TimeoutError\n.\nWaiting Primitives\u00b6\n- async asyncio.wait(aws, *, timeout=None, return_when=ALL_COMPLETED)\u00b6\nRun\nFuture\nandTask\ninstances in the aws iterable concurrently and block until the condition specified by return_when.The aws iterable must not be empty.\nReturns two sets of Tasks/Futures:\n(done, pending)\n.Usage:\ndone, pending = await asyncio.wait(aws)\ntimeout (a float or int), if specified, can be used to control the maximum number of seconds to wait before returning.\nNote that this function does not raise\nTimeoutError\n. Futures or Tasks that aren\u2019t done when the timeout occurs are simply returned in the second set.return_when indicates when this function should return. It must be one of the following constants:\nConstant\nDescription\n- asyncio.FIRST_COMPLETED\u00b6\nThe function will return when any future finishes or is cancelled.\n- asyncio.FIRST_EXCEPTION\u00b6\nThe function will return when any future finishes by raising an exception. If no future raises an exception then it is equivalent to\nALL_COMPLETED\n.- asyncio.ALL_COMPLETED\u00b6\nThe function will return when all futures finish or are cancelled.\nUnlike\nwait_for()\n,wait()\ndoes not cancel the futures when a timeout occurs.Changed in version 3.10: Removed the loop parameter.\nChanged in version 3.11: Passing coroutine objects to\nwait()\ndirectly is forbidden.Changed in version 3.12: Added support for generators yielding tasks.\n- asyncio.as_completed(aws, *, timeout=None)\u00b6\nRun awaitable objects in the aws iterable concurrently. The returned object can be iterated to obtain the results of the awaitables as they finish.\nThe object returned by\nas_completed()\ncan be iterated as an asynchronous iterator or a plain iterator. 
When asynchronous iteration is used, the originally-supplied awaitables are yielded if they are tasks or futures. This makes it easy to correlate previously-scheduled tasks with their results. Example:ipv4_connect = create_task(open_connection(\"127.0.0.1\", 80)) ipv6_connect = create_task(open_connection(\"::1\", 80)) tasks = [ipv4_connect, ipv6_connect] async for earliest_connect in as_completed(tasks): # earliest_connect is done. The result can be obtained by # awaiting it or calling earliest_connect.result() reader, writer = await earliest_connect if earliest_connect is ipv6_connect: print(\"IPv6 connection established.\") else: print(\"IPv4 connection established.\")\nDuring asynchronous iteration, implicitly-created tasks will be yielded for supplied awaitables that aren\u2019t tasks or futures.\nWhen used as a plain iterator, each iteration yields a new coroutine that returns the result or raises the exception of the next completed awaitable. This pattern is compatible with Python versions older than 3.13:\nipv4_connect = create_task(open_connection(\"127.0.0.1\", 80)) ipv6_connect = create_task(open_connection(\"::1\", 80)) tasks = [ipv4_connect, ipv6_connect] for next_connect in as_completed(tasks): # next_connect is not one of the original task objects. It must be # awaited to obtain the result value or raise the exception of the # awaitable that finishes next. reader, writer = await next_connect\nA\nTimeoutError\nis raised if the timeout occurs before all awaitables are done. 
This is raised by theasync for\nloop during asynchronous iteration or by the coroutines yielded during plain iteration.Changed in version 3.10: Removed the loop parameter.\nDeprecated since version 3.10: Deprecation warning is emitted if not all awaitable objects in the aws iterable are Future-like objects and there is no running event loop.\nChanged in version 3.12: Added support for generators yielding tasks.\nChanged in version 3.13: The result can now be used as either an asynchronous iterator or as a plain iterator (previously it was only a plain iterator).\nRunning in Threads\u00b6\n- async asyncio.to_thread(func, /, *args, **kwargs)\u00b6\nAsynchronously run function func in a separate thread.\nAny *args and **kwargs supplied for this function are directly passed to func. Also, the current\ncontextvars.Context\nis propagated, allowing context variables from the event loop thread to be accessed in the separate thread.Return a coroutine that can be awaited to get the eventual result of func.\nThis coroutine function is primarily intended to be used for executing IO-bound functions/methods that would otherwise block the event loop if they were run in the main thread. For example:\ndef blocking_io(): print(f\"start blocking_io at {time.strftime('%X')}\") # Note that time.sleep() can be replaced with any blocking # IO-bound operation, such as file operations. time.sleep(1) print(f\"blocking_io complete at {time.strftime('%X')}\") async def main(): print(f\"started main at {time.strftime('%X')}\") await asyncio.gather( asyncio.to_thread(blocking_io), asyncio.sleep(1)) print(f\"finished main at {time.strftime('%X')}\") asyncio.run(main()) # Expected output: # # started main at 19:50:53 # start blocking_io at 19:50:53 # blocking_io complete at 19:50:54 # finished main at 19:50:54\nDirectly calling\nblocking_io()\nin any coroutine would block the event loop for its duration, resulting in an additional 1 second of run time. 
Instead, by usingasyncio.to_thread()\n, we can run it in a separate thread without blocking the event loop.Note\nDue to the GIL,\nasyncio.to_thread()\ncan typically only be used to make IO-bound functions non-blocking. However, for extension modules that release the GIL or alternative Python implementations that don\u2019t have one,asyncio.to_thread()\ncan also be used for CPU-bound functions.Added in version 3.9.\nScheduling From Other Threads\u00b6\n- asyncio.run_coroutine_threadsafe(coro, loop)\u00b6\nSubmit a coroutine to the given event loop. Thread-safe.\nReturn a\nconcurrent.futures.Future\nto wait for the result from another OS thread.This function is meant to be called from a different OS thread than the one where the event loop is running. Example:\ndef in_thread(loop: asyncio.AbstractEventLoop) -> None: # Run some blocking IO pathlib.Path(\"example.txt\").write_text(\"hello world\", encoding=\"utf8\") # Create a coroutine coro = asyncio.sleep(1, result=3) # Submit the coroutine to a given loop future = asyncio.run_coroutine_threadsafe(coro, loop) # Wait for the result with an optional timeout argument assert future.result(timeout=2) == 3 async def amain() -> None: # Get the running loop loop = asyncio.get_running_loop() # Run something in a thread await asyncio.to_thread(in_thread, loop)\nIt\u2019s also possible to run the other way around. 
Example:\n@contextlib.contextmanager def loop_in_thread() -> Generator[asyncio.AbstractEventLoop]: loop_fut = concurrent.futures.Future[asyncio.AbstractEventLoop]() stop_event = asyncio.Event() async def main() -> None: loop_fut.set_result(asyncio.get_running_loop()) await stop_event.wait() with concurrent.futures.ThreadPoolExecutor(1) as tpe: complete_fut = tpe.submit(asyncio.run, main()) for fut in concurrent.futures.as_completed((loop_fut, complete_fut)): if fut is loop_fut: loop = loop_fut.result() try: yield loop finally: loop.call_soon_threadsafe(stop_event.set) else: fut.result() # Create a loop in another thread with loop_in_thread() as loop: # Create a coroutine coro = asyncio.sleep(1, result=3) # Submit the coroutine to a given loop future = asyncio.run_coroutine_threadsafe(coro, loop) # Wait for the result with an optional timeout argument assert future.result(timeout=2) == 3\nIf an exception is raised in the coroutine, the returned Future will be notified. It can also be used to cancel the task in the event loop:\ntry: result = future.result(timeout) except TimeoutError: print('The coroutine took too long, cancelling the task...') future.cancel() except Exception as exc: print(f'The coroutine raised an exception: {exc!r}') else: print(f'The coroutine returned: {result!r}')\nSee the concurrency and multithreading section of the documentation.\nUnlike other asyncio functions this function requires the loop argument to be passed explicitly.\nAdded in version 3.5.1.\nIntrospection\u00b6\n- asyncio.current_task(loop=None)\u00b6\nReturn the currently running\nTask\ninstance, orNone\nif no task is running.If loop is\nNone\nget_running_loop()\nis used to get the current loop.Added in version 3.7.\n- asyncio.all_tasks(loop=None)\u00b6\nReturn a set of not yet finished\nTask\nobjects run by the loop.If loop is\nNone\n,get_running_loop()\nis used for getting current loop.Added in version 3.7.\n- asyncio.iscoroutine(obj)\u00b6\nReturn\nTrue\nif obj is a coroutine 
object.Added in version 3.4.\nTask Object\u00b6\n- class asyncio.Task(coro, *, loop=None, name=None, context=None, eager_start=False)\u00b6\nA\nFuture-like\nobject that runs a Python coroutine. Not thread-safe.Tasks are used to run coroutines in event loops. If a coroutine awaits on a Future, the Task suspends the execution of the coroutine and waits for the completion of the Future. When the Future is done, the execution of the wrapped coroutine resumes.\nEvent loops use cooperative scheduling: an event loop runs one Task at a time. While a Task awaits for the completion of a Future, the event loop runs other Tasks, callbacks, or performs IO operations.\nUse the high-level\nasyncio.create_task()\nfunction to create Tasks, or the low-levelloop.create_task()\norensure_future()\nfunctions. Manual instantiation of Tasks is discouraged.To cancel a running Task use the\ncancel()\nmethod. Calling it will cause the Task to throw aCancelledError\nexception into the wrapped coroutine. If a coroutine is awaiting on a Future object during cancellation, the Future object will be cancelled.cancelled()\ncan be used to check if the Task was cancelled. The method returnsTrue\nif the wrapped coroutine did not suppress theCancelledError\nexception and was actually cancelled.asyncio.Task\ninherits fromFuture\nall of its APIs exceptFuture.set_result()\nandFuture.set_exception()\n.An optional keyword-only context argument allows specifying a custom\ncontextvars.Context\nfor the coro to run in. If no context is provided, the Task copies the current context and later runs its coroutine in the copied context.An optional keyword-only eager_start argument allows eagerly starting the execution of the\nasyncio.Task\nat task creation time. If set toTrue\nand the event loop is running, the task will start executing the coroutine immediately, until the first time the coroutine blocks. 
If the coroutine returns or raises without blocking, the task will be finished eagerly and will skip scheduling to the event loop.Changed in version 3.7: Added support for the\ncontextvars\nmodule.Changed in version 3.8: Added the name parameter.\nDeprecated since version 3.10: Deprecation warning is emitted if loop is not specified and there is no running event loop.\nChanged in version 3.11: Added the context parameter.\nChanged in version 3.12: Added the eager_start parameter.\n- done()\u00b6\nReturn\nTrue\nif the Task is done.A Task is done when the wrapped coroutine either returned a value, raised an exception, or the Task was cancelled.\n- result()\u00b6\nReturn the result of the Task.\nIf the Task is done, the result of the wrapped coroutine is returned (or if the coroutine raised an exception, that exception is re-raised.)\nIf the Task has been cancelled, this method raises a\nCancelledError\nexception.If the Task\u2019s result isn\u2019t yet available, this method raises an\nInvalidStateError\nexception.\n- exception()\u00b6\nReturn the exception of the Task.\nIf the wrapped coroutine raised an exception that exception is returned. 
If the wrapped coroutine returned normally this method returns\nNone\n.If the Task has been cancelled, this method raises a\nCancelledError\nexception.If the Task isn\u2019t done yet, this method raises an\nInvalidStateError\nexception.\n- add_done_callback(callback, *, context=None)\u00b6\nAdd a callback to be run when the Task is done.\nThis method should only be used in low-level callback-based code.\nSee the documentation of\nFuture.add_done_callback()\nfor more details.\n- remove_done_callback(callback)\u00b6\nRemove callback from the callbacks list.\nThis method should only be used in low-level callback-based code.\nSee the documentation of\nFuture.remove_done_callback()\nfor more details.\n- get_stack(*, limit=None)\u00b6\nReturn the list of stack frames for this Task.\nIf the wrapped coroutine is not done, this returns the stack where it is suspended. If the coroutine has completed successfully or was cancelled, this returns an empty list. If the coroutine was terminated by an exception, this returns the list of traceback frames.\nThe frames are always ordered from oldest to newest.\nOnly one stack frame is returned for a suspended coroutine.\nThe optional limit argument sets the maximum number of frames to return; by default all available frames are returned. The ordering of the returned list differs depending on whether a stack or a traceback is returned: the newest frames of a stack are returned, but the oldest frames of a traceback are returned. 
(This matches the behavior of the traceback module.)\n- print_stack(*, limit=None, file=None)\u00b6\nPrint the stack or traceback for this Task.\nThis produces output similar to that of the traceback module for the frames retrieved by\nget_stack()\n.The limit argument is passed to\nget_stack()\ndirectly.The file argument is an I/O stream to which the output is written; by default output is written to\nsys.stdout\n.\n- get_coro()\u00b6\nReturn the coroutine object wrapped by the\nTask\n.Note\nThis will return\nNone\nfor Tasks which have already completed eagerly. See the Eager Task Factory.Added in version 3.8.\nChanged in version 3.12: Newly added eager task execution means result may be\nNone\n.\n- get_context()\u00b6\nReturn the\ncontextvars.Context\nobject associated with the task.Added in version 3.12.\n- get_name()\u00b6\nReturn the name of the Task.\nIf no name has been explicitly assigned to the Task, the default asyncio Task implementation generates a default name during instantiation.\nAdded in version 3.8.\n- set_name(value)\u00b6\nSet the name of the Task.\nThe value argument can be any object, which is then converted to a string.\nIn the default Task implementation, the name will be visible in the\nrepr()\noutput of a task object.Added in version 3.8.\n- cancel(msg=None)\u00b6\nRequest the Task to be cancelled.\nIf the Task is already done or cancelled, return\nFalse\n, otherwise, returnTrue\n.The method arranges for a\nCancelledError\nexception to be thrown into the wrapped coroutine on the next cycle of the event loop.The coroutine then has a chance to clean up or even deny the request by suppressing the exception with a\ntry\n\u2026 \u2026except CancelledError\n\u2026finally\nblock. Therefore, unlikeFuture.cancel()\n,Task.cancel()\ndoes not guarantee that the Task will be cancelled, although suppressing cancellation completely is not common and is actively discouraged. 
Should the coroutine nevertheless decide to suppress the cancellation, it needs to callTask.uncancel()\nin addition to catching the exception.Changed in version 3.9: Added the msg parameter.\nChanged in version 3.11: The\nmsg\nparameter is propagated from cancelled task to its awaiter.The following example illustrates how coroutines can intercept the cancellation request:\nasync def cancel_me():\n    print('cancel_me(): before sleep')\n\n    try:\n        # Wait for 1 hour\n        await asyncio.sleep(3600)\n    except asyncio.CancelledError:\n        print('cancel_me(): cancel sleep')\n        raise\n    finally:\n        print('cancel_me(): after sleep')\n\nasync def main():\n    # Create a \"cancel_me\" Task\n    task = asyncio.create_task(cancel_me())\n\n    # Wait for 1 second\n    await asyncio.sleep(1)\n\n    task.cancel()\n    try:\n        await task\n    except asyncio.CancelledError:\n        print(\"main(): cancel_me is cancelled now\")\n\nasyncio.run(main())\n\n# Expected output:\n#\n#     cancel_me(): before sleep\n#     cancel_me(): cancel sleep\n#     cancel_me(): after sleep\n#     main(): cancel_me is cancelled now\n- cancelled()\u00b6\nReturn\nTrue\nif the Task is cancelled.The Task is cancelled when the cancellation was requested with\ncancel()\nand the wrapped coroutine propagated the\nCancelledError\nexception thrown into it.\n- uncancel()\u00b6\nDecrement the count of cancellation requests to this Task.\nReturns the remaining number of cancellation requests.\nNote that once execution of a cancelled task completed, further calls to\nuncancel()\nare ineffective.Added in version 3.11.\nThis method is used by asyncio\u2019s internals and isn\u2019t expected to be used by end-user code. In particular, if a Task gets successfully uncancelled, this allows for elements of structured concurrency like Task Groups and\nasyncio.timeout()\nto continue running, isolating cancellation to the respective structured block. 
For example:\nasync def make_request_with_timeout():\n    try:\n        async with asyncio.timeout(1):\n            # Structured block affected by the timeout:\n            await make_request()\n            await make_another_request()\n    except TimeoutError:\n        log(\"There was a timeout\")\n    # Outer code not affected by the timeout:\n    await unrelated_code()\nWhile the block with\nmake_request()\nandmake_another_request()\nmight get cancelled due to the timeout,unrelated_code()\nshould continue running even in case of the timeout. This is implemented withuncancel()\n.TaskGroup\ncontext managers useuncancel()\nin a similar fashion.If end-user code is, for some reason, suppressing cancellation by catching\nCancelledError\n, it needs to call this method to remove the cancellation state.When this method decrements the cancellation count to zero, the method checks if a previous\ncancel()\ncall had arranged forCancelledError\nto be thrown into the task. If it hasn\u2019t been thrown yet, that arrangement will be rescinded (by resetting the internal_must_cancel\nflag).\nChanged in version 3.13: Changed to rescind pending cancellation requests upon reaching zero.\n- cancelling()\u00b6\nReturn the number of pending cancellation requests to this Task, i.e., the number of calls to\ncancel()\nless the number ofuncancel()\ncalls.Note that if this number is greater than zero but the Task is still executing,\ncancelled()\nwill still returnFalse\n. This is because this number can be lowered by callinguncancel()\n, which can lead to the task not being cancelled after all if the cancellation requests go down to zero.This method is used by asyncio\u2019s internals and isn\u2019t expected to be used by end-user code. 
See\nuncancel()\nfor more details.Added in version 3.11.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 10258} +{"url": "https://docs.python.org/3/c-api/gen.html", "title": "Generator Objects", "content": "Generator Objects\u00b6\nGenerator objects are what Python uses to implement generator iterators. 
They\nare normally created by iterating over a function that yields values, rather\nthan explicitly calling PyGen_New()\nor PyGen_NewWithQualName()\n.\n-\ntype PyGenObject\u00b6\nThe C structure used for generator objects.\n-\nPyTypeObject PyGen_Type\u00b6\nThe type object corresponding to generator objects.\n-\nint PyGen_Check(PyObject *ob)\u00b6\nReturn true if ob is a generator object; ob must not be\nNULL\n. This function always succeeds.\n-\nint PyGen_CheckExact(PyObject *ob)\u00b6\nReturn true if ob\u2019s type is\nPyGen_Type\n; ob must not beNULL\n. This function always succeeds.\n-\nPyObject *PyGen_New(PyFrameObject *frame)\u00b6\n- Return value: New reference.\nCreate and return a new generator object based on the frame object. A reference to frame is stolen by this function. The argument must not be\nNULL\n.\n-\nPyObject *PyGen_NewWithQualName(PyFrameObject *frame, PyObject *name, PyObject *qualname)\u00b6\n- Return value: New reference.\nCreate and return a new generator object based on the frame object, with\n__name__\nand__qualname__\nset to name and qualname. A reference to frame is stolen by this function. The frame argument must not beNULL\n.\n-\nPyCodeObject *PyGen_GetCode(PyGenObject *gen)\u00b6\nReturn a new strong reference to the code object wrapped by gen. This function always succeeds.\nAsynchronous Generator Objects\u00b6\nSee also\n-\nPyTypeObject PyAsyncGen_Type\u00b6\nThe type object corresponding to asynchronous generator objects. This is available as\ntypes.AsyncGeneratorType\nin the Python layer.Added in version 3.6.\n-\nPyObject *PyAsyncGen_New(PyFrameObject *frame, PyObject *name, PyObject *qualname)\u00b6\nCreate a new asynchronous generator wrapping frame, with\n__name__\nand__qualname__\nset to name and qualname. frame is stolen by this function and must not beNULL\n.On success, this function returns a strong reference to the new asynchronous generator. 
On failure, this function returns\nNULL\nwith an exception set.Added in version 3.6.\nDeprecated API\u00b6\n-\nPyAsyncGenASend_CheckExact(op)\u00b6\nThis is a soft deprecated API that was included in Python\u2019s C API by mistake.\nIt is solely here for completeness; do not use this API.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 543} +{"url": "https://docs.python.org/3/library/spwd.html", "title": " \u2014 The shadow password database", "content": "spwd\n\u2014 The shadow password database\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nA possible replacement is the third-party library python-pam. This library is not supported or maintained by the Python core team.\nThe last version of Python that provided the spwd\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 116} +{"url": "https://docs.python.org/3/whatsnew/3.1.html", "title": "What\u2019s New In Python 3.1", "content": "What\u2019s New In Python 3.1\u00b6\n- Author:\nRaymond Hettinger\nThis article explains the new features in Python 3.1, compared to 3.0. Python 3.1 was released on June 27, 2009.\nPEP 372: Ordered Dictionaries\u00b6\nRegular Python dictionaries iterate over key/value pairs in arbitrary order.\nOver the years, a number of authors have written alternative implementations\nthat remember the order that the keys were originally inserted. Based on\nthe experiences from those implementations, a new\ncollections.OrderedDict\nclass has been introduced.\nThe OrderedDict API is substantially the same as regular dictionaries but will iterate over keys and values in a guaranteed order depending on when a key was first inserted. If a new entry overwrites an existing entry, the original insertion position is left unchanged. 
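The positional guarantees just described are easy to check interactively; a small sketch (od is just an illustrative name):

```python
from collections import OrderedDict

od = OrderedDict([('a', 1), ('b', 2), ('c', 3)])
od['b'] = 20              # overwriting an entry keeps its original position
print(list(od))           # ['a', 'b', 'c']

del od['b']               # deleting an entry and reinserting it...
od['b'] = 20              # ...moves the key to the end
print(list(od))           # ['a', 'c', 'b']
```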
Deleting an entry and reinserting it will move it to the end.\nThe standard library now supports use of ordered dictionaries in several\nmodules. The configparser\nmodule uses them by default. This lets\nconfiguration files be read, modified, and then written back in their original\norder. The _asdict() method for collections.namedtuple()\nnow\nreturns an ordered dictionary with the values appearing in the same order as\nthe underlying tuple indices. The json\nmodule is being built-out with\nan object_pairs_hook to allow OrderedDicts to be built by the decoder.\nSupport was also added for third-party tools like PyYAML.\nSee also\n- PEP 372 - Ordered Dictionaries\nPEP written by Armin Ronacher and Raymond Hettinger. Implementation written by Raymond Hettinger.\nSince an ordered dictionary remembers its insertion order, it can be used in conjunction with sorting to make a sorted dictionary:\n>>> # regular unsorted dictionary\n>>> d = {'banana': 3, 'apple':4, 'pear': 1, 'orange': 2}\n>>> # dictionary sorted by key\n>>> OrderedDict(sorted(d.items(), key=lambda t: t[0]))\nOrderedDict([('apple', 4), ('banana', 3), ('orange', 2), ('pear', 1)])\n>>> # dictionary sorted by value\n>>> OrderedDict(sorted(d.items(), key=lambda t: t[1]))\nOrderedDict([('pear', 1), ('orange', 2), ('banana', 3), ('apple', 4)])\n>>> # dictionary sorted by length of the key string\n>>> OrderedDict(sorted(d.items(), key=lambda t: len(t[0])))\nOrderedDict([('pear', 1), ('apple', 4), ('orange', 2), ('banana', 3)])\nThe new sorted dictionaries maintain their sort order when entries are deleted. But when new keys are added, the keys are appended to the end and the sort is not maintained.\nPEP 378: Format Specifier for Thousands Separator\u00b6\nThe built-in format()\nfunction and the str.format()\nmethod use\na mini-language that now includes a simple, non-locale aware way to format\na number with a thousands separator. 
That provides a way to humanize a\nprogram\u2019s output, improving its professional appearance and readability:\n>>> format(1234567, ',d')\n'1,234,567'\n>>> format(1234567.89, ',.2f')\n'1,234,567.89'\n>>> format(12345.6 + 8901234.12j, ',f')\n'12,345.600000+8,901,234.120000j'\n>>> format(Decimal('1234567.89'), ',f')\n'1,234,567.89'\nThe supported types are int\n, float\n, complex\nand decimal.Decimal\n.\nDiscussions are underway about how to specify alternative separators like dots, spaces, apostrophes, or underscores. Locale-aware applications should use the existing n format specifier which already has some support for thousands separators.\nSee also\n- PEP 378 - Format Specifier for Thousands Separator\nPEP written by Raymond Hettinger and implemented by Eric Smith and Mark Dickinson.\nOther Language Changes\u00b6\nSome smaller changes made to the core Python language are:\nDirectories and zip archives containing a\n__main__.py\nfile can now be executed directly by passing their name to the interpreter. The directory/zipfile is automatically inserted as the first entry in sys.path. (Suggestion and initial patch by Andy Chu; revised patch by Phillip J. Eby and Nick Coghlan; bpo-1739468.)The\nint()\ntype gained abit_length\nmethod that returns the number of bits necessary to represent its argument in binary:\n>>> n = 37\n>>> bin(37)\n'0b100101'\n>>> n.bit_length()\n6\n>>> n = 2**123-1\n>>> n.bit_length()\n123\n>>> (n+1).bit_length()\n124\n(Contributed by Fredrik Johansson, Victor Stinner, Raymond Hettinger, and Mark Dickinson; bpo-3439.)\nThe fields in\nformat()\nstrings can now be automatically numbered:\n>>> 'Sir {} of {}'.format('Gallahad', 'Camelot')\n'Sir Gallahad of Camelot'\nFormerly, the string would have required numbered fields such as:\n'Sir {0} of {1}'\n.(Contributed by Eric Smith; bpo-5237.)\nThe\nstring.maketrans()\nfunction is deprecated and is replaced by new static methods,bytes.maketrans()\nandbytearray.maketrans()\n. 
This change solves the confusion around which types were supported by thestring\nmodule. Now,str\n,bytes\n, andbytearray\neach have their own maketrans and translate methods with intermediate translation tables of the appropriate type.(Contributed by Georg Brandl; bpo-5675.)\nThe syntax of the\nwith\nstatement now allows multiple context managers in a single statement:\n>>> with open('mylog.txt') as infile, open('a.out', 'w') as outfile:\n...     for line in infile:\n...         if '<critical>' in line:\n...             outfile.write(line)\nWith the new syntax, the\ncontextlib.nested()\nfunction is no longer needed and is now deprecated.(Contributed by Georg Brandl and Mattias Br\u00e4ndstr\u00f6m; appspot issue 53094.)\nround(x, n)\nnow returns an integer if x is an integer. Previously it returned a float:\n>>> round(1123, -2)\n1100\n(Contributed by Mark Dickinson; bpo-4707.)\nPython now uses David Gay\u2019s algorithm for finding the shortest floating-point representation that doesn\u2019t change its value. This should help mitigate some of the confusion surrounding binary floating-point numbers.\nThe significance is easily seen with a number like\n1.1\nwhich does not have an exact equivalent in binary floating point. Since there is no exact equivalent, an expression likefloat('1.1')\nevaluates to the nearest representable value which is0x1.199999999999ap+0\nin hex or1.100000000000000088817841970012523233890533447265625\nin decimal. That nearest value was and still is used in subsequent floating-point calculations.What is new is how the number gets displayed. Formerly, Python used a simple approach. The value of\nrepr(1.1)\nwas computed asformat(1.1, '.17g')\nwhich evaluated to'1.1000000000000001'\n. The advantage of using 17 digits was that it relied on IEEE-754 guarantees to assure that\neval(repr(1.1))\nwould round-trip exactly to its original value. 
The disadvantage is that many people found the output to be confusing (mistaking intrinsic limitations of binary floating-point representation as being a problem with Python itself).The new algorithm for\nrepr(1.1)\nis smarter and returns'1.1'\n. Effectively, it searches all equivalent string representations (ones that get stored with the same underlying float value) and returns the shortest representation.The new algorithm tends to emit cleaner representations when possible, but it does not change the underlying values. So, it is still the case that\n1.1 + 2.2 != 3.3\neven though the representations may suggest otherwise.The new algorithm depends on certain features in the underlying floating-point implementation. If the required features are not found, the old algorithm will continue to be used. Also, the text pickle protocols assure cross-platform portability by using the old algorithm.\n(Contributed by Eric Smith and Mark Dickinson; bpo-1580)\nNew, Improved, and Deprecated Modules\u00b6\nAdded a\ncollections.Counter\nclass to support convenient counting of unique items in a sequence or iterable:\n>>> Counter(['red', 'blue', 'red', 'green', 'blue', 'blue'])\nCounter({'blue': 3, 'red': 2, 'green': 1})\n(Contributed by Raymond Hettinger; bpo-1696199.)\nAdded a new module,\ntkinter.ttk\nfor access to the Tk themed widget set. The basic idea of ttk is to separate, to the extent possible, the code implementing a widget\u2019s behavior from the code implementing its appearance.(Contributed by Guilherme Polo; bpo-2983.)\nThe\ngzip.GzipFile\nandbz2.BZ2File\nclasses now support the context management protocol:\n>>> # Automatically close file after writing\n>>> with gzip.GzipFile(filename, \"wb\") as f:\n...     f.write(b\"xxx\")\n(Contributed by Antoine Pitrou.)\nThe\ndecimal\nmodule now supports methods for creating a decimal object from a binaryfloat\n. 
The conversion is exact but can sometimes be surprising:\n>>> Decimal.from_float(1.1)\nDecimal('1.100000000000000088817841970012523233890533447265625')\nThe long decimal result shows the actual binary fraction being stored for 1.1. The fraction has many digits because 1.1 cannot be exactly represented in binary.\n(Contributed by Raymond Hettinger and Mark Dickinson.)\nThe\nitertools\nmodule grew two new functions. Theitertools.combinations_with_replacement()\nfunction is one of four for generating combinatorics including permutations and Cartesian products. Theitertools.compress()\nfunction mimics its namesake from APL. Also, the existingitertools.count()\nfunction now has an optional step argument and can accept any type of counting sequence includingfractions.Fraction\nanddecimal.Decimal\n:\n>>> [p+q for p,q in combinations_with_replacement('LOVE', 2)]\n['LL', 'LO', 'LV', 'LE', 'OO', 'OV', 'OE', 'VV', 'VE', 'EE']\n>>> list(compress(data=range(10), selectors=[0,0,1,1,0,1,0,1,0,0]))\n[2, 3, 5, 7]\n>>> c = count(start=Fraction(1,2), step=Fraction(1,6))\n>>> [next(c), next(c), next(c), next(c)]\n[Fraction(1, 2), Fraction(2, 3), Fraction(5, 6), Fraction(1, 1)]\n(Contributed by Raymond Hettinger.)\ncollections.namedtuple()\nnow supports a keyword argument rename which lets invalid fieldnames be automatically converted to positional names in the form _0, _1, etc. 
This is useful when the field names are being created by an external source such as a CSV header, SQL field list, or user input:\n>>> query = input()\nSELECT region, dept, count(*) FROM main GROUP BY region, dept\n>>> cursor.execute(query)\n>>> query_fields = [desc[0] for desc in cursor.description]\n>>> UserQuery = namedtuple('UserQuery', query_fields, rename=True)\n>>> pprint.pprint([UserQuery(*row) for row in cursor])\n[UserQuery(region='South', dept='Shipping', _2=185),\n UserQuery(region='North', dept='Accounting', _2=37),\n UserQuery(region='West', dept='Sales', _2=419)]\n(Contributed by Raymond Hettinger; bpo-1818.)\nThe\nre.sub()\n,re.subn()\nandre.split()\nfunctions now accept a flags parameter.(Contributed by Gregory Smith.)\nThe\nlogging\nmodule now implements a simplelogging.NullHandler\nclass for applications that are not using logging but are calling library code that does. Setting-up a null handler will suppress spurious warnings such as \u201cNo handlers could be found for logger foo\u201d:\n>>> h = logging.NullHandler()\n>>> logging.getLogger(\"foo\").addHandler(h)\n(Contributed by Vinay Sajip; bpo-4384).\nThe\nrunpy\nmodule which supports the-m\ncommand line switch now supports the execution of packages by looking for and executing a\n__main__\nsubmodule when a package name is supplied.(Contributed by Andi Vajda; bpo-4195.)\nThe\npdb\nmodule can now access and display source code loaded via\nzipimport\n(or any other conformant PEP 302 loader).(Contributed by Alexander Belopolsky; bpo-4201.)\nfunctools.partial\nobjects can now be pickled.\n(Suggested by Antoine Pitrou and Jesse Noller. Implemented by Jack Diederich; bpo-5228.)\nAdd\npydoc\nhelp topics for symbols so thathelp('@')\nworks as expected in the interactive environment.(Contributed by David Laban; bpo-4739.)\nThe\nunittest\nmodule now supports skipping individual tests or classes of tests. 
And it supports marking a test as an expected failure, a test that is known to be broken, but shouldn\u2019t be counted as a failure on a TestResult:\nclass TestGizmo(unittest.TestCase):\n\n    @unittest.skipUnless(sys.platform.startswith(\"win\"), \"requires Windows\")\n    def test_gizmo_on_windows(self):\n        ...\n\n    @unittest.expectedFailure\n    def test_gizmo_without_required_library(self):\n        ...\nAlso, tests for exceptions have been built out to work with context managers using the\nwith\nstatement:\ndef test_division_by_zero(self):\n    with self.assertRaises(ZeroDivisionError):\n        x / 0\nIn addition, several new assertion methods were added including\nassertSetEqual()\n,assertDictEqual()\n,assertDictContainsSubset()\n,assertListEqual()\n,assertTupleEqual()\n,assertSequenceEqual()\n,assertRaisesRegexp()\n,assertIsNone()\n, andassertIsNotNone()\n.(Contributed by Benjamin Peterson and Antoine Pitrou.)\nThe\nio\nmodule has three new constants for theseek()\nmethod:SEEK_SET\n,SEEK_CUR\n, andSEEK_END\n.The\nsys.version_info\ntuple is now a named tuple:\n>>> sys.version_info\nsys.version_info(major=3, minor=1, micro=0, releaselevel='alpha', serial=2)\n(Contributed by Ross Light; bpo-4285.)\nThe\nnntplib\nandimaplib\nmodules now support IPv6.The\npickle\nmodule has been adapted for better interoperability with Python 2.x when used with protocol 2 or lower. The reorganization of the standard library changed the formal reference for many objects. For example,__builtin__.set\nin Python 2 is calledbuiltins.set\nin Python 3. This change confounded efforts to share data between different versions of Python. But now when protocol 2 or lower is selected, the pickler will automatically use the old Python 2 names for both loading and dumping. This remapping is turned-on by default but can be disabled with the fix_imports option:\n>>> s = {1, 2, 3}\n>>> pickle.dumps(s, protocol=0)\nb'c__builtin__\\nset\\np0\\n((lp1\\nL1L\\naL2L\\naL3L\\natp2\\nRp3\\n.' 
>>> pickle.dumps(s, protocol=0, fix_imports=False)\nb'cbuiltins\\nset\\np0\\n((lp1\\nL1L\\naL2L\\naL3L\\natp2\\nRp3\\n.'\nAn unfortunate but unavoidable side-effect of this change is that protocol 2 pickles produced by Python 3.1 won\u2019t be readable with Python 3.0. The latest pickle protocol, protocol 3, should be used when migrating data between Python 3.x implementations, as it doesn\u2019t attempt to remain compatible with Python 2.x.\n(Contributed by Alexandre Vassalotti and Antoine Pitrou, bpo-6137.)\nA new module,\nimportlib\nwas added. It provides a complete, portable, pure Python reference implementation of the\nimport\nstatement and its counterpart, the\n__import__()\nfunction. It represents a substantial step forward in documenting and defining the actions that take place during imports.(Contributed by Brett Cannon.)\nOptimizations\u00b6\nMajor performance enhancements have been added:\nThe new I/O library (as defined in PEP 3116) was mostly written in Python and quickly proved to be a problematic bottleneck in Python 3.0. In Python 3.1, the I/O library has been entirely rewritten in C and is 2 to 20 times faster depending on the task at hand. The pure Python version is still available for experimentation purposes through the\n_pyio\nmodule.(Contributed by Amaury Forgeot d\u2019Arc and Antoine Pitrou.)\nAdded a heuristic so that tuples and dicts containing only untrackable objects are not tracked by the garbage collector. 
This can reduce the size of collections and therefore the garbage collection overhead on long-running programs, depending on their particular use of datatypes.\n(Contributed by Antoine Pitrou, bpo-4688.)\nEnabling a configure option named\n--with-computed-gotos\non compilers that support it (notably: gcc, SunPro, icc), the bytecode evaluation loop is compiled with a new dispatch mechanism which gives speedups of up to 20%, depending on the system, the compiler, and the benchmark.(Contributed by Antoine Pitrou along with a number of other participants, bpo-4753).\nThe decoding of UTF-8, UTF-16 and LATIN-1 is now two to four times faster.\n(Contributed by Antoine Pitrou and Amaury Forgeot d\u2019Arc, bpo-4868.)\nThe\njson\nmodule now has a C extension to substantially improve its performance. In addition, the API was modified so that json works only withstr\n, not withbytes\n. That change makes the module closely match the JSON specification which is defined in terms of Unicode.(Contributed by Bob Ippolito and converted to Py3.1 by Antoine Pitrou and Benjamin Peterson; bpo-4136.)\nUnpickling now interns the attribute names of pickled objects. This saves memory and allows pickles to be smaller.\n(Contributed by Jake McGuire and Antoine Pitrou; bpo-5084.)\nIDLE\u00b6\nIDLE\u2019s format menu now provides an option to strip trailing whitespace from a source file.\n(Contributed by Roger D. Serwy; bpo-5150.)\nBuild and C API Changes\u00b6\nChanges to Python\u2019s build process and to the C API include:\nIntegers are now stored internally either in base\n2**15\nor in base2**30\n, the base being determined at build time. Previously, they were always stored in base2**15\n. Using base2**30\ngives significant performance improvements on 64-bit machines, but benchmark results on 32-bit machines have been mixed. 
Therefore, the default is to use base2**30\non 64-bit machines and base2**15\non 32-bit machines; on Unix, there\u2019s a new configure option--enable-big-digits\nthat can be used to override this default.Apart from the performance improvements this change should be invisible to end users, with one exception: for testing and debugging purposes there\u2019s a new\nsys.int_info\nthat provides information about the internal format, giving the number of bits per digit and the size in bytes of the C type used to store each digit:\n>>> import sys\n>>> sys.int_info\nsys.int_info(bits_per_digit=30, sizeof_digit=4)\n(Contributed by Mark Dickinson; bpo-4258.)\nThe\nPyLong_AsUnsignedLongLong()\nfunction now handles a negative pylong by raising\nOverflowError\ninstead ofTypeError\n.(Contributed by Mark Dickinson and Lisandro Dalcin; bpo-5175.)\nDeprecated\nPyNumber_Int()\n. UsePyNumber_Long()\ninstead.(Contributed by Mark Dickinson; bpo-4910.)\nAdded a new\nPyOS_string_to_double()\nfunction to replace the deprecated functionsPyOS_ascii_strtod()\nandPyOS_ascii_atof()\n.(Contributed by Mark Dickinson; bpo-5914.)\nAdded\nPyCapsule\nas a replacement for thePyCObject\nAPI. The principal difference is that the new type has a well defined interface for passing typing safety information and a less complicated signature for calling a destructor. The old type had a problematic API and is now deprecated.(Contributed by Larry Hastings; bpo-5630.)\nPorting to Python 3.1\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code:\nThe new floating-point string representations can break existing doctests. For example:\ndef e():\n    '''Compute the base of natural logarithms.\n\n    >>> e()\n    2.7182818284590451\n    '''\n    return sum(1/math.factorial(x) for x in reversed(range(30)))\n\ndoctest.testmod()\n\n**********************************************************************\nFailed example:\n    e()\nExpected:\n    2.7182818284590451\nGot:\n    2.718281828459045\n**********************************************************************\nThe automatic name remapping in the pickle module for protocol 2 or lower can make Python 3.1 pickles unreadable in Python 3.0. One solution is to use protocol 3. Another solution is to set the fix_imports option to\nFalse\n. See the discussion above for more details.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 4690} +{"url": "https://docs.python.org/3/c-api/abstract.html", "title": "Abstract Objects Layer", "content": "Abstract Objects Layer\u00b6\nThe functions in this chapter interact with Python 
objects regardless of their type, or with wide classes of object types (e.g. all numerical types, or all sequence types). When used on object types for which they do not apply, they will raise a Python exception.\nIt is not possible to use these functions on objects that are not properly\ninitialized, such as a list object that has been created by PyList_New()\n,\nbut whose items have not been set to some non-NULL\nvalue yet.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 125} +{"url": "https://docs.python.org/3/c-api/sys.html", "title": "Operating System Utilities", "content": "Operating System Utilities\u00b6\n-\nPyObject *PyOS_FSPath(PyObject *path)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.6.\nReturn the file system representation for path. If the object is a\nstr\norbytes\nobject, then a new strong reference is returned. If the object implements theos.PathLike\ninterface, then__fspath__()\nis returned as long as it is astr\norbytes\nobject. OtherwiseTypeError\nis raised andNULL\nis returned.Added in version 3.6.\n-\nint Py_FdIsInteractive(FILE *fp, const char *filename)\u00b6\nReturn true (nonzero) if the standard I/O file fp with name filename is deemed interactive. This is the case for files for which\nisatty(fileno(fp))\nis true. If thePyConfig.interactive\nis non-zero, this function also returns true if the filename pointer isNULL\nor if the name is equal to one of the strings''\nor'???'\n.This function must not be called before Python is initialized.\n-\nvoid PyOS_BeforeFork()\u00b6\n- Part of the Stable ABI on platforms with fork() since version 3.7.\nFunction to prepare some internal state before a process fork. This should be called before calling\nfork()\nor any similar function that clones the current process. Only available on systems wherefork()\nis defined.Warning\nThe C\nfork()\ncall should only be made from the \u201cmain\u201d thread (of the \u201cmain\u201d interpreter). 
The same is true forPyOS_BeforeFork()\n.Added in version 3.7.\n-\nvoid PyOS_AfterFork_Parent()\u00b6\n- Part of the Stable ABI on platforms with fork() since version 3.7.\nFunction to update some internal state after a process fork. This should be called from the parent process after calling\nfork()\nor any similar function that clones the current process, regardless of whether process cloning was successful. Only available on systems wherefork()\nis defined.Warning\nThe C\nfork()\ncall should only be made from the \u201cmain\u201d thread (of the \u201cmain\u201d interpreter). The same is true forPyOS_AfterFork_Parent()\n.Added in version 3.7.\n-\nvoid PyOS_AfterFork_Child()\u00b6\n- Part of the Stable ABI on platforms with fork() since version 3.7.\nFunction to update internal interpreter state after a process fork. This must be called from the child process after calling\nfork()\n, or any similar function that clones the current process, if there is any chance the process will call back into the Python interpreter. Only available on systems wherefork()\nis defined.Warning\nThe C\nfork()\ncall should only be made from the \u201cmain\u201d thread (of the \u201cmain\u201d interpreter). The same is true forPyOS_AfterFork_Child()\n.Added in version 3.7.\nSee also\nos.register_at_fork()\nallows registering custom Python functions to be called byPyOS_BeforeFork()\n,PyOS_AfterFork_Parent()\nandPyOS_AfterFork_Child()\n.\n-\nvoid PyOS_AfterFork()\u00b6\n- Part of the Stable ABI on platforms with fork().\nFunction to update some internal state after a process fork; this should be called in the new process if the Python interpreter will continue to be used. 
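At the Python level, os.register_at_fork() registers callbacks that are invoked by these C hooks. A portable sketch (the fork itself is guarded, since fork() is unavailable on Windows):

```python
import os

events = []
os.register_at_fork(
    before=lambda: events.append("before"),                  # pre-fork, like PyOS_BeforeFork()
    after_in_parent=lambda: events.append("after_in_parent"),  # like PyOS_AfterFork_Parent()
    after_in_child=lambda: events.append("after_in_child"),    # like PyOS_AfterFork_Child()
)

if hasattr(os, "fork"):  # fork() is only defined on POSIX systems
    pid = os.fork()
    if pid == 0:
        # Child process: its own copy of `events` received "before"
        # and "after_in_child"; exit immediately without cleanup.
        os._exit(0)
    os.waitpid(pid, 0)
    # The parent's list now holds ["before", "after_in_parent"].
```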
If a new executable is loaded into the new process, this function does not need to be called.\nDeprecated since version 3.7: This function is superseded by\nPyOS_AfterFork_Child()\n.\n-\nint PyOS_CheckStack()\u00b6\n- Part of the Stable ABI on platforms with USE_STACKCHECK since version 3.7.\nReturn true when the interpreter runs out of stack space. This is a reliable check, but is only available when\nUSE_STACKCHECK\nis defined (currently on certain versions of Windows using the Microsoft Visual C++ compiler).USE_STACKCHECK\nwill be defined automatically; you should never change the definition in your own code.\n-\ntypedef void (*PyOS_sighandler_t)(int)\u00b6\n- Part of the Stable ABI.\n-\nPyOS_sighandler_t PyOS_getsig(int i)\u00b6\n- Part of the Stable ABI.\nReturn the current signal handler for signal i. This is a thin wrapper around either\nsigaction()\norsignal()\n. Do not call those functions directly!\n-\nPyOS_sighandler_t PyOS_setsig(int i, PyOS_sighandler_t h)\u00b6\n- Part of the Stable ABI.\nSet the signal handler for signal i to be h; return the old signal handler. This is a thin wrapper around either\nsigaction()\norsignal()\n. Do not call those functions directly!\n-\nint PyOS_InterruptOccurred(void)\u00b6\n- Part of the Stable ABI.\nCheck if a\nSIGINT\nsignal has been received.Returns\n1\nif aSIGINT\nhas occurred and clears the signal flag, or0\notherwise.In most cases, you should prefer\nPyErr_CheckSignals()\nover this function.PyErr_CheckSignals()\ninvokes the appropriate signal handlers for all pending signals, allowing Python code to handle the signal properly. This function only detectsSIGINT\nand does not invoke any Python signal handlers.This function is async-signal-safe and this function cannot fail. 
The caller must hold an attached thread state.\n-\nwchar_t *Py_DecodeLocale(const char *arg, size_t *size)\u00b6\n- Part of the Stable ABI since version 3.7.\nWarning\nThis function should not be called directly: use the\nPyConfig\nAPI with thePyConfig_SetBytesString()\nfunction which ensures that Python is preinitialized.This function must not be called before Python is preinitialized and so that the LC_CTYPE locale is properly configured: see the\nPy_PreInitialize()\nfunction.Decode a byte string from the filesystem encoding and error handler. If the error handler is surrogateescape error handler, undecodable bytes are decoded as characters in range U+DC80..U+DCFF; and if a byte sequence can be decoded as a surrogate character, the bytes are escaped using the surrogateescape error handler instead of decoding them.\nReturn a pointer to a newly allocated wide character string, use\nPyMem_RawFree()\nto free the memory. If size is notNULL\n, write the number of wide characters excluding the null character into*size\nReturn\nNULL\non decoding error or memory allocation error. 
If size is notNULL\n,*size\nis set to(size_t)-1\non memory error or set to(size_t)-2\non decoding error.The filesystem encoding and error handler are selected by\nPyConfig_Read()\n: seefilesystem_encoding\nandfilesystem_errors\nmembers ofPyConfig\n.Decoding errors should never happen, unless there is a bug in the C library.\nUse the\nPy_EncodeLocale()\nfunction to encode the character string back to a byte string.See also\nThe\nPyUnicode_DecodeFSDefaultAndSize()\nandPyUnicode_DecodeLocaleAndSize()\nfunctions.Added in version 3.5.\nChanged in version 3.7: The function now uses the UTF-8 encoding in the Python UTF-8 Mode.\nChanged in version 3.8: The function now uses the UTF-8 encoding on Windows if\nPyPreConfig.legacy_windows_fs_encoding\nis zero;\n-\nchar *Py_EncodeLocale(const wchar_t *text, size_t *error_pos)\u00b6\n- Part of the Stable ABI since version 3.7.\nEncode a wide character string to the filesystem encoding and error handler. If the error handler is surrogateescape error handler, surrogate characters in the range U+DC80..U+DCFF are converted to bytes 0x80..0xFF.\nReturn a pointer to a newly allocated byte string, use\nPyMem_Free()\nto free the memory. 
ReturnNULL\non encoding error or memory allocation error.If error_pos is not\nNULL\n,*error_pos\nis set to(size_t)-1\non success, or set to the index of the invalid character on encoding error.The filesystem encoding and error handler are selected by\nPyConfig_Read()\n: seefilesystem_encoding\nandfilesystem_errors\nmembers ofPyConfig\n.Use the\nPy_DecodeLocale()\nfunction to decode the bytes string back to a wide character string.Warning\nThis function must not be called before Python is preinitialized and so that the LC_CTYPE locale is properly configured: see the\nPy_PreInitialize()\nfunction.See also\nThe\nPyUnicode_EncodeFSDefault()\nandPyUnicode_EncodeLocale()\nfunctions.Added in version 3.5.\nChanged in version 3.7: The function now uses the UTF-8 encoding in the Python UTF-8 Mode.\nChanged in version 3.8: The function now uses the UTF-8 encoding on Windows if\nPyPreConfig.legacy_windows_fs_encoding\nis zero.\n-\nFILE *Py_fopen(PyObject *path, const char *mode)\u00b6\nSimilar to\nfopen()\n, but path is a Python object and an exception is set on error.path must be a\nstr\nobject, abytes\nobject, or a path-like object.On success, return the new file pointer. On error, set an exception and return\nNULL\n.The file must be closed by\nPy_fclose()\nrather than calling directlyfclose()\n.The file descriptor is created non-inheritable (PEP 446).\nThe caller must have an attached thread state.\nAdded in version 3.14.\n-\nint Py_fclose(FILE *file)\u00b6\nClose a file that was opened by\nPy_fopen()\n.On success, return\n0\n. On error, returnEOF\nanderrno\nis set to indicate the error. In either case, any further access (including another call toPy_fclose()\n) to the stream results in undefined behavior.Added in version 3.14.\nSystem Functions\u00b6\nThese are utility functions that make functionality from the sys\nmodule\naccessible to C code. 
They all work with the current interpreter thread\u2019s\nsys\nmodule\u2019s dict, which is contained in the internal thread state structure.\n-\nPyObject *PySys_GetObject(const char *name)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nReturn the object name from the\nsys\nmodule orNULL\nif it does not exist, without setting an exception.\n-\nint PySys_SetObject(const char *name, PyObject *v)\u00b6\n- Part of the Stable ABI.\nSet name in the\nsys\nmodule to v unless v isNULL\n, in which case name is deleted from the sys module. Returns0\non success,-1\non error.\n-\nvoid PySys_ResetWarnOptions()\u00b6\n- Part of the Stable ABI.\nReset\nsys.warnoptions\nto an empty list. This function may be called prior toPy_Initialize()\n.Deprecated since version 3.13, will be removed in version 3.15: Clear\nsys.warnoptions\nandwarnings.filters\ninstead.\n-\nvoid PySys_WriteStdout(const char *format, ...)\u00b6\n- Part of the Stable ABI.\nWrite the output string described by format to\nsys.stdout\n. No exceptions are raised, even if truncation occurs (see below).format should limit the total size of the formatted output string to 1000 bytes or less \u2013 after 1000 bytes, the output string is truncated. In particular, this means that no unrestricted \u201c%s\u201d formats should occur; these should be limited using \u201c%.<N>s\u201d where <N> is a decimal number calculated so that <N> plus the maximum size of other formatted text does not exceed 1000 bytes. 
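The printf-style precision that keeps "%s" bounded can be checked from Python, whose %-formatting follows the same printf semantics: a precision such as "%.500s" caps how much of the argument is copied.

```python
huge = "x" * 5000

# Unrestricted "%s" copies everything.
assert len("%s" % huge) == 5000

# "%.500s" truncates the argument to 500 characters -- the same trick
# PySys_WriteStdout callers use to keep output under the 1000-byte cap.
assert len("%.500s" % huge) == 500
```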
Also watch out for \u201c%f\u201d, which can print hundreds of digits for very large numbers.\nIf a problem occurs, or\nsys.stdout\nis unset, the formatted message is written to the real (C level) stdout.\n-\nvoid PySys_WriteStderr(const char *format, ...)\u00b6\n- Part of the Stable ABI.\nAs\nPySys_WriteStdout()\n, but write tosys.stderr\nor stderr instead.\n-\nvoid PySys_FormatStdout(const char *format, ...)\u00b6\n- Part of the Stable ABI.\nFunction similar to PySys_WriteStdout() but format the message using\nPyUnicode_FromFormatV()\nand don\u2019t truncate the message to an arbitrary length.Added in version 3.2.\n-\nvoid PySys_FormatStderr(const char *format, ...)\u00b6\n- Part of the Stable ABI.\nAs\nPySys_FormatStdout()\n, but write tosys.stderr\nor stderr instead.Added in version 3.2.\n-\nPyObject *PySys_GetXOptions()\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI since version 3.7.\nReturn the current dictionary of\n-X\noptions, similarly tosys._xoptions\n. On error,NULL\nis returned and an exception is set.Added in version 3.2.\n-\nint PySys_Audit(const char *event, const char *format, ...)\u00b6\n- Part of the Stable ABI since version 3.13.\nRaise an auditing event with any active hooks. Return zero for success and non-zero with an exception set on failure.\nThe event string argument must not be NULL.\nIf any hooks have been added, format and other arguments will be used to construct a tuple to pass. Apart from\nN\n, the same format characters as used inPy_BuildValue()\nare available. If the built value is not a tuple, it will be added into a single-element tuple.The\nN\nformat option must not be used. 
It consumes a reference, but since there is no way to know whether arguments to this function will be consumed, using it may cause reference leaks.Note that\n#\nformat characters should always be treated asPy_ssize_t\n, regardless of whetherPY_SSIZE_T_CLEAN\nwas defined.sys.audit()\nperforms the same function from Python code.See also\nPySys_AuditTuple()\n.Added in version 3.8.\nChanged in version 3.8.2: Require\nPy_ssize_t\nfor#\nformat characters. Previously, an unavoidable deprecation warning was raised.\n-\nint PySys_AuditTuple(const char *event, PyObject *args)\u00b6\n- Part of the Stable ABI since version 3.13.\nSimilar to\nPySys_Audit()\n, but pass arguments as a Python object. args must be atuple\n. To pass no arguments, args can be NULL.Added in version 3.13.\n-\nint PySys_AddAuditHook(Py_AuditHookFunction hook, void *userData)\u00b6\nAppend the callable hook to the list of active auditing hooks. Return zero on success and non-zero on failure. If the runtime has been initialized, also set an error on failure. Hooks added through this API are called for all interpreters created by the runtime.\nThe userData pointer is passed into the hook function. Since hook functions may be called from different runtimes, this pointer should not refer directly to Python state.\nThis function is safe to call before\nPy_Initialize()\n. When called after runtime initialization, existing audit hooks are notified and may silently abort the operation by raising an error subclassed fromException\n(other errors will not be silenced).The hook function is always called with an attached thread state by the Python interpreter that raised the event.\nSee PEP 578 for a detailed description of auditing. Functions in the runtime and standard library that raise events are listed in the audit events table. Details are in each function\u2019s documentation.\nIf the interpreter is initialized, this function raises an auditing event\nsys.addaudithook\nwith no arguments. 
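sys.audit() and sys.addaudithook() expose the same machinery to Python code. A minimal sketch; the event name "demo.event" is made up for illustration:

```python
import sys

seen = []

def hook(event, args):
    # Hooks receive every audit event raised in the process;
    # filter for the one we care about.
    if event == "demo.event":
        seen.append(args)

sys.addaudithook(hook)             # Python-level PySys_AddAuditHook()
sys.audit("demo.event", 1, "two")  # Python-level PySys_Audit()
assert seen == [(1, "two")]        # arguments arrive as a tuple
```

Note that audit hooks cannot be removed once added; they live for the rest of the process, which mirrors the C-level behavior.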
If any existing hooks raise an exception derived fromException\n, the new hook will not be added and the exception is cleared. As a result, callers cannot assume that their hook has been added unless they control all existing hooks.-\ntypedef int (*Py_AuditHookFunction)(const char *event, PyObject *args, void *userData)\u00b6\nThe type of the hook function. event is the C string event argument passed to\nPySys_Audit()\norPySys_AuditTuple()\n. args is guaranteed to be aPyTupleObject\n. userData is the argument passed to PySys_AddAuditHook().\nAdded in version 3.8.\nProcess Control\u00b6\n-\nvoid Py_FatalError(const char *message)\u00b6\n- Part of the Stable ABI.\nPrint a fatal error message and kill the process. No cleanup is performed. This function should only be invoked when a condition is detected that would make it dangerous to continue using the Python interpreter; e.g., when the object administration appears to be corrupted. On Unix, the standard C library function\nabort()\nis called which will attempt to produce acore\nfile.The\nPy_FatalError()\nfunction is replaced with a macro which logs automatically the name of the current function, unless thePy_LIMITED_API\nmacro is defined.Changed in version 3.9: Log the function name automatically.\n-\nvoid Py_Exit(int status)\u00b6\n- Part of the Stable ABI.\nExit the current process. This calls\nPy_FinalizeEx()\nand then calls the standard C library functionexit(status)\n. IfPy_FinalizeEx()\nindicates an error, the exit status is set to 120.Changed in version 3.6: Errors from finalization no longer ignored.\n-\nint Py_AtExit(void (*func)())\u00b6\n- Part of the Stable ABI.\nRegister a cleanup function to be called by\nPy_FinalizeEx()\n. The cleanup function will be called with no arguments and should return no value. At most 32 cleanup functions can be registered. 
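Py_AtExit() runs its handlers in LIFO order (the function registered last is called first), which matches Python's own atexit module. A small subprocess demo of that ordering (a sketch, not the C API itself):

```python
import subprocess
import sys

prog = (
    "import atexit\n"
    "atexit.register(lambda: print('registered first'))\n"
    "atexit.register(lambda: print('registered last'))\n"
)
out = subprocess.run(
    [sys.executable, "-c", prog], capture_output=True, text=True
).stdout

# Handlers run in reverse registration order (LIFO), like Py_AtExit.
assert out.splitlines() == ["registered last", "registered first"]
```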
When the registration is successful,Py_AtExit()\nreturns0\n; on failure, it returns-1\n. The cleanup function registered last is called first. Each cleanup function will be called at most once. Since Python\u2019s internal finalization will have completed before the cleanup function, no Python APIs should be called by func.See also\nPyUnstable_AtExit()\nfor passing avoid *data\nargument.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3944} +{"url": "https://docs.python.org/3/c-api/stable.html", "title": "C API Stability", "content": "C API Stability\u00b6\nUnless documented otherwise, Python\u2019s C API is covered by the Backwards Compatibility Policy, PEP 387. Most changes to it are source-compatible (typically by only adding new API). Changing existing API or removing API is only done after a deprecation period or to fix serious issues.\nCPython\u2019s Application Binary Interface (ABI) is forward- and backwards-compatible across a minor release (if these are compiled the same way; see Platform Considerations below). So, code compiled for Python 3.10.0 will work on 3.10.8 and vice versa, but will need to be compiled separately for 3.9.x and 3.11.x.\nThere are two tiers of C API with different stability expectations:\nUnstable API, may change in minor versions without a deprecation period. It is marked by the\nPyUnstable\nprefix in names.Limited API, is compatible across several minor releases. When\nPy_LIMITED_API\nis defined, only this subset is exposed fromPython.h\n.\nThese are discussed in more detail below.\nNames prefixed by an underscore, such as _Py_InternalState\n,\nare private API that can change without notice even in patch releases.\nIf you need to use this API, consider reaching out to\nCPython developers\nto discuss adding public API for your use case.\nUnstable C API\u00b6\nAny API named with the PyUnstable\nprefix exposes CPython implementation\ndetails, and may change in every minor release (e.g. 
from 3.9 to 3.10) without\nany deprecation warnings.\nHowever, it will not change in a bugfix release (e.g. from 3.10.0 to 3.10.1).\nIt is generally intended for specialized, low-level tools like debuggers.\nProjects that use this API are expected to follow CPython development and spend extra effort adjusting to changes.\nStable Application Binary Interface\u00b6\nFor simplicity, this document talks about extensions, but the Limited API and Stable ABI work the same way for all uses of the API \u2013 for example, embedding Python.\nLimited C API\u00b6\nPython 3.2 introduced the Limited API, a subset of Python\u2019s C API. Extensions that only use the Limited API can be compiled once and be loaded on multiple versions of Python. Contents of the Limited API are listed below.\n-\nPy_LIMITED_API\u00b6\nDefine this macro before including\nPython.h\nto opt in to only use the Limited API, and to select the Limited API version.Define\nPy_LIMITED_API\nto the value ofPY_VERSION_HEX\ncorresponding to the lowest Python version your extension supports. The extension will be ABI-compatible with all Python 3 releases from the specified one onward, and can use Limited API introduced up to that version.Rather than using the\nPY_VERSION_HEX\nmacro directly, hardcode a minimum minor version (e.g.0x030A0000\nfor Python 3.10) for stability when compiling with future Python versions.You can also define\nPy_LIMITED_API\nto3\n. This works the same as0x03020000\n(Python 3.2, the version that introduced Limited API).\nStable ABI\u00b6\nTo enable this, Python provides a Stable ABI: a set of symbols that will remain ABI-compatible across Python 3.x versions.\nNote\nThe Stable ABI prevents ABI issues, like linker errors due to missing symbols or data corruption due to changes in structure layouts or function signatures. However, other changes in Python can change the behavior of extensions. 
See Python\u2019s Backwards Compatibility Policy (PEP 387) for details.\nThe Stable ABI contains symbols exposed in the Limited API, but also other ones \u2013 for example, functions necessary to support older versions of the Limited API.\nOn Windows, extensions that use the Stable ABI should be linked against\npython3.dll\nrather than a version-specific library such as\npython39.dll\n.\nOn some platforms, Python will look for and load shared library files named\nwith the abi3\ntag (e.g. mymodule.abi3.so\n).\nIt does not check if such extensions conform to a Stable ABI.\nThe user (or their packaging tools) need to ensure that, for example,\nextensions built with the 3.10+ Limited API are not installed for lower\nversions of Python.\nAll functions in the Stable ABI are present as functions in Python\u2019s shared library, not solely as macros. This makes them usable from languages that don\u2019t use the C preprocessor.\nLimited API Scope and Performance\u00b6\nThe goal for the Limited API is to allow everything that is possible with the full C API, but possibly with a performance penalty.\nFor example, while PyList_GetItem()\nis available, its \u201cunsafe\u201d macro\nvariant PyList_GET_ITEM()\nis not.\nThe macro can be faster because it can rely on version-specific implementation\ndetails of the list object.\nWithout Py_LIMITED_API\ndefined, some C API functions are inlined or\nreplaced by macros.\nDefining Py_LIMITED_API\ndisables this inlining, allowing stability as\nPython\u2019s data structures are improved, but possibly reducing performance.\nBy leaving out the Py_LIMITED_API\ndefinition, it is possible to compile\na Limited API extension with a version-specific ABI. 
This can improve\nperformance for that Python version, but will limit compatibility.\nCompiling with Py_LIMITED_API\nwill then yield an extension that can be\ndistributed where a version-specific one is not available \u2013 for example,\nfor prereleases of an upcoming Python version.\nLimited API Caveats\u00b6\nNote that compiling with Py_LIMITED_API\nis not a complete guarantee that\ncode conforms to the Limited API or the Stable ABI. Py_LIMITED_API\nonly covers definitions, but an API also\nincludes other issues, such as expected semantics.\nOne issue that Py_LIMITED_API\ndoes not guard against is calling a function\nwith arguments that are invalid in a lower Python version.\nFor example, consider a function that starts accepting NULL\nfor an\nargument. In Python 3.9, NULL\nnow selects a default behavior, but in\nPython 3.8, the argument will be used directly, causing a NULL\ndereference\nand crash. A similar argument works for fields of structs.\nAnother issue is that some struct fields are currently not hidden when\nPy_LIMITED_API\nis defined, even though they\u2019re part of the Limited API.\nFor these reasons, we recommend testing an extension with all minor Python versions it supports, and preferably to build with the lowest such version.\nWe also recommend reviewing documentation of all used API to check\nif it is explicitly part of the Limited API. Even with Py_LIMITED_API\ndefined, a few private declarations are exposed for technical reasons (or\neven unintentionally, as bugs).\nAlso note that the Limited API is not necessarily stable: compiling with\nPy_LIMITED_API\nwith Python 3.8 means that the extension will\nrun with Python 3.12, but it will not necessarily compile with Python 3.12.\nIn particular, parts of the Limited API may be deprecated and removed,\nprovided that the Stable ABI stays stable.\nPlatform Considerations\u00b6\nABI stability depends not only on Python, but also on the compiler used, lower-level libraries and compiler options. 
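The extension-file suffixes the importer accepts on the current platform, including the abi3 tag mentioned earlier, can be inspected from Python. The exact values are platform-specific, so the comment below only describes typical CPython output:

```python
import importlib.machinery

suffixes = importlib.machinery.EXTENSION_SUFFIXES
# Typically includes a fully tagged suffix such as
# ".cpython-312-x86_64-linux-gnu.so", an ".abi3.so"-style tag for
# Stable ABI builds on many platforms, and a bare ".so" (".pyd" on
# Windows).
print(suffixes)
assert any(s.endswith((".so", ".pyd")) for s in suffixes)
```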
For the purposes of the Stable ABI, these details define a \u201cplatform\u201d. They usually depend on the OS type and processor architecture\nIt is the responsibility of each particular distributor of Python\nto ensure that all Python versions on a particular platform are built\nin a way that does not break the Stable ABI.\nThis is the case with Windows and macOS releases from python.org\nand many\nthird-party distributors.\nContents of Limited API\u00b6\nCurrently, the Limited API includes the following items:\nPyErr_Display()\nPyModuleDef_Base\nPyModuleDef_Type\nPyUnicode_AsDecodedObject()\nPyUnicode_AsDecodedUnicode()\nPyUnicode_AsEncodedObject()\nPyUnicode_AsEncodedUnicode()\nPyVarObject.ob_base\nPyWeakReference\nPy_FileSystemDefaultEncodeErrors\nPy_FileSystemDefaultEncoding\nPy_HasFileSystemDefaultEncoding\nPy_UTF8Mode\nPy_intptr_t\nPy_uintptr_t\nssizessizeargfunc\nssizessizeobjargproc\nsymtable", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1912} +{"url": "https://docs.python.org/3/whatsnew/3.12.html", "title": "What\u2019s New In Python 3.12", "content": "What\u2019s New In Python 3.12\u00b6\n- Editor:\nAdam Turner\nThis article explains the new features in Python 3.12, compared to 3.11. Python 3.12 was released on October 2, 2023. 
For full details, see the changelog.\nSee also\nPEP 693 \u2013 Python 3.12 Release Schedule\nSummary \u2013 Release highlights\u00b6\nPython 3.12 is a stable release of the Python programming language,\nwith a mix of changes to the language and the standard library.\nThe library changes focus on cleaning up deprecated APIs, usability, and correctness.\nOf note, the distutils\npackage has been removed from the standard library.\nFilesystem support in os\nand pathlib\nhas seen a number of improvements,\nand several modules have better performance.\nThe language changes focus on usability,\nas f-strings have had many limitations removed\nand \u2018Did you mean \u2026\u2019 suggestions continue to improve.\nThe new type parameter syntax\nand type\nstatement improve ergonomics for using generic types and type aliases with static type checkers.\nThis article doesn\u2019t attempt to provide a complete specification of all new features, but instead gives a convenient overview. For full details, you should refer to the documentation, such as the Library Reference and Language Reference. 
If you want to understand the complete implementation and design rationale for a change, refer to the PEP for a particular new feature; but note that PEPs usually are not kept up-to-date once a feature has been fully implemented.\nNew syntax features:\nNew grammar features:\nInterpreter improvements:\nPEP 669, low impact monitoring\nImproved \u2018Did you mean \u2026\u2019 suggestions for\nNameError\n,ImportError\n, andSyntaxError\nexceptions\nPython data model improvements:\nPEP 688, using the buffer protocol from Python\nSignificant improvements in the standard library:\nThe\npathlib.Path\nclass now supports subclassingThe\nos\nmodule received several improvements for Windows supportA command-line interface has been added to the\nsqlite3\nmoduleisinstance()\nchecks againstruntime-checkable protocols\nenjoy a speed up of between two and 20 timesThe\nasyncio\npackage has had a number of performance improvements, with some benchmarks showing a 75% speed up.A command-line interface has been added to the\nuuid\nmoduleDue to the changes in PEP 701, producing tokens via the\ntokenize\nmodule is up to 64% faster.\nSecurity improvements:\nReplace the builtin\nhashlib\nimplementations of SHA1, SHA3, SHA2-384, SHA2-512, and MD5 with formally verified code from the HACL* project. These builtin implementations remain as fallbacks that are only used when OpenSSL does not provide them.\nC API improvements:\nCPython implementation improvements:\nPEP 709, comprehension inlining\nCPython support for the Linux\nperf\nprofilerImplement stack overflow protection on supported platforms\nNew typing features:\nPEP 698,\ntyping.override()\ndecorator\nImportant deprecations, removals or restrictions:\nPEP 623: Remove\nwstr\nfrom Unicode objects in Python\u2019s C API, reducing the size of everystr\nobject by at least 8 bytes.PEP 632: Remove the\ndistutils\npackage. See the migration guide for advice replacing the APIs it provided. 
The third-party Setuptools package continues to provide distutils, if you still require it in Python 3.12 and beyond.\ngh-95299: Do not pre-install setuptools in virtual environments created with venv\n. This means that distutils, setuptools, pkg_resources, and easy_install will no longer be available by default; to access these run pip install setuptools in the activated virtual environment.\nThe asynchat, asyncore, and imp modules have been removed, along with several unittest.TestCase method aliases.\nNew Features\u00b6\nPEP 695: Type Parameter Syntax\u00b6\nGeneric classes and functions under PEP 484 were declared using a verbose syntax that left the scope of type parameters unclear and required explicit declarations of variance.\nPEP 695 introduces a new, more compact and explicit way to create generic classes and functions:\ndef max[T](args: Iterable[T]) -> T:\n    ...\n\nclass list[T]:\n    def __getitem__(self, index: int, /) -> T:\n        ...\n    def append(self, element: T) -> None:\n        ...\nIn addition, the PEP introduces a new way to declare type aliases\nusing the type\nstatement, which creates an instance of\nTypeAliasType\n:\ntype Point = tuple[float, float]\nType aliases can also be generic:\ntype Point[T] = tuple[T, T]\nThe new syntax allows declaring TypeVarTuple\nand ParamSpec\nparameters, as well as TypeVar\nparameters with bounds or constraints:\ntype IntFunc[**P] = Callable[P, int]  # ParamSpec\ntype LabeledTuple[*Ts] = tuple[str, *Ts]  # TypeVarTuple\ntype HashableSequence[T: Hashable] = Sequence[T]  # TypeVar with bound\ntype IntOrStrSequence[T: (int, str)] = Sequence[T]  # TypeVar with constraints\nThe value of type aliases and the bound and constraints of type variables created through this syntax are evaluated only on demand (see lazy evaluation). 
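Because the type statement is only accepted by the 3.12 parser, a version-guarded sketch can demonstrate the lazily evaluated alias at runtime (the alias name here is illustrative):

```python
import sys

ns = {}
if sys.version_info >= (3, 12):
    # The "type" statement is new 3.12 syntax, so compile it at
    # runtime to keep this file importable on older interpreters.
    exec("type Point[T] = tuple[T, T]", ns)
    Point = ns["Point"]
    assert Point.__name__ == "Point"  # a TypeAliasType instance
    value = Point.__value__           # evaluated lazily, on first access
```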
This means type aliases are able to refer to other types defined later in the file.\nType parameters declared through a type parameter list are visible within the scope of the declaration and any nested scopes, but not in the outer scope. For example, they can be used in the type annotations for the methods of a generic class or in the class body. However, they cannot be used in the module scope after the class is defined. See Type parameter lists for a detailed description of the runtime semantics of type parameters.\nIn order to support these scoping semantics, a new kind of scope is introduced, the annotation scope. Annotation scopes behave for the most part like function scopes, but interact differently with enclosing class scopes. In Python 3.13, annotations will also be evaluated in annotation scopes.\nSee PEP 695 for more details.\n(PEP written by Eric Traut. Implementation by Jelle Zijlstra, Eric Traut, and others in gh-103764.)\nPEP 701: Syntactic formalization of f-strings\u00b6\nPEP 701 lifts some restrictions on the usage of f-strings. Expression components inside f-strings can now be any valid Python expression, including strings reusing the same quote as the containing f-string, multi-line expressions, comments, backslashes, and unicode escape sequences. Let\u2019s cover these in detail:\nQuote reuse: in Python 3.11, reusing the same quotes as the enclosing f-string raises a\nSyntaxError\n, forcing the user to either use other available quotes (like using double quotes or triple quotes if the f-string uses single quotes). 
In Python 3.12, you can now do things like this:>>> songs = ['Take me back to Eden', 'Alkaline', 'Ascensionism'] >>> f\"This is the playlist: {\", \".join(songs)}\" 'This is the playlist: Take me back to Eden, Alkaline, Ascensionism'\nNote that before this change there was no explicit limit in how f-strings can be nested, but the fact that string quotes cannot be reused inside the expression component of f-strings made it impossible to nest f-strings arbitrarily. In fact, this is the most nested f-string that could be written:\n>>> f\"\"\"{f'''{f'{f\"{1+1}\"}'}'''}\"\"\" '2'\nAs now f-strings can contain any valid Python expression inside expression components, it is now possible to nest f-strings arbitrarily:\n>>> f\"{f\"{f\"{f\"{f\"{f\"{1+1}\"}\"}\"}\"}\"}\" '2'\nMulti-line expressions and comments: In Python 3.11, f-string expressions must be defined in a single line, even if the expression within the f-string could normally span multiple lines (like literal lists being defined over multiple lines), making them harder to read. In Python 3.12 you can now define f-strings spanning multiple lines, and add inline comments:\n>>> f\"This is the playlist: {\", \".join([ ... 'Take me back to Eden', # My, my, those eyes like fire ... 'Alkaline', # Not acid nor alkaline ... 'Ascensionism' # Take to the broken skies at last ... ])}\" 'This is the playlist: Take me back to Eden, Alkaline, Ascensionism'\nBackslashes and unicode characters: before Python 3.12 f-string expressions couldn\u2019t contain any\n\\\ncharacter. This also affected unicode escape sequences (such as\\N{snowman}\n) as these contain the\\N\npart that previously could not be part of expression components of f-strings. 
Now, you can define expressions like this:>>> print(f\"This is the playlist: {\"\\n\".join(songs)}\") This is the playlist: Take me back to Eden Alkaline Ascensionism >>> print(f\"This is the playlist: {\"\\N{BLACK HEART SUIT}\".join(songs)}\") This is the playlist: Take me back to Eden\u2665Alkaline\u2665Ascensionism\nSee PEP 701 for more details.\nAs a positive side-effect of how this feature has been implemented (by parsing f-strings\nwith the PEG parser), now error messages for f-strings are more precise\nand include the exact location of the error. For example, in Python 3.11, the following\nf-string raises a SyntaxError\n:\n>>> my_string = f\"{x z y}\" + f\"{1 + 1}\"\nFile \"\", line 1\n(x z y)\n^^^\nSyntaxError: f-string: invalid syntax. Perhaps you forgot a comma?\nbut the error message doesn\u2019t include the exact location of the error within the line and also has the expression artificially surrounded by parentheses. In Python 3.12, as f-strings are parsed with the PEG parser, error messages can be more precise and show the entire line:\n>>> my_string = f\"{x z y}\" + f\"{1 + 1}\"\nFile \"\", line 1\nmy_string = f\"{x z y}\" + f\"{1 + 1}\"\n^^^\nSyntaxError: invalid syntax. Perhaps you forgot a comma?\n(Contributed by Pablo Galindo, Batuhan Taskaya, Lysandros Nikolaou, Cristi\u00e1n Maureira-Fredes and Marta G\u00f3mez in gh-102856. PEP written by Pablo Galindo, Batuhan Taskaya, Lysandros Nikolaou and Marta G\u00f3mez).\nPEP 684: A Per-Interpreter GIL\u00b6\nPEP 684 introduces a per-interpreter GIL, so that sub-interpreters may now be created with a unique GIL per interpreter. This allows Python programs to take full advantage of multiple CPU cores. 
This is currently only available through the C-API, though a Python API is anticipated for 3.13.\nUse the new Py_NewInterpreterFromConfig()\nfunction to\ncreate an interpreter with its own GIL:\nPyInterpreterConfig config = {\n.check_multi_interp_extensions = 1,\n.gil = PyInterpreterConfig_OWN_GIL,\n};\nPyThreadState *tstate = NULL;\nPyStatus status = Py_NewInterpreterFromConfig(&tstate, &config);\nif (PyStatus_Exception(status)) {\nreturn -1;\n}\n/* The new interpreter is now active in the current thread. */\nFor further examples how to use the C-API for sub-interpreters with a\nper-interpreter GIL, see Modules/_xxsubinterpretersmodule.c\n.\n(Contributed by Eric Snow in gh-104210, etc.)\nPEP 669: Low impact monitoring for CPython\u00b6\nPEP 669 defines a new API\nfor profilers,\ndebuggers, and other tools to monitor events in CPython.\nIt covers a wide range of events, including calls,\nreturns, lines, exceptions, jumps, and more.\nThis means that you only pay for what you use, providing support\nfor near-zero overhead debuggers and coverage tools.\nSee sys.monitoring\nfor details.\n(Contributed by Mark Shannon in gh-103082.)\nPEP 688: Making the buffer protocol accessible in Python\u00b6\nPEP 688 introduces a way to use the buffer protocol\nfrom Python code. Classes that implement the __buffer__()\nmethod\nare now usable as buffer types.\nThe new collections.abc.Buffer\nABC provides a standard\nway to represent buffer objects, for example in type annotations.\nThe new inspect.BufferFlags\nenum represents the flags that\ncan be used to customize buffer creation.\n(Contributed by Jelle Zijlstra in gh-102500.)\nPEP 709: Comprehension inlining\u00b6\nDictionary, list, and set comprehensions are now inlined, rather than creating a new single-use function object for each execution of the comprehension. This speeds up execution of a comprehension by up to two times. 
See PEP 709 for further details.\nComprehension iteration variables remain isolated and don\u2019t overwrite a variable of the same name in the outer scope, nor are they visible after the comprehension. Inlining does result in a few visible behavior changes:\nThere is no longer a separate frame for the comprehension in tracebacks, and tracing/profiling no longer shows the comprehension as a function call.\nThe\nsymtable\nmodule will no longer produce child symbol tables for each comprehension; instead, the comprehension\u2019s locals will be included in the parent function\u2019s symbol table.Calling\nlocals()\ninside a comprehension now includes variables from outside the comprehension, and no longer includes the synthetic.0\nvariable for the comprehension \u201cargument\u201d.A comprehension iterating directly over\nlocals()\n(e.g.[k for k in locals()]\n) may see \u201cRuntimeError: dictionary changed size during iteration\u201d when run under tracing (e.g. code coverage measurement). This is the same behavior already seen in e.g.for k in locals():\n. To avoid the error, first create a list of keys to iterate over:keys = list(locals()); [k for k in keys]\n.\n(Contributed by Carl Meyer and Vladimir Matveev in PEP 709.)\nImproved Error Messages\u00b6\nModules from the standard library are now potentially suggested as part of the error messages displayed by the interpreter when a\nNameError\nis raised to the top level. (Contributed by Pablo Galindo in gh-98254.)>>> sys.version_info Traceback (most recent call last): File \"\", line 1, in NameError: name 'sys' is not defined. Did you forget to import 'sys'?\nImprove the error suggestion for\nNameError\nexceptions for instances. Now if aNameError\nis raised in a method and the instance has an attribute that\u2019s exactly equal to the name in the exception, the suggestion will includeself.\ninstead of the closest match in the method scope. (Contributed by Pablo Galindo in gh-99139.)>>> class A: ... 
def __init__(self): ... self.blech = 1 ... ... def foo(self): ... somethin = blech ... >>> A().foo() Traceback (most recent call last): File \"\", line 1 somethin = blech ^^^^^ NameError: name 'blech' is not defined. Did you mean: 'self.blech'?\nImprove the\nSyntaxError\nerror message when the user typesimport x from y\ninstead offrom y import x\n. (Contributed by Pablo Galindo in gh-98931.)>>> import a.y.z from b.y.z Traceback (most recent call last): File \"\", line 1 import a.y.z from b.y.z ^^^^^^^^^^^^^^^^^^^^^^^ SyntaxError: Did you mean to use 'from ... import ...' instead?\nImportError\nexceptions raised from failedfrom import \nstatements now include suggestions for the value of\nbased on the available names in\n. (Contributed by Pablo Galindo in gh-91058.)>>> from collections import chainmap Traceback (most recent call last): File \"\", line 1, in ImportError: cannot import name 'chainmap' from 'collections'. Did you mean: 'ChainMap'?\nOther Language Changes\u00b6\nThe parser now raises\nSyntaxError\nwhen parsing source code containing null bytes. (Contributed by Pablo Galindo in gh-96670.)A backslash-character pair that is not a valid escape sequence now generates a\nSyntaxWarning\n, instead ofDeprecationWarning\n. For example,re.compile(\"\\d+\\.\\d+\")\nnow emits aSyntaxWarning\n(\"\\d\"\nis an invalid escape sequence, use raw strings for regular expression:re.compile(r\"\\d+\\.\\d+\")\n). In a future Python version,SyntaxError\nwill eventually be raised, instead ofSyntaxWarning\n. (Contributed by Victor Stinner in gh-98401.)Octal escapes with value larger than\n0o377\n(ex:\"\\477\"\n), deprecated in Python 3.11, now produce aSyntaxWarning\n, instead ofDeprecationWarning\n. In a future Python version they will be eventually aSyntaxError\n. (Contributed by Victor Stinner in gh-98401.)Variables used in the target part of comprehensions that are not stored to can now be used in assignment expressions (\n:=\n). 
For example, in[(b := 1) for a, b.prop in some_iter]\n, the assignment tob\nis now allowed. Note that assigning to variables stored to in the target part of comprehensions (likea\n) is still disallowed, as per PEP 572. (Contributed by Nikita Sobolev in gh-100581.)Exceptions raised in a class or type\u2019s\n__set_name__\nmethod are no longer wrapped by aRuntimeError\n. Context information is added to the exception as a PEP 678 note. (Contributed by Irit Katriel in gh-77757.)When a\ntry-except*\nconstruct handles the entireExceptionGroup\nand raises one other exception, that exception is no longer wrapped in anExceptionGroup\n. Also changed in version 3.11.4. (Contributed by Irit Katriel in gh-103590.)The Garbage Collector now runs only on the eval breaker mechanism of the Python bytecode evaluation loop instead of object allocations. The GC can also run when\nPyErr_CheckSignals()\nis called so C extensions that need to run for a long time without executing any Python code also have a chance to execute the GC periodically. (Contributed by Pablo Galindo in gh-97922.)All builtin and extension callables expecting boolean parameters now accept arguments of any type instead of just\nbool\nandint\n. (Contributed by Serhiy Storchaka in gh-60203.)memoryview\nnow supports the half-float type (the \u201ce\u201d format code). (Contributed by Donghee Na and Antoine Pitrou in gh-90751.)slice\nobjects are now hashable, allowing them to be used as dict keys and set items. (Contributed by Will Bradshaw, Furkan Onder, and Raymond Hettinger in gh-101264.)sum()\nnow uses Neumaier summation to improve accuracy and commutativity when summing floats or mixed ints and floats. (Contributed by Raymond Hettinger in gh-100425.)ast.parse()\nnow raisesSyntaxError\ninstead ofValueError\nwhen parsing source code containing null bytes. 
(Contributed by Pablo Galindo in gh-96670.)\nThe extraction methods in\ntarfile\n, and\nshutil.unpack_archive()\n, have a new filter argument that allows limiting tar features that may be surprising or dangerous, such as creating files outside the destination directory. See tarfile extraction filters for details. In Python 3.14, the default will switch to\n'data'\n. (Contributed by Petr Viktorin in PEP 706.)\ntypes.MappingProxyType\ninstances are now hashable if the underlying mapping is hashable. (Contributed by Serhiy Storchaka in gh-87995.)\nAdd support for the perf profiler through the new environment variable\nPYTHONPERFSUPPORT\nand command-line option\n-X perf\n, as well as the new\nsys.activate_stack_trampoline()\n,\nsys.deactivate_stack_trampoline()\n, and\nsys.is_stack_trampoline_active()\nfunctions. (Design by Pablo Galindo. Contributed by Pablo Galindo and Christian Heimes with contributions from Gregory P. Smith [Google] and Mark Shannon in gh-96123.)\nNew Modules\u00b6\nNone.\nImproved Modules\u00b6\narray\u00b6\nThe\narray.array\nclass now supports subscripting, making it a generic type. (Contributed by Jelle Zijlstra in gh-98658.)\nasyncio\u00b6\nThe performance of writing to sockets in\nasyncio\nhas been significantly improved.\nasyncio\nnow avoids unnecessary copying when writing to sockets and uses\nsendmsg()\nif the platform supports it. (Contributed by Kumar Aditya in gh-91166.)\nAdd\nasyncio.eager_task_factory()\nand\nasyncio.create_eager_task_factory()\nfunctions to allow opting an event loop in to eager task execution, making some use-cases 2x to 5x faster. (Contributed by Jacob Bower & Itamar Oren in gh-102853, gh-104140, and gh-104138.)\nOn Linux,\nasyncio\nuses\nasyncio.PidfdChildWatcher\nby default if\nos.pidfd_open()\nis available and functional instead of\nasyncio.ThreadedChildWatcher\n.
(Contributed by Kumar Aditya in gh-98024.)\nThe event loop now uses the best available child watcher for each platform (\nasyncio.PidfdChildWatcher\nif supported and\nasyncio.ThreadedChildWatcher\notherwise), so manually configuring a child watcher is not recommended. (Contributed by Kumar Aditya in gh-94597.)\nAdd loop_factory parameter to\nasyncio.run()\nto allow specifying a custom event loop factory. (Contributed by Kumar Aditya in gh-99388.)\nAdd C implementation of\nasyncio.current_task()\nfor 4x-6x speedup. (Contributed by Itamar Oren and Pranav Thulasiram Bhat in gh-100344.)\nasyncio.iscoroutine()\nnow returns\nFalse\nfor generators as\nasyncio\ndoes not support legacy generator-based coroutines. (Contributed by Kumar Aditya in gh-102748.)\nasyncio.wait()\nand\nasyncio.as_completed()\nnow accept generators yielding tasks. (Contributed by Kumar Aditya in gh-78530.)\ncalendar\u00b6\nAdd enums\ncalendar.Month\nand\ncalendar.Day\ndefining months of the year and days of the week. (Contributed by Prince Roshan in gh-103636.)\ncsv\u00b6\nAdd\ncsv.QUOTE_NOTNULL\nand\ncsv.QUOTE_STRINGS\nflags to provide finer grained control of\nNone\nand empty strings by\nreader\nand\nwriter\nobjects.\ndis\u00b6\nPseudo instruction opcodes (which are used by the compiler but do not appear in executable bytecode) are now exposed in the\ndis\nmodule.\nHAVE_ARGUMENT\nis still relevant to real opcodes, but it is not useful for pseudo instructions. Use the new\ndis.hasarg\ncollection instead. (Contributed by Irit Katriel in gh-94216.)\nAdd the\ndis.hasexc\ncollection to signify instructions that set an exception handler. (Contributed by Irit Katriel in gh-94216.)\nfractions\u00b6\nObjects of type\nfractions.Fraction\nnow support float-style formatting. (Contributed by Mark Dickinson in gh-100161.)\nimportlib.resources\u00b6\nimportlib.resources.as_file()\nnow supports resource directories. (Contributed by Jason R. Coombs in gh-97930.)\nRename first parameter of\nimportlib.resources.files()\nto anchor.
(Contributed by Jason R. Coombs in gh-100598.)\ninspect\u00b6\nAdd\ninspect.markcoroutinefunction()\nto mark sync functions that return a coroutine for use withinspect.iscoroutinefunction()\n. (Contributed by Carlton Gibson in gh-99247.)Add\ninspect.getasyncgenstate()\nandinspect.getasyncgenlocals()\nfor determining the current state of asynchronous generators. (Contributed by Thomas Krennwallner in gh-79940.)The performance of\ninspect.getattr_static()\nhas been considerably improved. Most calls to the function should be at least 2x faster than they were in Python 3.11. (Contributed by Alex Waygood in gh-103193.)\nitertools\u00b6\nAdd\nitertools.batched()\nfor collecting into even-sized tuples where the last batch may be shorter than the rest. (Contributed by Raymond Hettinger in gh-98363.)\nmath\u00b6\nAdd\nmath.sumprod()\nfor computing a sum of products. (Contributed by Raymond Hettinger in gh-100485.)Extend\nmath.nextafter()\nto include a steps argument for moving up or down multiple steps at a time. (Contributed by Matthias Goergens, Mark Dickinson, and Raymond Hettinger in gh-94906.)\nos\u00b6\nAdd\nos.PIDFD_NONBLOCK\nto open a file descriptor for a process withos.pidfd_open()\nin non-blocking mode. (Contributed by Kumar Aditya in gh-93312.)os.DirEntry\nnow includes anos.DirEntry.is_junction()\nmethod to check if the entry is a junction. (Contributed by Charles Machalow in gh-99547.)Add\nos.listdrives()\n,os.listvolumes()\nandos.listmounts()\nfunctions on Windows for enumerating drives, volumes and mount points. (Contributed by Steve Dower in gh-102519.)os.stat()\nandos.lstat()\nare now more accurate on Windows. 
Thest_birthtime\nfield will now be filled with the creation time of the file, andst_ctime\nis deprecated but still contains the creation time (but in the future will return the last metadata change, for consistency with other platforms).st_dev\nmay be up to 64 bits andst_ino\nup to 128 bits depending on your file system, andst_rdev\nis always set to zero rather than incorrect values. Both functions may be significantly faster on newer releases of Windows. (Contributed by Steve Dower in gh-99726.)\nos.path\u00b6\nAdd\nos.path.isjunction()\nto check if a given path is a junction. (Contributed by Charles Machalow in gh-99547.)Add\nos.path.splitroot()\nto split a path into a triad(drive, root, tail)\n. (Contributed by Barney Gale in gh-101000.)\npathlib\u00b6\nAdd support for subclassing\npathlib.PurePath\nandpathlib.Path\n, plus their Posix- and Windows-specific variants. Subclasses may override thepathlib.PurePath.with_segments()\nmethod to pass information between path instances.Add\npathlib.Path.walk()\nfor walking the directory trees and generating all file or directory names within them, similar toos.walk()\n. (Contributed by Stanislav Zmiev in gh-90385.)Add walk_up optional parameter to\npathlib.PurePath.relative_to()\nto allow the insertion of..\nentries in the result; this behavior is more consistent withos.path.relpath()\n. (Contributed by Domenico Ragusa in gh-84538.)Add\npathlib.Path.is_junction()\nas a proxy toos.path.isjunction()\n. (Contributed by Charles Machalow in gh-99547.)Add case_sensitive optional parameter to\npathlib.Path.glob()\n,pathlib.Path.rglob()\nandpathlib.PurePath.match()\nfor matching the path\u2019s case sensitivity, allowing for more precise control over the matching process.\nplatform\u00b6\nAdd support for detecting Windows 11 and Windows Server releases past 2012. Previously, lookups on Windows Server platforms newer than Windows Server 2012 and on Windows 11 would return\nWindows-10\n. 
(Contributed by Steve Dower in gh-89545.)\npdb\u00b6\nAdd convenience variables to hold values temporarily for debug session and provide quick access to values like the current frame or the return value. (Contributed by Tian Gao in gh-103693.)\nrandom\u00b6\nAdd\nrandom.binomialvariate()\n. (Contributed by Raymond Hettinger in gh-81620.)Add a default of\nlambd=1.0\ntorandom.expovariate()\n. (Contributed by Raymond Hettinger in gh-100234.)\nshutil\u00b6\nshutil.make_archive()\nnow passes the root_dir argument to custom archivers which support it. In this case it no longer temporarily changes the current working directory of the process to root_dir to perform archiving. (Contributed by Serhiy Storchaka in gh-74696.)shutil.rmtree()\nnow accepts a new argument onexc which is an error handler like onerror but which expects an exception instance rather than a (typ, val, tb) triplet. onerror is deprecated. (Contributed by Irit Katriel in gh-102828.)shutil.which()\nnow consults the PATHEXT environment variable to find matches within PATH on Windows even when the given cmd includes a directory component. (Contributed by Charles Machalow in gh-103179.)shutil.which()\nwill callNeedCurrentDirectoryForExePathW\nwhen querying for executables on Windows to determine if the current working directory should be prepended to the search path. (Contributed by Charles Machalow in gh-103179.)shutil.which()\nwill return a path matching the cmd with a component fromPATHEXT\nprior to a direct match elsewhere in the search path on Windows. (Contributed by Charles Machalow in gh-103179.)\nsqlite3\u00b6\nAdd a command-line interface. (Contributed by Erlend E. Aasland in gh-77617.)\nAdd the\nsqlite3.Connection.autocommit\nattribute tosqlite3.Connection\nand the autocommit parameter tosqlite3.connect()\nto control PEP 249-compliant transaction handling. (Contributed by Erlend E. 
Aasland in gh-83638.)Add entrypoint keyword-only parameter to\nsqlite3.Connection.load_extension()\n, for overriding the SQLite extension entry point. (Contributed by Erlend E. Aasland in gh-103015.)Add\nsqlite3.Connection.getconfig()\nandsqlite3.Connection.setconfig()\ntosqlite3.Connection\nto make configuration changes to a database connection. (Contributed by Erlend E. Aasland in gh-103489.)\nstatistics\u00b6\nExtend\nstatistics.correlation()\nto include as aranked\nmethod for computing the Spearman correlation of ranked data. (Contributed by Raymond Hettinger in gh-95861.)\nsys\u00b6\nAdd the\nsys.monitoring\nnamespace to expose the new PEP 669 monitoring API. (Contributed by Mark Shannon in gh-103082.)Add\nsys.activate_stack_trampoline()\nandsys.deactivate_stack_trampoline()\nfor activating and deactivating stack profiler trampolines, andsys.is_stack_trampoline_active()\nfor querying if stack profiler trampolines are active. (Contributed by Pablo Galindo and Christian Heimes with contributions from Gregory P. Smith [Google] and Mark Shannon in gh-96123.)Add\nsys.last_exc\nwhich holds the last unhandled exception that was raised (for post-mortem debugging use cases). Deprecate the three fields that have the same information in its legacy form:sys.last_type\n,sys.last_value\nandsys.last_traceback\n. (Contributed by Irit Katriel in gh-102778.)sys._current_exceptions()\nnow returns a mapping from thread-id to an exception instance, rather than to a(typ, exc, tb)\ntuple. (Contributed by Irit Katriel in gh-103176.)sys.setrecursionlimit()\nandsys.getrecursionlimit()\n. The recursion limit now applies only to Python code. 
Builtin functions do not use the recursion limit, but are protected by a different mechanism that prevents recursion from causing a virtual machine crash.\ntempfile\u00b6\nThe\ntempfile.NamedTemporaryFile\nfunction has a new optional parameter delete_on_close (Contributed by Evgeny Zorin in gh-58451.)tempfile.mkdtemp()\nnow always returns an absolute path, even if the argument provided to the dir parameter is a relative path.\nthreading\u00b6\nAdd\nthreading.settrace_all_threads()\nandthreading.setprofile_all_threads()\nthat allow to set tracing and profiling functions in all running threads in addition to the calling one. (Contributed by Pablo Galindo in gh-93503.)\ntkinter\u00b6\ntkinter.Canvas.coords()\nnow flattens its arguments. It now accepts not only coordinates as separate arguments (x1, y1, x2, y2, ...\n) and a sequence of coordinates ([x1, y1, x2, y2, ...]\n), but also coordinates grouped in pairs ((x1, y1), (x2, y2), ...\nand[(x1, y1), (x2, y2), ...]\n), likecreate_*()\nmethods. (Contributed by Serhiy Storchaka in gh-94473.)\ntokenize\u00b6\nThe\ntokenize\nmodule includes the changes introduced in PEP 701. (Contributed by Marta G\u00f3mez Mac\u00edas and Pablo Galindo in gh-102856.) See Porting to Python 3.12 for more information on the changes to thetokenize\nmodule.\ntypes\u00b6\nAdd\ntypes.get_original_bases()\nto allow for further introspection of User-defined generic types when subclassed. (Contributed by James Hilton-Balfe and Alex Waygood in gh-101827.)\ntyping\u00b6\nisinstance()\nchecks againstruntime-checkable protocols\nnow useinspect.getattr_static()\nrather thanhasattr()\nto lookup whether attributes exist. This means that descriptors and__getattr__()\nmethods are no longer unexpectedly evaluated duringisinstance()\nchecks against runtime-checkable protocols. 
However, it may also mean that some objects which used to be considered instances of a runtime-checkable protocol may no longer be considered instances of that protocol on Python 3.12+, and vice versa. Most users are unlikely to be affected by this change. (Contributed by Alex Waygood in gh-102433.)The members of a runtime-checkable protocol are now considered \u201cfrozen\u201d at runtime as soon as the class has been created. Monkey-patching attributes onto a runtime-checkable protocol will still work, but will have no impact on\nisinstance()\nchecks comparing objects to the protocol. For example:>>> from typing import Protocol, runtime_checkable >>> @runtime_checkable ... class HasX(Protocol): ... x = 1 ... >>> class Foo: ... ... >>> f = Foo() >>> isinstance(f, HasX) False >>> f.x = 1 >>> isinstance(f, HasX) True >>> HasX.y = 2 >>> isinstance(f, HasX) # unchanged, even though HasX now also has a \"y\" attribute True\nThis change was made in order to speed up\nisinstance()\nchecks against runtime-checkable protocols.The performance profile of\nisinstance()\nchecks againstruntime-checkable protocols\nhas changed significantly. Mostisinstance()\nchecks against protocols with only a few members should be at least 2x faster than in 3.11, and some may be 20x faster or more. However,isinstance()\nchecks against protocols with many members may be slower than in Python 3.11. (Contributed by Alex Waygood in gh-74690 and gh-103193.)All\ntyping.TypedDict\nandtyping.NamedTuple\nclasses now have the__orig_bases__\nattribute. (Contributed by Adrian Garcia Badaracco in gh-103699.)Add\nfrozen_default\nparameter totyping.dataclass_transform()\n. (Contributed by Erik De Bonte in gh-99957.)\nunicodedata\u00b6\nThe Unicode database has been updated to version 15.0.0. 
(Contributed by Benjamin Peterson in gh-96734).\nunittest\u00b6\nAdd a --durations\ncommand line option, showing the N slowest test cases:\npython3 -m unittest --durations=3 lib.tests.test_threading\n.....\nSlowest test durations\n----------------------------------------------------------------------\n1.210s test_timeout (Lib.test.test_threading.BarrierTests)\n1.003s test_default_timeout (Lib.test.test_threading.BarrierTests)\n0.518s test_timeout (Lib.test.test_threading.EventTests)\n(0.000 durations hidden. Use -v to show these durations.)\n----------------------------------------------------------------------\nRan 158 tests in 9.869s\nOK (skipped=3)\n(Contributed by Giampaolo Rodola in gh-48330)\nuuid\u00b6\nAdd a command-line interface. (Contributed by Adam Chhina in gh-88597.)\nOptimizations\u00b6\nRemove\nwstr\nandwstr_length\nmembers from Unicode objects. It reduces object size by 8 or 16 bytes on 64bit platform. (PEP 623) (Contributed by Inada Naoki in gh-92536.)Add experimental support for using the BOLT binary optimizer in the build process, which improves performance by 1-5%. (Contributed by Kevin Modzelewski in gh-90536 and tuned by Donghee Na in gh-101525)\nSpeed up the regular expression substitution (functions\nre.sub()\nandre.subn()\nand correspondingre.Pattern\nmethods) for replacement strings containing group references by 2\u20133 times. (Contributed by Serhiy Storchaka in gh-91524.)Speed up\nasyncio.Task\ncreation by deferring expensive string formatting. (Contributed by Itamar Oren in gh-103793.)The\ntokenize.tokenize()\nandtokenize.generate_tokens()\nfunctions are up to 64% faster as a side effect of the changes required to cover PEP 701 in thetokenize\nmodule. (Contributed by Marta G\u00f3mez Mac\u00edas and Pablo Galindo in gh-102856.)Speed up\nsuper()\nmethod calls and attribute loads via the newLOAD_SUPER_ATTR\ninstruction. 
(Contributed by Carl Meyer and Vladimir Matveev in gh-103497.)\nCPython bytecode changes\u00b6\nRemove the\nLOAD_METHOD\ninstruction. It has been merged intoLOAD_ATTR\n.LOAD_ATTR\nwill now behave like the oldLOAD_METHOD\ninstruction if the low bit of its oparg is set. (Contributed by Ken Jin in gh-93429.)Remove the\nJUMP_IF_FALSE_OR_POP\nandJUMP_IF_TRUE_OR_POP\ninstructions. (Contributed by Irit Katriel in gh-102859.)Remove the\nPRECALL\ninstruction. (Contributed by Mark Shannon in gh-92925.)Add the\nBINARY_SLICE\nandSTORE_SLICE\ninstructions. (Contributed by Mark Shannon in gh-94163.)Add the\nCALL_INTRINSIC_1\ninstructions. (Contributed by Mark Shannon in gh-99005.)Add the\nCALL_INTRINSIC_2\ninstruction. (Contributed by Irit Katriel in gh-101799.)Add the\nCLEANUP_THROW\ninstruction. (Contributed by Brandt Bucher in gh-90997.)Add the\nEND_SEND\ninstruction. (Contributed by Mark Shannon in gh-103082.)Add the\nLOAD_FAST_AND_CLEAR\ninstruction as part of the implementation of PEP 709. (Contributed by Carl Meyer in gh-101441.)Add the\nLOAD_FAST_CHECK\ninstruction. (Contributed by Dennis Sweeney in gh-93143.)Add the\nLOAD_FROM_DICT_OR_DEREF\n,LOAD_FROM_DICT_OR_GLOBALS\n, andLOAD_LOCALS\nopcodes as part of the implementation of PEP 695. Remove theLOAD_CLASSDEREF\nopcode, which can be replaced withLOAD_LOCALS\nplusLOAD_FROM_DICT_OR_DEREF\n. (Contributed by Jelle Zijlstra in gh-103764.)Add the\nLOAD_SUPER_ATTR\ninstruction. (Contributed by Carl Meyer and Vladimir Matveev in gh-103497.)Add the\nRETURN_CONST\ninstruction. (Contributed by Wenyang Wang in gh-101632.)\nDemos and Tools\u00b6\nRemove the\nTools/demo/\ndirectory which contained old demo scripts. A copy can be found in the old-demos project. (Contributed by Victor Stinner in gh-97681.)Remove outdated example scripts of the\nTools/scripts/\ndirectory. A copy can be found in the old-demos project. 
(Contributed by Victor Stinner in gh-97669.)\nDeprecated\u00b6\nargparse\n: The type, choices, and metavar parameters of\nargparse.BooleanOptionalAction\nare deprecated and will be removed in 3.14. (Contributed by Nikita Sobolev in gh-92248.)\nast\n: The following\nast\nfeatures have been deprecated in documentation since Python 3.8, now cause a\nDeprecationWarning\nto be emitted at runtime when they are accessed or used, and will be removed in Python 3.14:\nast.Num\nast.Str\nast.Bytes\nast.NameConstant\nast.Ellipsis\nUse\nast.Constant\ninstead. (Contributed by Serhiy Storchaka in gh-90953.)\nasyncio\n:\nThe child watcher classes\nasyncio.MultiLoopChildWatcher\n,\nasyncio.FastChildWatcher\n,\nasyncio.AbstractChildWatcher\nand\nasyncio.SafeChildWatcher\nare deprecated and will be removed in Python 3.14. (Contributed by Kumar Aditya in gh-94597.)\nasyncio.set_child_watcher()\n,\nasyncio.get_child_watcher()\n,\nasyncio.AbstractEventLoopPolicy.set_child_watcher()\nand\nasyncio.AbstractEventLoopPolicy.get_child_watcher()\nare deprecated and will be removed in Python 3.14. (Contributed by Kumar Aditya in gh-94597.)\nThe\nget_event_loop()\nmethod of the default event loop policy now emits a\nDeprecationWarning\nif there is no current event loop set and it decides to create one. (Contributed by Serhiy Storchaka and Guido van Rossum in gh-100160.)\ncalendar\n:\ncalendar.January\nand\ncalendar.February\nconstants are deprecated and replaced by\ncalendar.JANUARY\nand\ncalendar.FEBRUARY\n. (Contributed by Prince Roshan in gh-103636.)\ncollections.abc\n: Deprecated\ncollections.abc.ByteString\n.\nUse\nisinstance(obj, collections.abc.Buffer)\nto test if\nobj\nimplements the buffer protocol at runtime. For use in type annotations, either use\nBuffer\nor a union that explicitly specifies the types your code supports (e.g.,\nbytes | bytearray | memoryview\n).\nByteString\nwas originally intended to be an abstract class that would serve as a supertype of both\nbytes\nand\nbytearray\n.
However, since the ABC never had any methods, knowing that an object was an instance ofByteString\nnever actually told you anything useful about the object. Other common buffer types such asmemoryview\nwere also never understood as subtypes ofByteString\n(either at runtime or by static type checkers).See PEP 688 for more details. (Contributed by Shantanu Jain in gh-91896.)\ndatetime\n:datetime.datetime\n\u2019sutcnow()\nandutcfromtimestamp()\nare deprecated and will be removed in a future version. Instead, use timezone-aware objects to represent datetimes in UTC: respectively, callnow()\nandfromtimestamp()\nwith the tz parameter set todatetime.UTC\n. (Contributed by Paul Ganssle in gh-103857.)email\n: Deprecate the isdst parameter inemail.utils.localtime()\n. (Contributed by Alan Williams in gh-72346.)importlib.abc\n: Deprecated the following classes, scheduled for removal in Python 3.14:importlib.abc.ResourceReader\nimportlib.abc.Traversable\nimportlib.abc.TraversableResources\nUse\nimportlib.resources.abc\nclasses instead:(Contributed by Jason R. Coombs and Hugo van Kemenade in gh-93963.)\nitertools\n: Deprecate the support for copy, deepcopy, and pickle operations, which is undocumented, inefficient, historically buggy, and inconsistent. This will be removed in 3.14 for a significant reduction in code volume and maintenance burden. (Contributed by Raymond Hettinger in gh-101588.)multiprocessing\n: In Python 3.14, the defaultmultiprocessing\nstart method will change to a safer one on Linux, BSDs, and other non-macOS POSIX platforms where'fork'\nis currently the default (gh-84559). Adding a runtime warning about this was deemed too disruptive as the majority of code is not expected to care. Use theget_context()\norset_start_method()\nAPIs to explicitly specify when your code requires'fork'\n. 
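Selecting a start method explicitly, as recommended above, can be sketched as follows (a minimal example; `"spawn"` is available on every supported platform):

```python
import multiprocessing as mp

# Request a specific start method via a context object instead of relying
# on the platform default (which will change away from 'fork' in 3.14).
ctx = mp.get_context("spawn")
print(ctx.get_start_method())  # spawn
```

Unlike `set_start_method()`, `get_context()` does not touch global state: the returned context exposes the same API as the `multiprocessing` module, so pools and queues created from it all use the chosen method.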
See contexts and start methods.
pkgutil: pkgutil.find_loader() and pkgutil.get_loader() are deprecated and will be removed in Python 3.14; use importlib.util.find_spec() instead. (Contributed by Nikita Sobolev in gh-97850.)
pty: The module has two undocumented functions, master_open() and slave_open(), that have been deprecated since Python 2 but only gained a proper DeprecationWarning in 3.12. They will be removed in 3.14. (Contributed by Soumendra Ganguly and Gregory P. Smith in gh-85984.)
os:
The st_ctime fields returned by os.stat() and os.lstat() on Windows are deprecated. In a future release, they will contain the last metadata change time, consistent with other platforms. For now, they still contain the creation time, which is also available in the new st_birthtime field. (Contributed by Steve Dower in gh-99726.)
On POSIX platforms, os.fork() can now raise a DeprecationWarning when it can detect being called from a multithreaded process, since doing so has always been fundamentally incompatible with the POSIX platform, even if such code appeared to work. We added the warning to raise awareness, as issues encountered by code doing this are becoming more frequent. See the os.fork() documentation for more details, along with this discussion on fork being incompatible with threads, for why we're now surfacing this longstanding platform compatibility problem to developers.
When this warning appears due to usage of multiprocessing or concurrent.futures, the fix is to use a different multiprocessing start method such as "spawn" or "forkserver".
shutil: The onerror argument of shutil.rmtree() is deprecated; use onexc instead. (Contributed by Irit Katriel in gh-102828.)
sqlite3: The default adapters and converters are now deprecated. Instead, use the Adapter and converter recipes and tailor them to your needs. (Contributed by Erlend E.
Aasland in gh-90016.)\nIn\nexecute()\n,DeprecationWarning\nis now emitted when named placeholders are used together with parameters supplied as a sequence instead of as adict\n. Starting from Python 3.14, using named placeholders with parameters supplied as a sequence will raise aProgrammingError\n. (Contributed by Erlend E. Aasland in gh-101698.)\nsys\n: Thesys.last_type\n,sys.last_value\nandsys.last_traceback\nfields are deprecated. Usesys.last_exc\ninstead. (Contributed by Irit Katriel in gh-102778.)tarfile\n: Extracting tar archives without specifying filter is deprecated until Python 3.14, when'data'\nfilter will become the default. See Extraction filters for details.-\ntyping.Hashable\nandtyping.Sized\n, aliases forcollections.abc.Hashable\nandcollections.abc.Sized\nrespectively, are deprecated. (gh-94309.)typing.ByteString\n, deprecated since Python 3.9, now causes aDeprecationWarning\nto be emitted when it is used. (Contributed by Alex Waygood in gh-91896.)\nxml.etree.ElementTree\n: The module now emitsDeprecationWarning\nwhen testing the truth value of anxml.etree.ElementTree.Element\n. Before, the Python implementation emittedFutureWarning\n, and the C implementation emitted nothing. (Contributed by Jacob Walls in gh-83122.)The 3-arg signatures (type, value, traceback) of\ncoroutine throw()\n,generator throw()\nandasync generator throw()\nare deprecated and may be removed in a future version of Python. Use the single-arg versions of these functions instead. (Contributed by Ofey Chan in gh-89874.)DeprecationWarning\nis now raised when__package__\non a module differs from__spec__.parent\n(previously it wasImportWarning\n). (Contributed by Brett Cannon in gh-65961.)Setting\n__package__\nor__cached__\non a module is deprecated, and will cease to be set or taken into consideration by the import system in Python 3.14. (Contributed by Brett Cannon in gh-65961.)The bitwise inversion operator (\n~\n) on bool is deprecated. It will throw an error in Python 3.16. 
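A minimal illustration of the two replacement patterns for the deprecated inversion:

```python
x = True

# Logical negation: what most code that wrote ~x actually wanted.
print(not x)    # False

# Bitwise inversion of the underlying int, for the rare cases that need it.
print(~int(x))  # -2
```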
Usenot\nfor logical negation of bools instead. In the rare case that you really need the bitwise inversion of the underlyingint\n, convert to int explicitly:~int(x)\n. (Contributed by Tim Hoffmann in gh-103487.)Accessing\nco_lnotab\non code objects was deprecated in Python 3.10 via PEP 626, but it only got a properDeprecationWarning\nin 3.12. May be removed in 3.15. (Contributed by Nikita Sobolev in gh-101866.)\nPending removal in Python 3.13\u00b6\nModules (see PEP 594):\naifc\naudioop\ncgi\ncgitb\nchunk\ncrypt\nimghdr\nmailcap\nmsilib\nnis\nnntplib\nossaudiodev\npipes\nsndhdr\nspwd\nsunau\ntelnetlib\nuu\nxdrlib\nOther modules:\nlib2to3\n, and the 2to3 program (gh-84540)\nAPIs:\nconfigparser.LegacyInterpolation\n(gh-90765)locale.resetlocale()\n(gh-90817)turtle.RawTurtle.settiltangle()\n(gh-50096)unittest.findTestCases()\n(gh-50096)unittest.getTestCaseNames()\n(gh-50096)unittest.makeSuite()\n(gh-50096)unittest.TestProgram.usageExit()\n(gh-67048)webbrowser.MacOSX\n(gh-86421)classmethod\ndescriptor chaining (gh-89519)\nPending removal in Python 3.14\u00b6\nargparse\n: The type, choices, and metavar parameters ofargparse.BooleanOptionalAction\nare deprecated and will be removed in 3.14. (Contributed by Nikita Sobolev in gh-92248.)ast\n: The following features have been deprecated in documentation since Python 3.8, now cause aDeprecationWarning\nto be emitted at runtime when they are accessed or used, and will be removed in Python 3.14:ast.Num\nast.Str\nast.Bytes\nast.NameConstant\nast.Ellipsis\nUse\nast.Constant\ninstead. (Contributed by Serhiy Storchaka in gh-90953.)-\nThe child watcher classes\nasyncio.MultiLoopChildWatcher\n,asyncio.FastChildWatcher\n,asyncio.AbstractChildWatcher\nandasyncio.SafeChildWatcher\nare deprecated and will be removed in Python 3.14. 
(Contributed by Kumar Aditya in gh-94597.)asyncio.set_child_watcher()\n,asyncio.get_child_watcher()\n,asyncio.AbstractEventLoopPolicy.set_child_watcher()\nandasyncio.AbstractEventLoopPolicy.get_child_watcher()\nare deprecated and will be removed in Python 3.14. (Contributed by Kumar Aditya in gh-94597.)The\nget_event_loop()\nmethod of the default event loop policy now emits aDeprecationWarning\nif there is no current event loop set and it decides to create one. (Contributed by Serhiy Storchaka and Guido van Rossum in gh-100160.)\nemail\n: Deprecated the isdst parameter inemail.utils.localtime()\n. (Contributed by Alan Williams in gh-72346.)importlib.abc\ndeprecated classes:importlib.abc.ResourceReader\nimportlib.abc.Traversable\nimportlib.abc.TraversableResources\nUse\nimportlib.resources.abc\nclasses instead:(Contributed by Jason R. Coombs and Hugo van Kemenade in gh-93963.)\nitertools\nhad undocumented, inefficient, historically buggy, and inconsistent support for copy, deepcopy, and pickle operations. This will be removed in 3.14 for a significant reduction in code volume and maintenance burden. (Contributed by Raymond Hettinger in gh-101588.)multiprocessing\n: The default start method will change to a safer one on Linux, BSDs, and other non-macOS POSIX platforms where'fork'\nis currently the default (gh-84559). Adding a runtime warning about this was deemed too disruptive as the majority of code is not expected to care. Use theget_context()\norset_start_method()\nAPIs to explicitly specify when your code requires'fork'\n. See Contexts and start methods.pathlib\n:is_relative_to()\nandrelative_to()\n: passing additional arguments is deprecated.pkgutil\n:pkgutil.find_loader()\nandpkgutil.get_loader()\nnow raiseDeprecationWarning\n; useimportlib.util.find_spec()\ninstead. 
(Contributed by Nikita Sobolev in gh-97850.)pty\n:master_open()\n: usepty.openpty()\n.slave_open()\n: usepty.openpty()\n.\n-\nversion\nandversion_info\n.execute()\nandexecutemany()\nif named placeholders are used and parameters is a sequence instead of adict\n.\nurllib\n:urllib.parse.Quoter\nis deprecated: it was not intended to be a public API. (Contributed by Gregory P. Smith in gh-88168.)\nPending removal in Python 3.15\u00b6\nThe import system:\nSetting\n__cached__\non a module while failing to set__spec__.cached\nis deprecated. In Python 3.15,__cached__\nwill cease to be set or take into consideration by the import system or standard library. (gh-97879)Setting\n__package__\non a module while failing to set__spec__.parent\nis deprecated. In Python 3.15,__package__\nwill cease to be set or take into consideration by the import system or standard library. (gh-97879)\n-\nThe undocumented\nctypes.SetPointerType()\nfunction has been deprecated since Python 3.13.\n-\nThe obsolete and rarely used\nCGIHTTPRequestHandler\nhas been deprecated since Python 3.13. No direct replacement exists. Anything is better than CGI to interface a web server with a request handler.The\n--cgi\nflag to the python -m http.server command-line interface has been deprecated since Python 3.13.\n-\nload_module()\nmethod: useexec_module()\ninstead.\n-\nThe\ngetdefaultlocale()\nfunction has been deprecated since Python 3.11. Its removal was originally planned for Python 3.13 (gh-90817), but has been postponed to Python 3.15. Usegetlocale()\n,setlocale()\n, andgetencoding()\ninstead. (Contributed by Hugo van Kemenade in gh-111187.)\n-\nPurePath.is_reserved()\nhas been deprecated since Python 3.13. Useos.path.isreserved()\nto detect reserved paths on Windows.\n-\njava_ver()\nhas been deprecated since Python 3.13. 
This function is only useful for Jython support, has a confusing API, and is largely untested.\n-\nThe check_home argument of\nsysconfig.is_python_build()\nhas been deprecated since Python 3.12.\n-\nRLock()\nwill take no arguments in Python 3.15. Passing any arguments has been deprecated since Python 3.14, as the Python version does not permit any arguments, but the C version allows any number of positional or keyword arguments, ignoring every argument.\n-\ntypes.CodeType\n: Accessingco_lnotab\nwas deprecated in PEP 626 since 3.10 and was planned to be removed in 3.12, but it only got a properDeprecationWarning\nin 3.12. May be removed in 3.15. (Contributed by Nikita Sobolev in gh-101866.)\n-\nThe undocumented keyword argument syntax for creating\nNamedTuple\nclasses (for example,Point = NamedTuple(\"Point\", x=int, y=int)\n) has been deprecated since Python 3.13. Use the class-based syntax or the functional syntax instead.When using the functional syntax of\nTypedDict\ns, failing to pass a value to the fields parameter (TD = TypedDict(\"TD\")\n) or passingNone\n(TD = TypedDict(\"TD\", None)\n) has been deprecated since Python 3.13. Useclass TD(TypedDict): pass\norTD = TypedDict(\"TD\", {})\nto create a TypedDict with zero field.The\ntyping.no_type_check_decorator()\ndecorator function has been deprecated since Python 3.13. After eight years in thetyping\nmodule, it has yet to be supported by any major type checker.\nwave\n:The\ngetmark()\n,setmark()\n, andgetmarkers()\nmethods of theWave_read\nandWave_write\nclasses have been deprecated since Python 3.13.\n-\nload_module()\nhas been deprecated since Python 3.10. Useexec_module()\ninstead. (Contributed by Jiahao Li in gh-125746.)\nPending removal in Python 3.16\u00b6\nThe import system:\nSetting\n__loader__\non a module while failing to set__spec__.loader\nis deprecated. 
In Python 3.16, __loader__ will cease to be set or taken into consideration by the import system or the standard library.
The 'u' format code (wchar_t) has been deprecated in documentation since Python 3.3 and at runtime since Python 3.13. Use the 'w' format code (Py_UCS4) for Unicode characters instead.
asyncio: asyncio.iscoroutinefunction() is deprecated and will be removed in Python 3.16; use inspect.iscoroutinefunction() instead. (Contributed by Jiahao Li and Kumar Aditya in gh-122875.)
The asyncio policy system is deprecated and will be removed in Python 3.16. In particular, the following classes and functions are deprecated:
Users should use asyncio.run() or asyncio.Runner with loop_factory to use the desired event loop implementation.
For example, to use asyncio.SelectorEventLoop on Windows:

import asyncio

async def main():
    ...

asyncio.run(main(), loop_factory=asyncio.SelectorEventLoop)

(Contributed by Kumar Aditya in gh-127949.)
Bitwise inversion on boolean types, ~True or ~False, has been deprecated since Python 3.12, as it produces surprising and unintuitive results (-2 and -1). Use not x instead for the logical negation of a Boolean. In the rare case that you need the bitwise inversion of the underlying integer, convert to int explicitly (~int(x)).
functools: Calling the Python implementation of functools.reduce() with function or sequence as keyword arguments has been deprecated since Python 3.14.
logging: Support for custom logging handlers with the strm argument is deprecated and scheduled for removal in Python 3.16. Define handlers with the stream argument instead. (Contributed by Mariusz Felisiak in gh-115032.)
mimetypes: Valid extensions start with a '.' or are empty for mimetypes.MimeTypes.add_type(). Undotted extensions are deprecated and will raise a ValueError in Python 3.16. (Contributed by Hugo van Kemenade in gh-75223.)
shutil: The ExecError exception has been deprecated since Python 3.14.
It has not been used by any function inshutil\nsince Python 3.4, and is now an alias ofRuntimeError\n.\n-\nThe\nClass.get_methods\nmethod has been deprecated since Python 3.14.\nsys\n:The\n_enablelegacywindowsfsencoding()\nfunction has been deprecated since Python 3.13. Use thePYTHONLEGACYWINDOWSFSENCODING\nenvironment variable instead.\n-\nThe\nsysconfig.expand_makefile_vars()\nfunction has been deprecated since Python 3.14. Use thevars\nargument ofsysconfig.get_paths()\ninstead.\n-\nThe undocumented and unused\nTarFile.tarfile\nattribute has been deprecated since Python 3.13.\nPending removal in Python 3.17\u00b6\n-\ncollections.abc.ByteString\nis scheduled for removal in Python 3.17.Use\nisinstance(obj, collections.abc.Buffer)\nto test ifobj\nimplements the buffer protocol at runtime. For use in type annotations, either useBuffer\nor a union that explicitly specifies the types your code supports (e.g.,bytes | bytearray | memoryview\n).ByteString\nwas originally intended to be an abstract class that would serve as a supertype of bothbytes\nandbytearray\n. However, since the ABC never had any methods, knowing that an object was an instance ofByteString\nnever actually told you anything useful about the object. Other common buffer types such asmemoryview\nwere also never understood as subtypes ofByteString\n(either at runtime or by static type checkers).See PEP 688 for more details. (Contributed by Shantanu Jain in gh-91896.)\n-\nBefore Python 3.14, old-style unions were implemented using the private class\ntyping._UnionGenericAlias\n. This class is no longer needed for the implementation, but it has been retained for backward compatibility, with removal scheduled for Python 3.17. 
Users should use documented introspection helpers liketyping.get_origin()\nandtyping.get_args()\ninstead of relying on private implementation details.typing.ByteString\n, deprecated since Python 3.9, is scheduled for removal in Python 3.17.Use\nisinstance(obj, collections.abc.Buffer)\nto test ifobj\nimplements the buffer protocol at runtime. For use in type annotations, either useBuffer\nor a union that explicitly specifies the types your code supports (e.g.,bytes | bytearray | memoryview\n).ByteString\nwas originally intended to be an abstract class that would serve as a supertype of bothbytes\nandbytearray\n. However, since the ABC never had any methods, knowing that an object was an instance ofByteString\nnever actually told you anything useful about the object. Other common buffer types such asmemoryview\nwere also never understood as subtypes ofByteString\n(either at runtime or by static type checkers).See PEP 688 for more details. (Contributed by Shantanu Jain in gh-91896.)\nPending removal in future versions\u00b6\nThe following APIs will be removed in the future, although there is currently no date scheduled for their removal.\n-\nNesting argument groups and nesting mutually exclusive groups are deprecated.\nPassing the undocumented keyword argument prefix_chars to\nadd_argument_group()\nis now deprecated.The\nargparse.FileType\ntype converter is deprecated.\n-\nGenerators:\nthrow(type, exc, tb)\nandathrow(type, exc, tb)\nsignature is deprecated: usethrow(exc)\nandathrow(exc)\ninstead, the single argument signature.Currently Python accepts numeric literals immediately followed by keywords, for example\n0in x\n,1or x\n,0if 1else 2\n. It allows confusing and ambiguous expressions like[0x1for x in y]\n(which can be interpreted as[0x1 for x in y]\nor[0x1f or x in y]\n). A syntax warning is raised if the numeric literal is immediately followed by one of keywordsand\n,else\n,for\n,if\n,in\n,is\nandor\n. In a future release it will be changed to a syntax error. 
(gh-87999)Support for\n__index__()\nand__int__()\nmethod returning non-int type: these methods will be required to return an instance of a strict subclass ofint\n.Support for\n__float__()\nmethod returning a strict subclass offloat\n: these methods will be required to return an instance offloat\n.Support for\n__complex__()\nmethod returning a strict subclass ofcomplex\n: these methods will be required to return an instance ofcomplex\n.Passing a complex number as the real or imag argument in the\ncomplex()\nconstructor is now deprecated; it should only be passed as a single positional argument. (Contributed by Serhiy Storchaka in gh-109218.)\ncalendar\n:calendar.January\nandcalendar.February\nconstants are deprecated and replaced bycalendar.JANUARY\nandcalendar.FEBRUARY\n. (Contributed by Prince Roshan in gh-103636.)codecs\n: useopen()\ninstead ofcodecs.open()\n. (gh-133038)codeobject.co_lnotab\n: use thecodeobject.co_lines()\nmethod instead.-\nutcnow()\n: usedatetime.datetime.now(tz=datetime.UTC)\n.utcfromtimestamp()\n: usedatetime.datetime.fromtimestamp(timestamp, tz=datetime.UTC)\n.\ngettext\n: Plural value must be an integer.-\ncache_from_source()\ndebug_override parameter is deprecated: use the optimization parameter instead.\n-\nEntryPoints\ntuple interface.Implicit\nNone\non return values.\nlogging\n: thewarn()\nmethod has been deprecated since Python 3.3, usewarning()\ninstead.mailbox\n: Use of StringIO input and text mode is deprecated, use BytesIO and binary mode instead.os\n: Callingos.register_at_fork()\nin multi-threaded process.pydoc.ErrorDuringImport\n: A tuple value for exc_info parameter is deprecated, use an exception instance.re\n: More strict rules are now applied for numerical group references and group names in regular expressions. Only sequence of ASCII digits is now accepted as a numerical reference. The group name in bytes patterns and replacement strings can now only contain ASCII letters and digits and underscore. 
(Contributed by Serhiy Storchaka in gh-91760.)sre_compile\n,sre_constants\nandsre_parse\nmodules.shutil\n:rmtree()\n\u2019s onerror parameter is deprecated in Python 3.12; use the onexc parameter instead.ssl\noptions and protocols:ssl.SSLContext\nwithout protocol argument is deprecated.ssl.SSLContext\n:set_npn_protocols()\nandselected_npn_protocol()\nare deprecated: use ALPN instead.ssl.OP_NO_SSL*\noptionsssl.OP_NO_TLS*\noptionsssl.PROTOCOL_SSLv3\nssl.PROTOCOL_TLS\nssl.PROTOCOL_TLSv1\nssl.PROTOCOL_TLSv1_1\nssl.PROTOCOL_TLSv1_2\nssl.TLSVersion.SSLv3\nssl.TLSVersion.TLSv1\nssl.TLSVersion.TLSv1_1\nthreading\nmethods:threading.Condition.notifyAll()\n: usenotify_all()\n.threading.Event.isSet()\n: useis_set()\n.threading.Thread.isDaemon()\n,threading.Thread.setDaemon()\n: usethreading.Thread.daemon\nattribute.threading.Thread.getName()\n,threading.Thread.setName()\n: usethreading.Thread.name\nattribute.threading.currentThread()\n: usethreading.current_thread()\n.threading.activeCount()\n: usethreading.active_count()\n.\nThe internal class\ntyping._UnionGenericAlias\nis no longer used to implementtyping.Union\n. To preserve compatibility with users using this private class, a compatibility shim will be provided until at least Python 3.17. (Contributed by Jelle Zijlstra in gh-105499.)unittest.IsolatedAsyncioTestCase\n: it is deprecated to return a value that is notNone\nfrom a test case.urllib.parse\ndeprecated functions:urlparse()\ninsteadsplitattr()\nsplithost()\nsplitnport()\nsplitpasswd()\nsplitport()\nsplitquery()\nsplittag()\nsplittype()\nsplituser()\nsplitvalue()\nto_bytes()\nwsgiref\n:SimpleHandler.stdout.write()\nshould not do partial writes.xml.etree.ElementTree\n: Testing the truth value of anElement\nis deprecated. In a future release it will always returnTrue\n. 
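For example, the explicit tests that replace a truth-value check on an element (a minimal sketch):

```python
import xml.etree.ElementTree as ET

root = ET.fromstring("<root><child/></root>")
elem = root.find("child")

# Deprecated: `if elem:` tests the truth value of the Element.
# Explicit replacements:
print(elem is not None)  # True -- "was the element found?"
print(len(elem))         # 0    -- number of child elements
```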
Prefer explicitlen(elem)\norelem is not None\ntests instead.sys._clear_type_cache()\nis deprecated: usesys._clear_internal_caches()\ninstead.\nRemoved\u00b6\nasynchat and asyncore\u00b6\nconfigparser\u00b6\nSeveral names deprecated in the\nconfigparser\nway back in 3.2 have been removed per gh-89336:configparser.ParsingError\nno longer has afilename\nattribute or argument. Use thesource\nattribute and argument instead.configparser\nno longer has aSafeConfigParser\nclass. Use the shorterConfigParser\nname instead.configparser.ConfigParser\nno longer has areadfp\nmethod. Useread_file()\ninstead.\ndistutils\u00b6\nensurepip\u00b6\nRemove the bundled setuptools wheel from\nensurepip\n, and stop installing setuptools in environments created byvenv\n.pip (>= 22.1)\ndoes not require setuptools to be installed in the environment.setuptools\n-based (anddistutils\n-based) packages can still be used withpip install\n, since pip will providesetuptools\nin the build environment it uses for building a package.easy_install\n,pkg_resources\n,setuptools\nanddistutils\nare no longer provided by default in environments created withvenv\nor bootstrapped withensurepip\n, since they are part of thesetuptools\npackage. For projects relying on these at runtime, thesetuptools\nproject should be declared as a dependency and installed separately (typically, using pip).(Contributed by Pradyun Gedam in gh-95299.)\nenum\u00b6\nftplib\u00b6\ngzip\u00b6\nRemove the\nfilename\nattribute ofgzip\n\u2019sgzip.GzipFile\n, deprecated since Python 2.6, use thename\nattribute instead. In write mode, thefilename\nattribute added'.gz'\nfile extension if it was not present. (Contributed by Victor Stinner in gh-94196.)\nhashlib\u00b6\nRemove the pure Python implementation of\nhashlib\n\u2019shashlib.pbkdf2_hmac()\n, deprecated in Python 3.10. Python 3.10 and newer requires OpenSSL 1.1.1 (PEP 644): this OpenSSL version provides a C implementation ofpbkdf2_hmac()\nwhich is faster. 
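The call itself is unchanged by the removal of the pure Python fallback; a small sketch (the password, salt, and iteration count are illustrative only):

```python
import hashlib

# Derive a key with the OpenSSL-backed pbkdf2_hmac().
dk = hashlib.pbkdf2_hmac("sha256", b"password", b"salt", 100_000)
print(len(dk))  # 32 (SHA-256 digest size, the default dklen)
```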
(Contributed by Victor Stinner in gh-94199.)
importlib¶
Many previously deprecated cleanups in importlib have now been completed:
References to, and support for, module_repr() have been removed. (Contributed by Barry Warsaw in gh-97850.)
importlib.util.set_package, importlib.util.set_loader and importlib.util.module_for_loader have all been removed. (Contributed by Brett Cannon and Nikita Sobolev in gh-65961 and gh-97850.)
Support for the find_loader() and find_module() APIs has been removed. (Contributed by Barry Warsaw in gh-98040.)
importlib.abc.Finder, pkgutil.ImpImporter, and pkgutil.ImpLoader have been removed. (Contributed by Barry Warsaw in gh-98040.)
imp¶
The imp module has been removed. (Contributed by Barry Warsaw in gh-98040.)
To migrate, consult the following correspondence table:
imp | importlib
imp.NullImporter | Insert None into sys.path_importer_cache
imp.cache_from_source() | importlib.util.cache_from_source()
imp.find_module() | importlib.util.find_spec()
imp.get_magic() | importlib.util.MAGIC_NUMBER
imp.get_suffixes() | importlib.machinery.SOURCE_SUFFIXES, importlib.machinery.EXTENSION_SUFFIXES, and importlib.machinery.BYTECODE_SUFFIXES
imp.get_tag() | sys.implementation.cache_tag
imp.load_module() | importlib.import_module()
imp.new_module(name) | types.ModuleType(name)
imp.reload() | importlib.reload()
imp.source_from_cache() | importlib.util.source_from_cache()
imp.load_source() | See below
Replace imp.load_source() with:

import importlib.machinery
import importlib.util

def load_source(modname, filename):
    loader = importlib.machinery.SourceFileLoader(modname, filename)
    spec = importlib.util.spec_from_file_location(modname, filename, loader=loader)
    module = importlib.util.module_from_spec(spec)
    # The module is always executed and not cached in sys.modules.
    # Uncomment the following line to cache the module.
    # sys.modules[module.__name__] = module
    loader.exec_module(module)
    return module

Remove imp functions and attributes with no replacements:
Undocumented functions:
imp.init_builtin()
imp.load_compiled()
imp.load_dynamic()
imp.load_package()
imp.lock_held(), imp.acquire_lock(), imp.release_lock(): the locking scheme has changed in Python 3.3 to per-module locks.
imp.find_module() constants: SEARCH_ERROR, PY_SOURCE, PY_COMPILED, C_EXTENSION, PY_RESOURCE, PKG_DIRECTORY, C_BUILTIN, PY_FROZEN, PY_CODERESOURCE, IMP_HOOK.
io¶
locale¶
Remove locale's locale.format() function, deprecated in Python 3.7: use locale.format_string() instead. (Contributed by Victor Stinner in gh-94226.)
smtpd¶
sqlite3¶
The following undocumented sqlite3 features, deprecated in Python 3.10, are now removed:
sqlite3.enable_shared_cache()
sqlite3.OptimizedUnicode
If a shared cache must be used, open the database in URI mode using the cache=shared query parameter.
The sqlite3.OptimizedUnicode text factory has been an alias for str since Python 3.3. Code that previously set the text factory to OptimizedUnicode can either use str explicitly, or rely on the default value, which is also str.
(Contributed by Erlend E. Aasland in gh-92548.)
ssl¶
Remove ssl's ssl.RAND_pseudo_bytes() function, deprecated in Python 3.6: use os.urandom() or ssl.RAND_bytes() instead. (Contributed by Victor Stinner in gh-94199.)
Remove the ssl.match_hostname() function, deprecated in Python 3.7. OpenSSL performs hostname matching since Python 3.7, so Python no longer uses the ssl.match_hostname() function. (Contributed by Victor Stinner in gh-94199.)
Remove the ssl.wrap_socket() function, deprecated in Python 3.7: instead, create an ssl.SSLContext object and call its ssl.SSLContext.wrap_socket method. Any package that still uses ssl.wrap_socket() is broken and insecure. The function neither sends a SNI TLS extension nor validates the server hostname.
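The context-based replacement can be sketched as follows (the host name is illustrative; the connection itself is left commented out):

```python
import ssl

# Modern replacement for the removed ssl.wrap_socket(): build a context,
# which validates certificates and hostnames and sends SNI by default.
context = ssl.create_default_context()
print(context.check_hostname)                    # True
print(context.verify_mode == ssl.CERT_REQUIRED)  # True

# To wrap a socket:
# import socket
# sock = socket.create_connection(("example.org", 443))
# tls = context.wrap_socket(sock, server_hostname="example.org")
```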
Code is subject to CWE-295 (Improper Certificate Validation). (Contributed by Victor Stinner in gh-94199.)
unittest¶
Remove many long-deprecated unittest features:
A number of TestCase method aliases:
Deprecated alias | Method name | Deprecated in
failUnless | assertTrue | 3.1
failIf | assertFalse | 3.1
failUnlessEqual | assertEqual | 3.1
failIfEqual | assertNotEqual | 3.1
failUnlessAlmostEqual | assertAlmostEqual | 3.1
failIfAlmostEqual | assertNotAlmostEqual | 3.1
failUnlessRaises | assertRaises | 3.1
assert_ | assertTrue | 3.2
assertEquals | assertEqual | 3.2
assertNotEquals | assertNotEqual | 3.2
assertAlmostEquals | assertAlmostEqual | 3.2
assertNotAlmostEquals | assertNotAlmostEqual | 3.2
assertRegexpMatches | assertRegex | 3.2
assertRaisesRegexp | assertRaisesRegex | 3.2
assertNotRegexpMatches | assertNotRegex | 3.5
You can use https://github.com/isidentical/teyit to automatically modernise your unit tests.
The undocumented and broken TestCase method assertDictContainsSubset (deprecated in Python 3.2).
The undocumented TestLoader.loadTestsFromModule parameter use_load_tests (deprecated and ignored since Python 3.5).
An alias of the TextTestResult class: _TextTestResult (deprecated in Python 3.2).
(Contributed by Serhiy Storchaka in gh-89325.)
webbrowser¶
Remove support for obsolete browsers from webbrowser. The removed browsers include Grail, Mosaic, Netscape, Galeon, Skipstone, Iceape, Firebird, and Firefox versions 35 and below (gh-102871).
xml.etree.ElementTree¶
Remove the ElementTree.Element.copy() method of the pure Python implementation, deprecated in Python 3.10; use the copy.copy() function instead. The C implementation of xml.etree.ElementTree has no copy() method, only a __copy__() method. (Contributed by Victor Stinner in gh-94383.)
zipimport¶
Others¶
Remove the suspicious rule from the documentation Makefile and Doc/tools/rstlint.py, both in favor of sphinx-lint. (Contributed by Julien Palard in gh-98179.)
Remove the keyfile and certfile parameters from the ftplib, imaplib, poplib and smtplib modules, and the key_file, cert_file and check_hostname parameters from the http.client module, all deprecated since Python 3.6.
Use the context parameter (ssl_context inimaplib\n) instead. (Contributed by Victor Stinner in gh-94172.)Remove\nJython\ncompatibility hacks from several stdlib modules and tests. (Contributed by Nikita Sobolev in gh-99482.)Remove\n_use_broken_old_ctypes_structure_semantics_\nflag fromctypes\nmodule. (Contributed by Nikita Sobolev in gh-99285.)\nPorting to Python 3.12\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code.\nChanges in the Python API\u00b6\nMore strict rules are now applied for numerical group references and group names in regular expressions. Only sequence of ASCII digits is now accepted as a numerical reference. The group name in bytes patterns and replacement strings can now only contain ASCII letters and digits and underscore. (Contributed by Serhiy Storchaka in gh-91760.)\nRemove\nrandrange()\nfunctionality deprecated since Python 3.10. Formerly,randrange(10.0)\nlosslessly converted torandrange(10)\n. Now, it raises aTypeError\n. Also, the exception raised for non-integer values such asrandrange(10.5)\norrandrange('10')\nhas been changed fromValueError\ntoTypeError\n. This also prevents bugs whererandrange(1e25)\nwould silently select from a larger range thanrandrange(10**25)\n. (Originally suggested by Serhiy Storchaka gh-86388.)argparse.ArgumentParser\nchanged encoding and error handler for reading arguments from file (e.g.fromfile_prefix_chars\noption) from default text encoding (e.g.locale.getpreferredencoding(False)\n) to filesystem encoding and error handler. Argument files should be encoded in UTF-8 instead of ANSI Codepage on Windows.Remove the\nasyncore\n-basedsmtpd\nmodule deprecated in Python 3.4.7 and 3.5.4. A recommended replacement is theasyncio\n-based aiosmtpd PyPI module.shlex.split()\n: PassingNone\nfor s argument now raises an exception, rather than readingsys.stdin\n. The feature was deprecated in Python 3.9. 
(Contributed by Victor Stinner in gh-94352.)The\nos\nmodule no longer accepts bytes-like paths, likebytearray\nandmemoryview\ntypes: only the exactbytes\ntype is accepted for bytes strings. (Contributed by Victor Stinner in gh-98393.)syslog.openlog()\nandsyslog.closelog()\nnow fail if used in subinterpreters.syslog.syslog()\nmay still be used in subinterpreters, but now only ifsyslog.openlog()\nhas already been called in the main interpreter. These new restrictions do not apply to the main interpreter, so only a very small set of users might be affected. This change helps with interpreter isolation. Furthermore,syslog\nis a wrapper around process-global resources, which are best managed from the main interpreter. (Contributed by Donghee Na in gh-99127.)The undocumented locking behavior of\ncached_property()\nis removed, because it locked across all instances of the class, leading to high lock contention. This means that a cached property getter function could now run more than once for a single instance, if two threads race. For most simple cached properties (e.g. those that are idempotent and simply calculate a value based on other attributes of the instance) this will be fine. If synchronization is needed, implement locking within the cached property getter function or around multi-threaded access points.sys._current_exceptions()\nnow returns a mapping from thread-id to an exception instance, rather than to a(typ, exc, tb)\ntuple. (Contributed by Irit Katriel in gh-103176.)When extracting tar files using\ntarfile\norshutil.unpack_archive()\n, pass the filter argument to limit features that may be surprising or dangerous. See Extraction filters for details.The output of the\ntokenize.tokenize()\nandtokenize.generate_tokens()\nfunctions is now changed due to the changes introduced in PEP 701. 
This means that STRING tokens are not emitted any more for f-strings and the tokens described in PEP 701 are now produced instead: FSTRING_START, FSTRING_MIDDLE and FSTRING_END are now emitted for f-string \u201cstring\u201d parts in addition to the appropriate tokens for the tokenization in the expression components. For example, for the f-string f\"start {1+1} end\" the old version of the tokenizer emitted:\n1,0-1,18: STRING 'f\"start {1+1} end\"'\nwhile the new version emits:\n1,0-1,2: FSTRING_START 'f\"'\n1,2-1,8: FSTRING_MIDDLE 'start '\n1,8-1,9: OP '{'\n1,9-1,10: NUMBER '1'\n1,10-1,11: OP '+'\n1,11-1,12: NUMBER '1'\n1,12-1,13: OP '}'\n1,13-1,17: FSTRING_MIDDLE ' end'\n1,17-1,18: FSTRING_END '\"'\nAdditionally, there may be some minor behavioral changes as a consequence of the changes required to support PEP 701. Some of these changes include:\nThe type attribute of the tokens emitted when tokenizing some invalid Python characters such as ! has changed from ERRORTOKEN to OP.\nIncomplete single-line strings now also raise tokenize.TokenError as incomplete multiline strings do.\nSome incomplete or invalid Python code now raises tokenize.TokenError instead of returning arbitrary ERRORTOKEN tokens when tokenizing it.\nMixing tabs and spaces as indentation in the same file is not supported anymore and will raise a TabError.\nThe threading module now expects the _thread module to have an _is_main_interpreter attribute. It is a function with no arguments that returns True if the current interpreter is the main interpreter. Any library or application that provides a custom _thread module should provide _is_main_interpreter(). (See gh-112826.)\nBuild Changes\u00b6\nPython no longer uses setup.py to build shared C extension modules. Build parameters like headers and libraries are detected in the configure script. Extensions are built by Makefile. Most extensions use pkg-config and fall back to manual detection.
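The f-string tokenization change described earlier can be verified with the tokenize module itself; this sketch handles both the old single-STRING behaviour and the PEP 701 tokens:

```python
import io
import sys
import tokenize

source = 'f"start {1+1} end"\n'
# Collect the symbolic names of all tokens produced for the source line.
names = {tokenize.tok_name[tok.type]
         for tok in tokenize.generate_tokens(io.StringIO(source).readline)}

if sys.version_info >= (3, 12):
    # PEP 701: dedicated f-string tokens, plus normal tokens
    # for the embedded expression.
    assert {"FSTRING_START", "FSTRING_MIDDLE", "FSTRING_END"} <= names
    assert "NUMBER" in names and "OP" in names
else:
    # Older tokenizers emit the whole f-string as a single STRING token.
    assert "STRING" in names
```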
(Contributed by Christian Heimes in gh-93939.)va_start()\nwith two parameters, likeva_start(args, format),\nis now required to build Python.va_start()\nis no longer called with a single parameter. (Contributed by Kumar Aditya in gh-93207.)CPython now uses the ThinLTO option as the default link time optimization policy if the Clang compiler accepts the flag. (Contributed by Donghee Na in gh-89536.)\nAdd\nCOMPILEALL_OPTS\nvariable inMakefile\nto overridecompileall\noptions (default:-j0\n) inmake install\n. Also merged the 3compileall\ncommands into a single command to build .pyc files for all optimization levels (0, 1, 2) at once. (Contributed by Victor Stinner in gh-99289.)Add platform triplets for 64-bit LoongArch:\nloongarch64-linux-gnusf\nloongarch64-linux-gnuf32\nloongarch64-linux-gnu\n(Contributed by Zhang Na in gh-90656.)\nPYTHON_FOR_REGEN\nnow require Python 3.10 or newer.Autoconf 2.71 and aclocal 1.16.4 is now required to regenerate\nconfigure\n. (Contributed by Christian Heimes in gh-89886.)Windows builds and macOS installers from python.org now use OpenSSL 3.0.\nC API Changes\u00b6\nNew Features\u00b6\nPEP 697: Introduce the Unstable C API tier, intended for low-level tools like debuggers and JIT compilers. This API may change in each minor release of CPython without deprecation warnings. 
Its contents are marked by the\nPyUnstable_\nprefix in names.Code object constructors:\nPyUnstable_Code_New()\n(renamed fromPyCode_New\n)PyUnstable_Code_NewWithPosOnlyArgs()\n(renamed fromPyCode_NewWithPosOnlyArgs\n)\nExtra storage for code objects (PEP 523):\nPyUnstable_Eval_RequestCodeExtraIndex()\n(renamed from_PyEval_RequestCodeExtraIndex\n)PyUnstable_Code_GetExtra()\n(renamed from_PyCode_GetExtra\n)PyUnstable_Code_SetExtra()\n(renamed from_PyCode_SetExtra\n)\nThe original names will continue to be available until the respective API changes.\n(Contributed by Petr Viktorin in gh-101101.)\nPEP 697: Add an API for extending types whose instance memory layout is opaque:\nPyType_Spec.basicsize\ncan be zero or negative to specify inheriting or extending the base class size.PyObject_GetTypeData()\nandPyType_GetTypeDataSize()\nadded to allow access to subclass-specific instance data.Py_TPFLAGS_ITEMS_AT_END\nandPyObject_GetItemData()\nadded to allow safely extending certain variable-sized types, includingPyType_Type\n.Py_RELATIVE_OFFSET\nadded to allow definingmembers\nin terms of a subclass-specific struct.\n(Contributed by Petr Viktorin in gh-103509.)\nAdd the new limited C API function\nPyType_FromMetaclass()\n, which generalizes the existingPyType_FromModuleAndSpec()\nusing an additional metaclass argument. (Contributed by Wenzel Jakob in gh-93012.)API for creating objects that can be called using the vectorcall protocol was added to the Limited API:\nThe\nPy_TPFLAGS_HAVE_VECTORCALL\nflag is now removed from a class when the class\u2019s__call__()\nmethod is reassigned. This makes vectorcall safe to use with mutable types (i.e. heap types without the immutable flag,Py_TPFLAGS_IMMUTABLETYPE\n). Mutable types that do not overridetp_call\nnow inherit thePy_TPFLAGS_HAVE_VECTORCALL\nflag. (Contributed by Petr Viktorin in gh-93274.)The\nPy_TPFLAGS_MANAGED_DICT\nandPy_TPFLAGS_MANAGED_WEAKREF\nflags have been added. 
This allows extensions classes to support object__dict__\nand weakrefs with less bookkeeping, using less memory and with faster access.API for performing calls using the vectorcall protocol was added to the Limited API:\nThis means that both the incoming and outgoing ends of the vector call protocol are now available in the Limited API. (Contributed by Wenzel Jakob in gh-98586.)\nAdd two new public functions,\nPyEval_SetProfileAllThreads()\nandPyEval_SetTraceAllThreads()\n, that allow to set tracing and profiling functions in all running threads in addition to the calling one. (Contributed by Pablo Galindo in gh-93503.)Add new function\nPyFunction_SetVectorcall()\nto the C API which sets the vectorcall field of a givenPyFunctionObject\n. (Contributed by Andrew Frost in gh-92257.)The C API now permits registering callbacks via\nPyDict_AddWatcher()\n,PyDict_Watch()\nand related APIs to be called whenever a dictionary is modified. This is intended for use by optimizing interpreters, JIT compilers, or debuggers. (Contributed by Carl Meyer in gh-91052.)Add\nPyType_AddWatcher()\nandPyType_Watch()\nAPI to register callbacks to receive notification on changes to a type. (Contributed by Carl Meyer in gh-91051.)Add\nPyCode_AddWatcher()\nandPyCode_ClearWatcher()\nAPIs to register callbacks to receive notification on creation and destruction of code objects. (Contributed by Itamar Oren in gh-91054.)Add\nPyFrame_GetVar()\nandPyFrame_GetVarString()\nfunctions to get a frame variable by its name. (Contributed by Victor Stinner in gh-91248.)Add\nPyErr_GetRaisedException()\nandPyErr_SetRaisedException()\nfor saving and restoring the current exception. These functions return and accept a single exception object, rather than the triple arguments of the now-deprecatedPyErr_Fetch()\nandPyErr_Restore()\n. This is less error prone and a bit more efficient. 
(Contributed by Mark Shannon in gh-101578.)\nAdd _PyErr_ChainExceptions1, which takes an exception instance, to replace the legacy API _PyErr_ChainExceptions, which is now deprecated. (Contributed by Mark Shannon in gh-101578.)\nAdd PyException_GetArgs() and PyException_SetArgs() as convenience functions for retrieving and modifying the args passed to the exception\u2019s constructor. (Contributed by Mark Shannon in gh-101578.)\nAdd PyErr_DisplayException(), which takes an exception instance, to replace the legacy API PyErr_Display(). (Contributed by Irit Katriel in gh-102755.)\nPEP 683: Introduce Immortal Objects, which allows objects to bypass reference counts, and related changes to the C API:\n_Py_IMMORTAL_REFCNT: The reference count that defines an object as immortal.\n_Py_IsImmortal: Checks if an object has the immortal reference count.\nPyObject_HEAD_INIT: This will now initialize the reference count to _Py_IMMORTAL_REFCNT when used with Py_BUILD_CORE.\nSSTATE_INTERNED_IMMORTAL: An identifier for interned unicode objects that are immortal.\nSSTATE_INTERNED_IMMORTAL_STATIC: An identifier for interned unicode objects that are immortal and static.\nsys.getunicodeinternedsize: This returns the total number of unicode objects that have been interned. This is now needed for refleak.py to correctly track reference counts and allocated blocks.\n(Contributed by Eddie Elizondo in gh-84436.)\nPEP 684: Add the new Py_NewInterpreterFromConfig() function and PyInterpreterConfig, which may be used to create sub-interpreters with their own GILs. (See PEP 684: A Per-Interpreter GIL for more info.) (Contributed by Eric Snow in gh-104110.)\nIn the limited C API version 3.12, the Py_INCREF() and Py_DECREF() functions are now implemented as opaque function calls to hide implementation details. (Contributed by Victor Stinner in gh-105387.)\nPorting to Python 3.12\u00b6\nLegacy Unicode APIs based on the Py_UNICODE* representation have been removed.
Please migrate to APIs based on UTF-8 orwchar_t*\n.Argument parsing functions like\nPyArg_ParseTuple()\ndoesn\u2019t supportPy_UNICODE*\nbased format (e.g.u\n,Z\n) anymore. Please migrate to other formats for Unicode likes\n,z\n,es\n, andU\n.tp_weaklist\nfor all static builtin types is alwaysNULL\n. This is an internal-only field onPyTypeObject\nbut we\u2019re pointing out the change in case someone happens to be accessing the field directly anyway. To avoid breakage, consider using the existing public C-API instead, or, if necessary, the (internal-only)_PyObject_GET_WEAKREFS_LISTPTR()\nmacro.This internal-only\nPyTypeObject.tp_subclasses\nmay now not be a valid object pointer. Its type was changed to void* to reflect this. We mention this in case someone happens to be accessing the internal-only field directly.To get a list of subclasses, call the Python method\n__subclasses__()\n(usingPyObject_CallMethod()\n, for example).Add support of more formatting options (left aligning, octals, uppercase hexadecimals,\nintmax_t\n,ptrdiff_t\n,wchar_t\nC strings, variable width and precision) inPyUnicode_FromFormat()\nandPyUnicode_FromFormatV()\n. (Contributed by Serhiy Storchaka in gh-98836.)An unrecognized format character in\nPyUnicode_FromFormat()\nandPyUnicode_FromFormatV()\nnow sets aSystemError\n. In previous versions it caused all the rest of the format string to be copied as-is to the result string, and any extra arguments discarded. (Contributed by Serhiy Storchaka in gh-95781.)Fix wrong sign placement in\nPyUnicode_FromFormat()\nandPyUnicode_FromFormatV()\n. (Contributed by Philip Georgi in gh-95504.)Extension classes wanting to add a\n__dict__\nor weak reference slot should usePy_TPFLAGS_MANAGED_DICT\nandPy_TPFLAGS_MANAGED_WEAKREF\ninstead oftp_dictoffset\nandtp_weaklistoffset\n, respectively. The use oftp_dictoffset\nandtp_weaklistoffset\nis still supported, but does not fully support multiple inheritance (gh-95589), and performance may be worse. 
Classes declaringPy_TPFLAGS_MANAGED_DICT\nmust call_PyObject_VisitManagedDict()\nand_PyObject_ClearManagedDict()\nto traverse and clear their instance\u2019s dictionaries. To clear weakrefs, callPyObject_ClearWeakRefs()\n, as before.The\nPyUnicode_FSDecoder()\nfunction no longer accepts bytes-like paths, likebytearray\nandmemoryview\ntypes: only the exactbytes\ntype is accepted for bytes strings. (Contributed by Victor Stinner in gh-98393.)The\nPy_CLEAR\n,Py_SETREF\nandPy_XSETREF\nmacros now only evaluate their arguments once. If an argument has side effects, these side effects are no longer duplicated. (Contributed by Victor Stinner in gh-98724.)The interpreter\u2019s error indicator is now always normalized. This means that\nPyErr_SetObject()\n,PyErr_SetString()\nand the other functions that set the error indicator now normalize the exception before storing it. (Contributed by Mark Shannon in gh-101578.)_Py_RefTotal\nis no longer authoritative and only kept around for ABI compatibility. Note that it is an internal global and only available on debug builds. If you happen to be using it then you\u2019ll need to start using_Py_GetGlobalRefTotal()\n.The following functions now select an appropriate metaclass for the newly created type:\nCreating classes whose metaclass overrides\ntp_new\nis deprecated, and in Python 3.14+ it will be disallowed. Note that these functions ignoretp_new\nof the metaclass, possibly allowing incomplete initialization.Note that\nPyType_FromMetaclass()\n(added in Python 3.12) already disallows creating classes whose metaclass overridestp_new\n(__new__()\nin Python).Since\ntp_new\noverrides almost everythingPyType_From*\nfunctions do, the two are incompatible with each other. The existing behavior \u2013 ignoring the metaclass for several steps of type creation \u2013 is unsafe in general, since (meta)classes assume thattp_new\nwas called. There is no simple general workaround. 
One of the following may work for you:If you control the metaclass, avoid using\ntp_new\nin it:If initialization can be skipped, it can be done in\ntp_init\ninstead.If the metaclass doesn\u2019t need to be instantiated from Python, set its\ntp_new\ntoNULL\nusing thePy_TPFLAGS_DISALLOW_INSTANTIATION\nflag. This makes it acceptable forPyType_From*\nfunctions.\nAvoid\nPyType_From*\nfunctions: if you don\u2019t need C-specific features (slots or setting the instance size), create types by calling the metaclass.If you know the\ntp_new\ncan be skipped safely, filter the deprecation warning out usingwarnings.catch_warnings()\nfrom Python.\nPyOS_InputHook\nandPyOS_ReadlineFunctionPointer\nare no longer called in subinterpreters. This is because clients generally rely on process-wide global state (since these callbacks have no way of recovering extension module state).This also avoids situations where extensions may find themselves running in a subinterpreter that they don\u2019t support (or haven\u2019t yet been loaded in). See gh-104668 for more info.\nPyLongObject\nhas had its internals changed for better performance. Although the internals ofPyLongObject\nare private, they are used by some extension modules. The internal fields should no longer be accessed directly, instead the API functions beginningPyLong_...\nshould be used instead. Two new unstable API functions are provided for efficient access to the value ofPyLongObject\ns which fit into a single machine word:Custom allocators, set via\nPyMem_SetAllocator()\n, are now required to be thread-safe, regardless of memory domain. Allocators that don\u2019t have their own state, including \u201chooks\u201d, are not affected. If your custom allocator is not already thread-safe and you need guidance then please create a new GitHub issue and CC@ericsnowcurrently\n.\nDeprecated\u00b6\nIn accordance with PEP 699, the\nma_version_tag\nfield inPyDictObject\nis deprecated for extension modules. 
Accessing this field will generate a compiler warning at compile time. This field will be removed in Python 3.14. (Contributed by Ramvikrams and Kumar Aditya in gh-101193. PEP by Ken Jin.)Deprecate global configuration variable:\nPy_HashRandomizationFlag\n: usePyConfig.use_hash_seed\nandPyConfig.hash_seed\nPy_LegacyWindowsFSEncodingFlag\n: usePyPreConfig.legacy_windows_fs_encoding\nPy_LegacyWindowsStdioFlag\n: usePyConfig.legacy_windows_stdio\nPy_FileSystemDefaultEncoding\n: usePyConfig.filesystem_encoding\nPy_HasFileSystemDefaultEncoding\n: usePyConfig.filesystem_encoding\nPy_FileSystemDefaultEncodeErrors\n: usePyConfig.filesystem_errors\nPy_UTF8Mode\n: usePyPreConfig.utf8_mode\n(seePy_PreInitialize()\n)\nThe\nPy_InitializeFromConfig()\nAPI should be used withPyConfig\ninstead. (Contributed by Victor Stinner in gh-77782.)Creating\nimmutable types\nwith mutable bases is deprecated and will be disabled in Python 3.14. (gh-95388)The\nstructmember.h\nheader is deprecated, though it continues to be available and there are no plans to remove it.Its contents are now available just by including\nPython.h\n, with aPy\nprefix added if it was missing:Type macros like\nPy_T_INT\n,Py_T_DOUBLE\n, etc. (previouslyT_INT\n,T_DOUBLE\n, etc.)The flags\nPy_READONLY\n(previouslyREADONLY\n) andPy_AUDIT_READ\n(previously all uppercase)\nSeveral items are not exposed from\nPython.h\n:T_OBJECT\n(usePy_T_OBJECT_EX\n)T_NONE\n(previously undocumented, and pretty quirky)The macro\nWRITE_RESTRICTED\nwhich does nothing.The macros\nRESTRICTED\nandREAD_RESTRICTED\n, equivalents ofPy_AUDIT_READ\n.In some configurations,\n\nis not included fromPython.h\n. It should be included manually when usingoffsetof()\n.\nThe deprecated header continues to provide its original contents under the original names. 
Your old code can stay unchanged, unless the extra include and non-namespaced macros bother you greatly.\n(Contributed in gh-47146 by Petr Viktorin, based on earlier work by Alexander Belopolsky and Matthias Braun.)\nPyErr_Fetch()\nandPyErr_Restore()\nare deprecated. UsePyErr_GetRaisedException()\nandPyErr_SetRaisedException()\ninstead. (Contributed by Mark Shannon in gh-101578.)PyErr_Display()\nis deprecated. UsePyErr_DisplayException()\ninstead. (Contributed by Irit Katriel in gh-102755)._PyErr_ChainExceptions\nis deprecated. Use_PyErr_ChainExceptions1\ninstead. (Contributed by Irit Katriel in gh-102192.)Using\nPyType_FromSpec()\n,PyType_FromSpecWithBases()\norPyType_FromModuleAndSpec()\nto create a class whose metaclass overridestp_new\nis deprecated. Call the metaclass instead.\nPending removal in Python 3.14\u00b6\nThe\nma_version_tag\nfield inPyDictObject\nfor extension modules (PEP 699; gh-101193).Creating\nimmutable types\nwith mutable bases (gh-95388).\nPending removal in Python 3.15\u00b6\nThe\nPyImport_ImportModuleNoBlock()\n: UsePyImport_ImportModule()\ninstead.PyWeakref_GetObject()\nandPyWeakref_GET_OBJECT()\n: UsePyWeakref_GetRef()\ninstead. 
The pythoncapi-compat project can be used to getPyWeakref_GetRef()\non Python 3.12 and older.Py_UNICODE\ntype and thePy_UNICODE_WIDE\nmacro: Usewchar_t\ninstead.PyUnicode_AsDecodedObject()\n: UsePyCodec_Decode()\ninstead.PyUnicode_AsDecodedUnicode()\n: UsePyCodec_Decode()\ninstead; Note that some codecs (for example, \u201cbase64\u201d) may return a type other thanstr\n, such asbytes\n.PyUnicode_AsEncodedObject()\n: UsePyCodec_Encode()\ninstead.PyUnicode_AsEncodedUnicode()\n: UsePyCodec_Encode()\ninstead; Note that some codecs (for example, \u201cbase64\u201d) may return a type other thanbytes\n, such asstr\n.Python initialization functions, deprecated in Python 3.13:\nPy_GetPath()\n: UsePyConfig_Get(\"module_search_paths\")\n(sys.path\n) instead.Py_GetPrefix()\n: UsePyConfig_Get(\"base_prefix\")\n(sys.base_prefix\n) instead. UsePyConfig_Get(\"prefix\")\n(sys.prefix\n) if virtual environments need to be handled.Py_GetExecPrefix()\n: UsePyConfig_Get(\"base_exec_prefix\")\n(sys.base_exec_prefix\n) instead. 
UsePyConfig_Get(\"exec_prefix\")\n(sys.exec_prefix\n) if virtual environments need to be handled.Py_GetProgramFullPath()\n: UsePyConfig_Get(\"executable\")\n(sys.executable\n) instead.Py_GetProgramName()\n: UsePyConfig_Get(\"executable\")\n(sys.executable\n) instead.Py_GetPythonHome()\n: UsePyConfig_Get(\"home\")\nor thePYTHONHOME\nenvironment variable instead.\nThe pythoncapi-compat project can be used to get\nPyConfig_Get()\non Python 3.13 and older.Functions to configure Python\u2019s initialization, deprecated in Python 3.11:\nPySys_SetArgvEx()\n: SetPyConfig.argv\ninstead.PySys_SetArgv()\n: SetPyConfig.argv\ninstead.Py_SetProgramName()\n: SetPyConfig.program_name\ninstead.Py_SetPythonHome()\n: SetPyConfig.home\ninstead.PySys_ResetWarnOptions()\n: Clearsys.warnoptions\nandwarnings.filters\ninstead.\nThe\nPy_InitializeFromConfig()\nAPI should be used withPyConfig\ninstead.Global configuration variables:\nPy_DebugFlag\n: UsePyConfig.parser_debug\norPyConfig_Get(\"parser_debug\")\ninstead.Py_VerboseFlag\n: UsePyConfig.verbose\norPyConfig_Get(\"verbose\")\ninstead.Py_QuietFlag\n: UsePyConfig.quiet\norPyConfig_Get(\"quiet\")\ninstead.Py_InteractiveFlag\n: UsePyConfig.interactive\norPyConfig_Get(\"interactive\")\ninstead.Py_InspectFlag\n: UsePyConfig.inspect\norPyConfig_Get(\"inspect\")\ninstead.Py_OptimizeFlag\n: UsePyConfig.optimization_level\norPyConfig_Get(\"optimization_level\")\ninstead.Py_NoSiteFlag\n: UsePyConfig.site_import\norPyConfig_Get(\"site_import\")\ninstead.Py_BytesWarningFlag\n: UsePyConfig.bytes_warning\norPyConfig_Get(\"bytes_warning\")\ninstead.Py_FrozenFlag\n: UsePyConfig.pathconfig_warnings\norPyConfig_Get(\"pathconfig_warnings\")\ninstead.Py_IgnoreEnvironmentFlag\n: UsePyConfig.use_environment\norPyConfig_Get(\"use_environment\")\ninstead.Py_DontWriteBytecodeFlag\n: UsePyConfig.write_bytecode\norPyConfig_Get(\"write_bytecode\")\ninstead.Py_NoUserSiteDirectory\n: 
UsePyConfig.user_site_directory\norPyConfig_Get(\"user_site_directory\")\ninstead.Py_UnbufferedStdioFlag\n: UsePyConfig.buffered_stdio\norPyConfig_Get(\"buffered_stdio\")\ninstead.Py_HashRandomizationFlag\n: UsePyConfig.use_hash_seed\nandPyConfig.hash_seed\norPyConfig_Get(\"hash_seed\")\ninstead.Py_IsolatedFlag\n: UsePyConfig.isolated\norPyConfig_Get(\"isolated\")\ninstead.Py_LegacyWindowsFSEncodingFlag\n: UsePyPreConfig.legacy_windows_fs_encoding\norPyConfig_Get(\"legacy_windows_fs_encoding\")\ninstead.Py_LegacyWindowsStdioFlag\n: UsePyConfig.legacy_windows_stdio\norPyConfig_Get(\"legacy_windows_stdio\")\ninstead.Py_FileSystemDefaultEncoding\n,Py_HasFileSystemDefaultEncoding\n: UsePyConfig.filesystem_encoding\norPyConfig_Get(\"filesystem_encoding\")\ninstead.Py_FileSystemDefaultEncodeErrors\n: UsePyConfig.filesystem_errors\norPyConfig_Get(\"filesystem_errors\")\ninstead.Py_UTF8Mode\n: UsePyPreConfig.utf8_mode\norPyConfig_Get(\"utf8_mode\")\ninstead. (seePy_PreInitialize()\n)\nThe\nPy_InitializeFromConfig()\nAPI should be used withPyConfig\nto set these options. 
OrPyConfig_Get()\ncan be used to get these options at runtime.\nPending removal in Python 3.16\u00b6\nThe bundled copy of\nlibmpdec\n.\nPending removal in future versions\u00b6\nThe following APIs are deprecated and will be removed, although there is currently no date scheduled for their removal.\nPy_TPFLAGS_HAVE_FINALIZE\n: Unneeded since Python 3.8.PyErr_Fetch()\n: UsePyErr_GetRaisedException()\ninstead.PyErr_NormalizeException()\n: UsePyErr_GetRaisedException()\ninstead.PyErr_Restore()\n: UsePyErr_SetRaisedException()\ninstead.PyModule_GetFilename()\n: UsePyModule_GetFilenameObject()\ninstead.PyOS_AfterFork()\n: UsePyOS_AfterFork_Child()\ninstead.PySlice_GetIndicesEx()\n: UsePySlice_Unpack()\nandPySlice_AdjustIndices()\ninstead.PyUnicode_READY()\n: Unneeded since Python 3.12PyErr_Display()\n: UsePyErr_DisplayException()\ninstead._PyErr_ChainExceptions()\n: Use_PyErr_ChainExceptions1()\ninstead.PyBytesObject.ob_shash\nmember: callPyObject_Hash()\ninstead.Thread Local Storage (TLS) API:\nPyThread_create_key()\n: UsePyThread_tss_alloc()\ninstead.PyThread_delete_key()\n: UsePyThread_tss_free()\ninstead.PyThread_set_key_value()\n: UsePyThread_tss_set()\ninstead.PyThread_get_key_value()\n: UsePyThread_tss_get()\ninstead.PyThread_delete_key_value()\n: UsePyThread_tss_delete()\ninstead.PyThread_ReInitTLS()\n: Unneeded since Python 3.7.\nRemoved\u00b6\nRemove the\ntoken.h\nheader file. There was never any public tokenizer C API. Thetoken.h\nheader file was only designed to be used by Python internals. (Contributed by Victor Stinner in gh-92651.)Legacy Unicode APIs have been removed. See PEP 623 for detail.\nPyUnicode_WCHAR_KIND\nPyUnicode_AS_UNICODE()\nPyUnicode_AsUnicode()\nPyUnicode_AsUnicodeAndSize()\nPyUnicode_AS_DATA()\nPyUnicode_FromUnicode()\nPyUnicode_GET_SIZE()\nPyUnicode_GetSize()\nPyUnicode_GET_DATA_SIZE()\nRemove the\nPyUnicode_InternImmortal()\nfunction macro. 
(Contributed by Victor Stinner in gh-85858.)", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 23154}
{"url": "https://docs.python.org/3/whatsnew/3.0.html", "title": "What\u2019s New In Python 3.0", "content": "What\u2019s New In Python 3.0\u00b6\n- Author:\nGuido van Rossum\nThis article explains the new features in Python 3.0, compared to 2.6. Python 3.0, also known as \u201cPython 3000\u201d or \u201cPy3K\u201d, is the first ever intentionally backwards incompatible Python release. Python 3.0 was released on December 3, 2008. There are more changes than in a typical release, and more that are important for all Python users.
Nevertheless, after digesting the changes, you\u2019ll find that Python really hasn\u2019t changed all that much \u2013 by and large, we\u2019re mostly fixing well-known annoyances and warts, and removing a lot of old cruft.\nThis article doesn\u2019t attempt to provide a complete specification of all new features, but instead tries to give a convenient overview. For full details, you should refer to the documentation for Python 3.0, and/or the many PEPs referenced in the text. If you want to understand the complete implementation and design rationale for a particular feature, PEPs usually have more details than the regular documentation; but note that PEPs usually are not kept up-to-date once a feature has been fully implemented.\nDue to time constraints this document is not as complete as it should\nhave been. As always for a new release, the Misc/NEWS\nfile in the\nsource distribution contains a wealth of detailed information about\nevery small thing that was changed.\nCommon Stumbling Blocks\u00b6\nThis section lists those few changes that are most likely to trip you up if you\u2019re used to Python 2.5.\nPrint Is A Function\u00b6\nThe print\nstatement has been replaced with a print()\nfunction, with keyword arguments to replace most of the special syntax\nof the old print\nstatement (PEP 3105). 
Examples:\nOld: print \"The answer is\", 2*2\nNew: print(\"The answer is\", 2*2)\nOld: print x, # Trailing comma suppresses newline\nNew: print(x, end=\" \") # Appends a space instead of a newline\nOld: print # Prints a newline\nNew: print() # You must call the function!\nOld: print >>sys.stderr, \"fatal error\"\nNew: print(\"fatal error\", file=sys.stderr)\nOld: print (x, y) # prints repr((x, y))\nNew: print((x, y)) # Not the same as print(x, y)!\nYou can also customize the separator between items, e.g.:\nprint(\"There are <\", 2**32, \"> possibilities!\", sep=\"\")\nwhich produces:\nThere are <4294967296> possibilities!\nNote:\nThe\nprint()\nfunction doesn\u2019t support the \u201csoftspace\u201d feature of the oldprint\nstatement. For example, in Python 2.x,print \"A\\n\", \"B\"\nwould write\"A\\nB\\n\"\n; but in Python 3.0,print(\"A\\n\", \"B\")\nwrites\"A\\n B\\n\"\n.Initially, you\u2019ll be finding yourself typing the old\nprint x\na lot in interactive mode. Time to retrain your fingers to typeprint(x)\ninstead!When using the\n2to3\nsource-to-source conversion tool, allprint\nstatements are automatically converted toprint()\nfunction calls, so this is mostly a non-issue for larger projects.\nViews And Iterators Instead Of Lists\u00b6\nSome well-known APIs no longer return lists:\ndict\nmethodsdict.keys()\n,dict.items()\nanddict.values()\nreturn \u201cviews\u201d instead of lists. For example, this no longer works:k = d.keys(); k.sort()\n. Usek = sorted(d)\ninstead (this works in Python 2.5 too and is just as efficient).Also, the\ndict.iterkeys()\n,dict.iteritems()\nanddict.itervalues()\nmethods are no longer supported.map()\nandfilter()\nreturn iterators. If you really need a list and the input sequences are all of equal length, a quick fix is to wrapmap()\ninlist()\n, e.g.list(map(...))\n, but a better fix is often to use a list comprehension (especially when the original code useslambda\n), or rewriting the code so it doesn\u2019t need a list at all. 
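The view and iterator behaviour above, condensed into a few lines:

```python
d = {"a": 1, "b": 2}
keys = d.keys()                  # a view, not a list: it has no .sort() method
assert not hasattr(keys, "sort")
assert sorted(d) == ["a", "b"]   # recommended replacement for k = d.keys(); k.sort()

d["c"] = 3
assert "c" in keys               # views reflect later changes to the dict

squares = map(lambda x: x * x, range(3))
assert list(squares) == [0, 1, 4]  # map() returns an iterator; wrap in list() if needed
assert list(squares) == []         # like any iterator, it is exhausted after one pass
```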
Particularly tricky ismap()\ninvoked for the side effects of the function; the correct transformation is to use a regularfor\nloop (since creating a list would just be wasteful).If the input sequences are not of equal length,\nmap()\nwill stop at the termination of the shortest of the sequences. For full compatibility withmap()\nfrom Python 2.x, also wrap the sequences initertools.zip_longest()\n, e.g.map(func, *sequences)\nbecomeslist(map(func, itertools.zip_longest(*sequences)))\n.range()\nnow behaves likexrange()\nused to behave, except it works with values of arbitrary size. The latter no longer exists.zip()\nnow returns an iterator.\nOrdering Comparisons\u00b6\nPython 3.0 has simplified the rules for ordering comparisons:\nThe ordering comparison operators (\n<\n,<=\n,>=\n,>\n) raise a TypeError exception when the operands don\u2019t have a meaningful natural ordering. Thus, expressions like1 < ''\n,0 > None\norlen <= len\nare no longer valid, and e.g.None < None\nraisesTypeError\ninstead of returningFalse\n. A corollary is that sorting a heterogeneous list no longer makes sense \u2013 all the elements must be comparable to each other. Note that this does not apply to the==\nand!=\noperators: objects of different incomparable types always compare unequal to each other.sorted()\nandlist.sort()\nno longer accept the cmp argument providing a comparison function. Use the key argument instead. N.B. the key and reverse arguments are now \u201ckeyword-only\u201d.The\ncmp()\nfunction should be treated as gone, and the__cmp__()\nspecial method is no longer supported. Use__lt__()\nfor sorting,__eq__()\nwith__hash__()\n, and other rich comparisons as needed. (If you really need thecmp()\nfunctionality, you could use the expression(a > b) - (a < b)\nas the equivalent forcmp(a, b)\n.)\nIntegers\u00b6\nPEP 237: Essentially,\nlong\nrenamed toint\n. 
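A quick sketch of the integer changes (PEP 237's unified int, PEP 238's true division, and the new octal literals), runnable on any Python 3:

```python
# PEP 237: a single int type with unbounded range.
big = 2 ** 100
assert isinstance(big, int)
assert "L" not in repr(big)   # no trailing "L" as the old long type had

# PEP 238: "/" is true division; "//" keeps the truncating behaviour.
assert 1 / 2 == 0.5
assert 1 // 2 == 0

# Octal literals now use the 0o prefix instead of a bare leading zero.
assert 0o720 == 464
```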
That is, there is only one built-in integral type, namedint\n; but it behaves mostly like the oldlong\ntype.PEP 238: An expression like\n1/2\nreturns a float. Use1//2\nto get the truncating behavior. (The latter syntax has existed for years, at least since Python 2.2.)The\nsys.maxint\nconstant was removed, since there is no longer a limit to the value of integers. However,sys.maxsize\ncan be used as an integer larger than any practical list or string index. It conforms to the implementation\u2019s \u201cnatural\u201d integer size and is typically the same assys.maxint\nin previous releases on the same platform (assuming the same build options).The\nrepr()\nof a long integer doesn\u2019t include the trailingL\nanymore, so code that unconditionally strips that character will chop off the last digit instead. (Usestr()\ninstead.)Octal literals are no longer of the form\n0720\n; use0o720\ninstead.\nText Vs. Data Instead Of Unicode Vs. 8-bit\u00b6\nEverything you thought you knew about binary data and Unicode has changed.\nPython 3.0 uses the concepts of text and (binary) data instead of Unicode strings and 8-bit strings. All text is Unicode; however encoded Unicode is represented as binary data. The type used to hold text is\nstr\n, the type used to hold data isbytes\n. The biggest difference with the 2.x situation is that any attempt to mix text and data in Python 3.0 raisesTypeError\n, whereas if you were to mix Unicode and 8-bit strings in Python 2.x, it would work if the 8-bit string happened to contain only 7-bit (ASCII) bytes, but you would getUnicodeDecodeError\nif it contained non-ASCII values. This value-specific behavior has caused numerous sad faces over the years.As a consequence of this change in philosophy, pretty much all code that uses Unicode, encodings or binary data most likely has to change. The change is for the better, as in the 2.x world there were numerous bugs having to do with mixing encoded and unencoded text. 
To be prepared in Python 2.x, start using unicode for all unencoded text, and str for binary or encoded data only. Then the 2to3 tool will do most of the work for you.
You can no longer use u"..." literals for Unicode text. However, you must use b"..." literals for binary data.
As the str and bytes types cannot be mixed, you must always explicitly convert between them. Use str.encode() to go from str to bytes, and bytes.decode() to go from bytes to str. You can also use bytes(s, encoding=...) and str(b, encoding=...), respectively.
Like str, the bytes type is immutable. There is a separate mutable type to hold buffered binary data, bytearray. Nearly all APIs that accept bytes also accept bytearray. The mutable API is based on collections.MutableSequence.
All backslashes in raw string literals are interpreted literally. This means that '\U' and '\u' escapes in raw strings are not treated specially. For example, r'\u20ac' is a string of 6 characters in Python 3.0, whereas in 2.6, ur'\u20ac' was the single "euro" character. (Of course, this change only affects raw string literals; the euro character is '\u20ac' in Python 3.0.)
The built-in basestring abstract type was removed. Use str instead. The str and bytes types don't have functionality enough in common to warrant a shared base class. The 2to3 tool (see below) replaces every occurrence of basestring with str.
Files opened as text files (still the default mode for open()) always use an encoding to map between strings (in memory) and bytes (on disk). Binary files (opened with a b in the mode argument) always use bytes in memory. This means that if a file is opened using an incorrect mode or encoding, I/O will likely fail loudly, instead of silently producing incorrect data. It also means that even Unix users will have to specify the correct mode (text or binary) when opening a file.
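The conversions and mode rules above can be sketched directly; the temporary file here stands in for any real path:

```python
import os
import tempfile

text = "café"
data = text.encode("utf-8")
assert isinstance(data, bytes)
assert data.decode("utf-8") == text
assert bytes(text, encoding="utf-8") == data   # constructor equivalents
assert str(data, encoding="utf-8") == text

# Mixing str and bytes raises TypeError rather than guessing an encoding:
try:
    "a" + b"b"
except TypeError:
    pass

# Text mode takes/returns str; binary mode takes/returns bytes:
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w", encoding="utf-8") as f:
    f.write(text)
with open(path, "rb") as f:
    assert f.read() == data
os.remove(path)
```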
There is a platform-dependent default encoding, which on Unixy platforms can be set with theLANG\nenvironment variable (and sometimes also with some other platform-specific locale-related environment variables). In many cases, but not all, the system default is UTF-8; you should never count on this default. Any application reading or writing more than pure ASCII text should probably have a way to override the encoding. There is no longer any need for using the encoding-aware streams in thecodecs\nmodule.The initial values of\nsys.stdin\n,sys.stdout\nandsys.stderr\nare now unicode-only text files (i.e., they are instances ofio.TextIOBase\n). To read and write bytes data with these streams, you need to use theirio.TextIOBase.buffer\nattribute.Filenames are passed to and returned from APIs as (Unicode) strings. This can present platform-specific problems because on some platforms filenames are arbitrary byte strings. (On the other hand, on Windows filenames are natively stored as Unicode.) As a work-around, most APIs (e.g.\nopen()\nand many functions in theos\nmodule) that take filenames acceptbytes\nobjects as well as strings, and a few APIs have a way to ask for abytes\nreturn value. Thus,os.listdir()\nreturns a list ofbytes\ninstances if the argument is abytes\ninstance, andos.getcwdb()\nreturns the current working directory as abytes\ninstance. Note that whenos.listdir()\nreturns a list of strings, filenames that cannot be decoded properly are omitted rather than raisingUnicodeError\n.Some system APIs like\nos.environ\nandsys.argv\ncan also present problems when the bytes made available by the system is not interpretable using the default encoding. Setting theLANG\nvariable and rerunning the program is probably the best approach.PEP 3138: The\nrepr()\nof a string no longer escapes non-ASCII characters. 
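The PEP 3138 repr() behavior can be seen side by side with the ascii() built-in, which produces the old escaped form:

```python
# repr() keeps printable non-ASCII characters; ascii() escapes them.
assert repr("héllo") == "'héllo'"
assert ascii("héllo") == "'h\\xe9llo'"
# Control characters are still escaped by repr():
assert repr("a\nb") == "'a\\nb'"
```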
It still escapes control characters and code points with non-printable status in the Unicode standard, however.PEP 3120: The default source encoding is now UTF-8.\nPEP 3131: Non-ASCII letters are now allowed in identifiers. (However, the standard library remains ASCII-only with the exception of contributor names in comments.)\nThe\nStringIO\nandcStringIO\nmodules are gone. Instead, import theio\nmodule and useio.StringIO\norio.BytesIO\nfor text and data respectively.See also the Unicode HOWTO, which was updated for Python 3.0.\nOverview Of Syntax Changes\u00b6\nThis section gives a brief overview of every syntactic change in Python 3.0.\nNew Syntax\u00b6\nPEP 3107: Function argument and return value annotations. This provides a standardized way of annotating a function\u2019s parameters and return value. There are no semantics attached to such annotations except that they can be introspected at runtime using the\n__annotations__\nattribute. The intent is to encourage experimentation through metaclasses, decorators or frameworks.PEP 3102: Keyword-only arguments. Named parameters occurring after\n*args\nin the parameter list must be specified using keyword syntax in the call. You can also use a bare*\nin the parameter list to indicate that you don\u2019t accept a variable-length argument list, but you do have keyword-only arguments.Keyword arguments are allowed after the list of base classes in a class definition. This is used by the new convention for specifying a metaclass (see next section), but can be used for other purposes as well, as long as the metaclass supports it.\nPEP 3104:\nnonlocal\nstatement. Usingnonlocal x\nyou can now assign directly to a variable in an outer (but non-global) scope.nonlocal\nis a new reserved word.PEP 3132: Extended Iterable Unpacking. You can now write things like\na, b, *rest = some_sequence\n. And even*rest, a = stuff\n. Therest\nobject is always a (possibly empty) list; the right-hand side may be any iterable. 
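The new-syntax items above (keyword-only arguments, nonlocal, and extended iterable unpacking) can be sketched together; all names here are illustrative:

```python
# PEP 3102: a bare * makes the following parameters keyword-only.
def move(point, *, dx=0, dy=0):
    return (point[0] + dx, point[1] + dy)

assert move((1, 1), dy=2) == (1, 3)

# PEP 3104: nonlocal rebinds a variable in the enclosing (non-global) scope.
def counter():
    count = 0
    def bump():
        nonlocal count
        count += 1
        return count
    return bump

bump = counter()
assert (bump(), bump()) == (1, 2)

# PEP 3132: the starred target always binds a (possibly empty) list.
first, *rest = range(4)
assert (first, rest) == (0, [1, 2, 3])
```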
Example:(a, *rest, b) = range(5)\nThis sets a to\n0\n, b to4\n, and rest to[1, 2, 3]\n.Dictionary comprehensions:\n{k: v for k, v in stuff}\nmeans the same thing asdict(stuff)\nbut is more flexible. (This is PEP 274 vindicated. :-)Set literals, e.g.\n{1, 2}\n. Note that{}\nis an empty dictionary; useset()\nfor an empty set. Set comprehensions are also supported; e.g.,{x for x in stuff}\nmeans the same thing asset(stuff)\nbut is more flexible.New octal literals, e.g.\n0o720\n(already in 2.6). The old octal literals (0720\n) are gone.New binary literals, e.g.\n0b1010\n(already in 2.6), and there is a new corresponding built-in function,bin()\n.Bytes literals are introduced with a leading\nb\norB\n, and there is a new corresponding built-in function,bytes()\n.\nChanged Syntax\u00b6\nPEP 3109 and PEP 3134: new\nraise\nstatement syntax:raise [expr [from expr]]\n. See below.as\nandwith\nare now reserved words. (Since 2.6, actually.)True\n,False\n, andNone\nare reserved words. (2.6 partially enforced the restrictions onNone\nalready.)Change from\nexcept\nexc, var toexcept\nexcas\nvar. See PEP 3110.PEP 3115: New Metaclass Syntax. Instead of:\nclass C: __metaclass__ = M ...\nyou must now use:\nclass C(metaclass=M): ...\nThe module-global\n__metaclass__\nvariable is no longer supported. (It was a crutch to make it easier to default to new-style classes without deriving every class fromobject\n.)List comprehensions no longer support the syntactic form\n[... for var in item1, item2, ...]\n. Use[... for var in (item1, item2, ...)]\ninstead. Also note that list comprehensions have different semantics: they are closer to syntactic sugar for a generator expression inside alist()\nconstructor, and in particular the loop control variables are no longer leaked into the surrounding scope.The ellipsis (\n...\n) can be used as an atomic expression anywhere. (Previously it was only allowed in slices.) Also, it must now be spelled as...\n. (Previously it could also be spelled as. . 
.\n, by a mere accident of the grammar.)\nRemoved Syntax\u00b6\nPEP 3113: Tuple parameter unpacking removed. You can no longer write\ndef foo(a, (b, c)): ...\n. Usedef foo(a, b_c): b, c = b_c\ninstead.Removed backticks (use\nrepr()\ninstead).Removed\n<>\n(use!=\ninstead).Removed keyword:\nexec()\nis no longer a keyword; it remains as a function. (Fortunately the function syntax was also accepted in 2.x.) Also note thatexec()\nno longer takes a stream argument; instead ofexec(f)\nyou can useexec(f.read())\n.Integer literals no longer support a trailing\nl\norL\n.String literals no longer support a leading\nu\norU\n.The\nfrom\nmoduleimport\n*\nsyntax is only allowed at the module level, no longer inside functions.The only acceptable syntax for relative imports is\nfrom .[module] import name\n. Allimport\nforms not starting with.\nare interpreted as absolute imports. (PEP 328)Classic classes are gone.\nChanges Already Present In Python 2.6\u00b6\nSince many users presumably make the jump straight from Python 2.5 to Python 3.0, this section reminds the reader of new features that were originally designed for Python 3.0 but that were back-ported to Python 2.6. The corresponding sections in What\u2019s New in Python 2.6 should be consulted for longer descriptions.\nPEP 343: The \u2018with\u2019 statement. The\nwith\nstatement is now a standard feature and no longer needs to be imported from the__future__\n. Also check out Writing Context Managers and The contextlib module.PEP 366: Explicit Relative Imports From a Main Module. This enhances the usefulness of the\n-m\noption when the referenced module lives in a package.PEP 3101: Advanced String Formatting. Note: the 2.6 description mentions the\nformat()\nmethod for both 8-bit and Unicode strings. In 3.0, only thestr\ntype (text strings with Unicode support) supports this method; thebytes\ntype does not. 
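The str-only format() method mentioned above behaves as follows:

```python
# str supports format(); the bytes type does not have the method at all.
assert "{0} + {1} = {2}".format(1, 2, 3) == "1 + 2 = 3"
assert "{name}".format(name="Guido") == "Guido"
assert not hasattr(b"...", "format")
```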
The plan is to eventually make this the only API for string formatting, and to start deprecating the%\noperator in Python 3.1.PEP 3105: print As a Function. This is now a standard feature and no longer needs to be imported from\n__future__\n. More details were given above.PEP 3110: Exception-Handling Changes. The\nexcept\nexcas\nvar syntax is now standard andexcept\nexc, var is no longer supported. (Of course, theas\nvar part is still optional.)PEP 3112: Byte Literals. The\nb\"...\"\nstring literal notation (and its variants likeb'...'\n,b\"\"\"...\"\"\"\n, andbr\"...\"\n) now produces a literal of typebytes\n.PEP 3116: New I/O Library. The\nio\nmodule is now the standard way of doing file I/O. The built-inopen()\nfunction is now an alias forio.open()\nand has additional keyword arguments encoding, errors, newline and closefd. Also note that an invalid mode argument now raisesValueError\n, notIOError\n. The binary file object underlying a text file object can be accessed asf.buffer\n(but beware that the text object maintains a buffer of itself in order to speed up the encoding and decoding operations).PEP 3118: Revised Buffer Protocol. The old builtin\nbuffer()\nis now really gone; the new builtinmemoryview()\nprovides (mostly) similar functionality.PEP 3119: Abstract Base Classes. The\nabc\nmodule and the ABCs defined in thecollections\nmodule plays a somewhat more prominent role in the language now, and built-in collection types likedict\nandlist\nconform to thecollections.MutableMapping\nandcollections.MutableSequence\nABCs, respectively.PEP 3127: Integer Literal Support and Syntax. As mentioned above, the new octal literal notation is the only one supported, and binary literals have been added.\nPEP 3141: A Type Hierarchy for Numbers. The\nnumbers\nmodule is another new use of ABCs, defining Python\u2019s \u201cnumeric tower\u201d. 
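The PEP 3141 numeric tower is usable directly with isinstance() checks:

```python
import numbers

assert isinstance(3, numbers.Integral)
assert isinstance(3.5, numbers.Real) and not isinstance(3.5, numbers.Integral)
assert isinstance(2 + 1j, numbers.Complex) and not isinstance(2 + 1j, numbers.Real)
```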
Also note the new fractions module which implements numbers.Rational.
Library Changes¶
Due to time constraints, this document does not exhaustively cover the very extensive changes to the standard library. PEP 3108 is the reference for the major changes to the library. Here's a capsule review:
Many old modules were removed. Some, like gopherlib (no longer used) and md5 (replaced by hashlib), were already deprecated by PEP 4. Others were removed as a result of the removal of support for various platforms such as Irix, BeOS and Mac OS 9 (see PEP 11). Some modules were also selected for removal in Python 3.0 due to lack of use or because a better replacement exists. See PEP 3108 for an exhaustive list.
The bsddb3 package was removed because its presence in the core standard library has proved over time to be a particular burden for the core developers due to testing instability and Berkeley DB's release schedule. However, the package is alive and well, externally maintained at https://www.jcea.es/programacion/pybsddb.htm.
Some modules were renamed because their old name disobeyed PEP 8, or for various other reasons. Here's the list:
_winreg -> winreg
ConfigParser -> configparser
copy_reg -> copyreg
Queue -> queue
SocketServer -> socketserver
markupbase -> _markupbase
repr -> reprlib
test.test_support -> test.support
A common pattern in Python 2.x is to have one version of a module implemented in pure Python, with an optional accelerated version implemented as a C extension; for example, pickle and cPickle. This places the burden of importing the accelerated version and falling back on the pure Python version on each user of these modules. In Python 3.0, the accelerated versions are considered implementation details of the pure Python versions. Users should always import the standard version, which attempts to import the accelerated version and falls back to the pure Python version.
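In practice this means a single plain import; the accelerated implementation, when present, is used transparently:

```python
import pickle

# Round-trip arbitrary picklable data through the one public module.
payload = {"a": [1, 2, 3], "b": ("x", "y")}
assert pickle.loads(pickle.dumps(payload)) == payload
```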
Thepickle\n/cPickle\npair received this treatment. Theprofile\nmodule is on the list for 3.1. TheStringIO\nmodule has been turned into a class in theio\nmodule.Some related modules have been grouped into packages, and usually the submodule names have been simplified. The resulting new packages are:\ndbm\n(anydbm\n,dbhash\n,dbm\n,dumbdbm\n,gdbm\n,whichdb\n).html\n(HTMLParser\n,htmlentitydefs\n).http\n(httplib\n,BaseHTTPServer\n,CGIHTTPServer\n,SimpleHTTPServer\n,Cookie\n,cookielib\n).tkinter\n(allTkinter\n-related modules exceptturtle\n). The target audience ofturtle\ndoesn\u2019t really care abouttkinter\n. Also note that as of Python 2.6, the functionality ofturtle\nhas been greatly enhanced.urllib\n(urllib\n,urllib2\n,urlparse\n,robotparse\n).xmlrpc\n(xmlrpclib\n,DocXMLRPCServer\n,SimpleXMLRPCServer\n).\nSome other changes to standard library modules, not covered by PEP 3108:\nKilled\nsets\n. Use the built-inset()\nclass.Cleanup of the\nsys\nmodule: removedsys.exitfunc()\n,sys.exc_clear()\n,sys.exc_type\n,sys.exc_value\n,sys.exc_traceback\n. (Note thatsys.last_type\netc. remain.)Cleanup of the\narray.array\ntype: theread()\nandwrite()\nmethods are gone; usefromfile()\nandtofile()\ninstead. Also, the'c'\ntypecode for array is gone \u2013 use either'b'\nfor bytes or'u'\nfor Unicode characters.Cleanup of the\noperator\nmodule: removedsequenceIncludes()\nandisCallable()\n.Cleanup of the\nthread\nmodule:acquire_lock()\nandrelease_lock()\nare gone; useacquire()\nandrelease()\ninstead.Cleanup of the\nrandom\nmodule: removed thejumpahead()\nAPI.The\nnew\nmodule is gone.The functions\nos.tmpnam()\n,os.tempnam()\nandos.tmpfile()\nhave been removed in favor of thetempfile\nmodule.The\ntokenize\nmodule has been changed to work with bytes. The main entry point is nowtokenize.tokenize()\n, instead of generate_tokens.string.letters\nand its friends (string.lowercase\nandstring.uppercase\n) are gone. Usestring.ascii_letters\netc. instead. 
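The locale-independent replacements behave as their names suggest:

```python
import string

assert string.ascii_letters == string.ascii_lowercase + string.ascii_uppercase
assert len(string.ascii_lowercase) == 26
```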
(The reason for the removal is thatstring.letters\nand friends had locale-specific behavior, which is a bad idea for such attractively named global \u201cconstants\u201d.)Renamed module\n__builtin__\ntobuiltins\n(removing the underscores, adding an \u2018s\u2019). The__builtins__\nvariable found in most global namespaces is unchanged. To modify a builtin, you should usebuiltins\n, not__builtins__\n!\nPEP 3101: A New Approach To String Formatting\u00b6\nA new system for built-in string formatting operations replaces the\n%\nstring formatting operator. (However, the%\noperator is still supported; it will be deprecated in Python 3.1 and removed from the language at some later time.) Read PEP 3101 for the full scoop.\nChanges To Exceptions\u00b6\nThe APIs for raising and catching exception have been cleaned up and new powerful features added:\nPEP 352: All exceptions must be derived (directly or indirectly) from\nBaseException\n. This is the root of the exception hierarchy. This is not new as a recommendation, but the requirement to inherit fromBaseException\nis new. (Python 2.6 still allowed classic classes to be raised, and placed no restriction on what you can catch.) As a consequence, string exceptions are finally truly and utterly dead.Almost all exceptions should actually derive from\nException\n;BaseException\nshould only be used as a base class for exceptions that should only be handled at the top level, such asSystemExit\norKeyboardInterrupt\n. The recommended idiom for handling all exceptions except for this latter category is to useexcept\nException\n.StandardError\nwas removed.Exceptions no longer behave as sequences. Use the\nargs\nattribute instead.PEP 3109: Raising exceptions. You must now use\nraise Exception(args)\ninstead ofraise Exception, args\n. Additionally, you can no longer explicitly specify a traceback; instead, if you have to do this, you can assign directly to the__traceback__\nattribute (see below).PEP 3110: Catching exceptions. 
You must now use\nexcept SomeException as variable\ninstead ofexcept SomeException, variable\n. Moreover, the variable is explicitly deleted when theexcept\nblock is left.PEP 3134: Exception chaining. There are two cases: implicit chaining and explicit chaining. Implicit chaining happens when an exception is raised in an\nexcept\norfinally\nhandler block. This usually happens due to a bug in the handler block; we call this a secondary exception. In this case, the original exception (that was being handled) is saved as the__context__\nattribute of the secondary exception. Explicit chaining is invoked with this syntax:raise SecondaryException() from primary_exception\n(where primary_exception is any expression that produces an exception object, probably an exception that was previously caught). In this case, the primary exception is stored on the\n__cause__\nattribute of the secondary exception. The traceback printed when an unhandled exception occurs walks the chain of__cause__\nand__context__\nattributes and prints a separate traceback for each component of the chain, with the primary exception at the top. (Java users may recognize this behavior.)PEP 3134: Exception objects now store their traceback as the\n__traceback__\nattribute. This means that an exception object now contains all the information pertaining to an exception, and there are fewer reasons to usesys.exc_info()\n(though the latter is not removed).A few exception messages are improved when Windows fails to load an extension module. For example,\nerror code 193\nis now%1 is not a valid Win32 application\n. Strings now deal with non-English locales.\nMiscellaneous Other Changes\u00b6\nOperators And Special Methods\u00b6\n!=\nnow returns the opposite of==\n, unless==\nreturnsNotImplemented\n.The concept of \u201cunbound methods\u201d has been removed from the language. 
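The exception chaining described above (PEP 3134) can be sketched in both its forms:

```python
# Explicit chaining: "raise ... from ..." stores the primary on __cause__.
try:
    try:
        1 / 0
    except ZeroDivisionError as primary:
        raise ValueError("secondary") from primary
except ValueError as err:
    explicit_cause = err.__cause__
    assert isinstance(explicit_cause, ZeroDivisionError)
    assert err.__traceback__ is not None  # tracebacks now live on the exception

# Implicit chaining: raising inside a handler stores the original on __context__.
try:
    try:
        {}["missing"]
    except KeyError:
        raise RuntimeError("bug in the handler")
except RuntimeError as err:
    implicit_context = err.__context__
    assert isinstance(implicit_context, KeyError)
```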
When referencing a method as a class attribute, you now get a plain function object.\n__getslice__()\n,__setslice__()\nand__delslice__()\nwere killed. The syntaxa[i:j]\nnow translates toa.__getitem__(slice(i, j))\n(or__setitem__()\nor__delitem__()\n, when used as an assignment or deletion target, respectively).PEP 3114: the standard\nnext()\nmethod has been renamed to__next__()\n.The\n__oct__()\nand__hex__()\nspecial methods are removed \u2013oct()\nandhex()\nuse__index__()\nnow to convert the argument to an integer.Removed support for\n__members__\nand__methods__\n.The function attributes named\nfunc_X\nhave been renamed to use the__X__\nform, freeing up these names in the function attribute namespace for user-defined attributes. To wit,func_closure\n,func_code\n,func_defaults\n,func_dict\n,func_doc\n,func_globals\n,func_name\nwere renamed to__closure__\n,__code__\n,__defaults__\n,__dict__\n,__doc__\n,__globals__\n,__name__\n, respectively.__nonzero__()\nis now__bool__()\n.\nBuiltins\u00b6\nPEP 3135: New\nsuper()\n. You can now invokesuper()\nwithout arguments and (assuming this is in a regular instance method defined inside aclass\nstatement) the right class and instance will automatically be chosen. With arguments, the behavior ofsuper()\nis unchanged.PEP 3111:\nraw_input()\nwas renamed toinput()\n. That is, the newinput()\nfunction reads a line fromsys.stdin\nand returns it with the trailing newline stripped. It raisesEOFError\nif the input is terminated prematurely. To get the old behavior ofinput()\n, useeval(input())\n.A new built-in function\nnext()\nwas added to call the__next__()\nmethod on an object.The\nround()\nfunction rounding strategy and return type have changed. Exact halfway cases are now rounded to the nearest even result instead of away from zero. (For example,round(2.5)\nnow returns2\nrather than3\n.)round(x[, n])\nnow delegates tox.__round__([n])\ninstead of always returning a float. 
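The new rounding behavior is easy to demonstrate:

```python
# Ties round to the nearest even result ("banker's rounding"):
assert round(2.5) == 2 and round(3.5) == 4
# One argument returns an integer; two arguments return the type of x:
assert isinstance(round(2.5), int)
assert isinstance(round(2.5, 1), float)
assert round(2.5, 1) == 2.5
```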
It generally returns an integer when called with a single argument and a value of the same type asx\nwhen called with two arguments.Moved\nintern()\ntosys.intern()\n.Removed:\napply()\n. Instead ofapply(f, args)\nusef(*args)\n.Removed\ncallable()\n. Instead ofcallable(f)\nyou can useisinstance(f, collections.Callable)\n. Theoperator.isCallable()\nfunction is also gone.Removed\ncoerce()\n. This function no longer serves a purpose now that classic classes are gone.Removed\nexecfile()\n. Instead ofexecfile(fn)\nuseexec(open(fn).read())\n.Removed the\nfile\ntype. Useopen()\n. There are now several different kinds of streams that open can return in theio\nmodule.Removed\nreduce()\n. Usefunctools.reduce()\nif you really need it; however, 99 percent of the time an explicitfor\nloop is more readable.Removed\nreload()\n. Useimp.reload()\n.Removed.\ndict.has_key()\n\u2013 use thein\noperator instead.\nBuild and C API Changes\u00b6\nDue to time constraints, here is a very incomplete list of changes to the C API.\nSupport for several platforms was dropped, including but not limited to Mac OS 9, BeOS, RISCOS, Irix, and Tru64.\nPEP 3118: New Buffer API.\nPEP 3121: Extension Module Initialization & Finalization.\nPEP 3123: Making\nPyObject_HEAD\nconform to standard C.No more C API support for restricted execution.\nPyNumber_Coerce()\n,PyNumber_CoerceEx()\n,PyMember_Get()\n, andPyMember_Set()\nC APIs are removed.New C API\nPyImport_ImportModuleNoBlock()\n, works likePyImport_ImportModule()\nbut won\u2019t block on the import lock (returning an error instead).Renamed the boolean conversion C-level slot and method:\nnb_nonzero\nis nownb_bool\n.Removed\nMETH_OLDARGS\nandWITH_CYCLE_GC\nfrom the C API.\nPerformance\u00b6\nThe net result of the 3.0 generalizations is that Python 3.0 runs the pystone benchmark around 10% slower than Python 2.5. Most likely the biggest cause is the removal of special-casing for small integers. 
There\u2019s room for improvement, but it will happen after 3.0 is released!\nPorting To Python 3.0\u00b6\nFor porting existing Python 2.5 or 2.6 source code to Python 3.0, the best strategy is the following:\n(Prerequisite:) Start with excellent test coverage.\nPort to Python 2.6. This should be no more work than the average port from Python 2.x to Python 2.(x+1). Make sure all your tests pass.\n(Still using 2.6:) Turn on the\n-3\ncommand line switch. This enables warnings about features that will be removed (or change) in 3.0. Run your test suite again, and fix code that you get warnings about until there are no warnings left, and all your tests still pass.Run the\n2to3\nsource-to-source translator over your source code tree. Run the result of the translation under Python 3.0. Manually fix up any remaining issues, fixing problems until all tests pass again.\nIt is not recommended to try to write source code that runs unchanged\nunder both Python 2.6 and 3.0; you\u2019d have to use a very contorted\ncoding style, e.g. avoiding print\nstatements, metaclasses,\nand much more. 
If you are maintaining a library that needs to support\nboth Python 2.6 and Python 3.0, the best approach is to modify step 3\nabove by editing the 2.6 version of the source code and running the\n2to3\ntranslator again, rather than editing the 3.0 version of the\nsource code.\nFor porting C extensions to Python 3.0, please see Porting Extension Modules to Python 3.", "code_snippets": [" ", " ", " ", "\n", " ", " ", "\n\n", " ", " ", " ", "\n", " ", " ", " ", "\n\n", " ", " ", "\n", " ", " ", "\n\n", " ", " ", " ", "\n", " ", " ", "\n\n", " ", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", "\n ", " ", " ", "\n ", "\n", "\n ", "\n", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 7760} +{"url": "https://docs.python.org/3/library/crypt.html", "title": " \u2014 Function to check Unix passwords", "content": "crypt\n\u2014 Function to check Unix passwords\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nApplications can use the hashlib\nmodule from the standard library.\nOther possible replacements are third-party libraries from PyPI:\nlegacycrypt, bcrypt, or argon2-cffi.\nThese are not supported or maintained by the Python core team.\nThe last version of Python that provided the crypt\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 143} +{"url": "https://docs.python.org/3/c-api/complex.html", "title": "Complex Number Objects", "content": "Complex Number Objects\u00b6\nPython\u2019s complex number objects are implemented as two distinct types when viewed from the C API: one is the Python object exposed to Python programs, and the other is a C structure which represents the actual complex number value. 
The API provides functions for working with both.\nComplex Numbers as C Structures\u00b6\nNote that the functions which accept these structures as parameters and return them as results do so by value rather than dereferencing them through pointers. This is consistent throughout the API.\n-\ntype Py_complex\u00b6\nThe C structure which corresponds to the value portion of a Python complex number object. Most of the functions for dealing with complex number objects use structures of this type as input or output values, as appropriate.\nThe structure is defined as:\ntypedef struct { double real; double imag; } Py_complex;\n-\nPy_complex _Py_c_sum(Py_complex left, Py_complex right)\u00b6\nReturn the sum of two complex numbers, using the C\nPy_complex\nrepresentation.\n-\nPy_complex _Py_c_diff(Py_complex left, Py_complex right)\u00b6\nReturn the difference between two complex numbers, using the C\nPy_complex\nrepresentation.\n-\nPy_complex _Py_c_neg(Py_complex num)\u00b6\nReturn the negation of the complex number num, using the C\nPy_complex\nrepresentation.\n-\nPy_complex _Py_c_prod(Py_complex left, Py_complex right)\u00b6\nReturn the product of two complex numbers, using the C\nPy_complex\nrepresentation.\n-\nPy_complex _Py_c_quot(Py_complex dividend, Py_complex divisor)\u00b6\nReturn the quotient of two complex numbers, using the C\nPy_complex\nrepresentation.If divisor is null, this method returns zero and sets\nerrno\ntoEDOM\n.\n-\nPy_complex _Py_c_pow(Py_complex num, Py_complex exp)\u00b6\nReturn the exponentiation of num by exp, using the C\nPy_complex\nrepresentation.If num is null and exp is not a positive real number, this method returns zero and sets\nerrno\ntoEDOM\n.Set\nerrno\ntoERANGE\non overflows.\nComplex Numbers as Python Objects\u00b6\n-\nPyTypeObject PyComplex_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python complex number type. 
It is the same object ascomplex\nin the Python layer.\n-\nint PyComplex_Check(PyObject *p)\u00b6\nReturn true if its argument is a\nPyComplexObject\nor a subtype ofPyComplexObject\n. This function always succeeds.\n-\nint PyComplex_CheckExact(PyObject *p)\u00b6\nReturn true if its argument is a\nPyComplexObject\n, but not a subtype ofPyComplexObject\n. This function always succeeds.\n-\nPyObject *PyComplex_FromCComplex(Py_complex v)\u00b6\n- Return value: New reference.\nCreate a new Python complex number object from a C\nPy_complex\nvalue. ReturnNULL\nwith an exception set on error.\n-\nPyObject *PyComplex_FromDoubles(double real, double imag)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new\nPyComplexObject\nobject from real and imag. ReturnNULL\nwith an exception set on error.\n-\ndouble PyComplex_RealAsDouble(PyObject *op)\u00b6\n- Part of the Stable ABI.\nReturn the real part of op as a C double.\nIf op is not a Python complex number object but has a\n__complex__()\nmethod, this method will first be called to convert op to a Python complex number object. If__complex__()\nis not defined then it falls back to callPyFloat_AsDouble()\nand returns its result.Upon failure, this method returns\n-1.0\nwith an exception set, so one should callPyErr_Occurred()\nto check for errors.Changed in version 3.13: Use\n__complex__()\nif available.\n-\ndouble PyComplex_ImagAsDouble(PyObject *op)\u00b6\n- Part of the Stable ABI.\nReturn the imaginary part of op as a C double.\nIf op is not a Python complex number object but has a\n__complex__()\nmethod, this method will first be called to convert op to a Python complex number object. 
If__complex__()\nis not defined then it falls back to callPyFloat_AsDouble()\nand returns0.0\non success.Upon failure, this method returns\n-1.0\nwith an exception set, so one should callPyErr_Occurred()\nto check for errors.Changed in version 3.13: Use\n__complex__()\nif available.\n-\nPy_complex PyComplex_AsCComplex(PyObject *op)\u00b6\nReturn the\nPy_complex\nvalue of the complex number op.If op is not a Python complex number object but has a\n__complex__()\nmethod, this method will first be called to convert op to a Python complex number object. If__complex__()\nis not defined then it falls back to__float__()\n. If__float__()\nis not defined then it falls back to__index__()\n.Upon failure, this method returns\nPy_complex\nwithreal\nset to-1.0\nand with an exception set, so one should callPyErr_Occurred()\nto check for errors.Changed in version 3.8: Use\n__index__()\nif available.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1133} +{"url": "https://docs.python.org/3/library/cgi.html", "title": " \u2014 Common Gateway Interface support", "content": "cgi\n\u2014 Common Gateway Interface support\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nA fork of the module on PyPI can be used instead: legacy-cgi. 
This is a copy of the cgi module, no longer maintained or supported by the core Python team.\nThe last version of Python that provided the cgi\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 122} +{"url": "https://docs.python.org/3/library/nis.html", "title": " \u2014 Interface to Sun\u2019s NIS (Yellow Pages)", "content": "nis\n\u2014 Interface to Sun\u2019s NIS (Yellow Pages)\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nThe last version of Python that provided the nis\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 85} +{"url": "https://docs.python.org/3/library/asyncio-subprocess.html", "title": "Subprocesses", "content": "Subprocesses\u00b6\nSource code: Lib/asyncio/subprocess.py, Lib/asyncio/base_subprocess.py\nThis section describes high-level async/await asyncio APIs to create and manage subprocesses.\nHere\u2019s an example of how asyncio can run a shell command and obtain its result:\nimport asyncio\nasync def run(cmd):\nproc = await asyncio.create_subprocess_shell(\ncmd,\nstdout=asyncio.subprocess.PIPE,\nstderr=asyncio.subprocess.PIPE)\nstdout, stderr = await proc.communicate()\nprint(f'[{cmd!r} exited with {proc.returncode}]')\nif stdout:\nprint(f'[stdout]\\n{stdout.decode()}')\nif stderr:\nprint(f'[stderr]\\n{stderr.decode()}')\nasyncio.run(run('ls /zzz'))\nwill print:\n['ls /zzz' exited with 1]\n[stderr]\nls: /zzz: No such file or directory\nBecause all asyncio subprocess functions are asynchronous and asyncio provides many tools to work with such functions, it is easy to execute and monitor multiple subprocesses in parallel. 
It is indeed trivial to modify the above example to run several commands simultaneously:\nasync def main():\nawait asyncio.gather(\nrun('ls /zzz'),\nrun('sleep 1; echo \"hello\"'))\nasyncio.run(main())\nSee also the Examples subsection.\nCreating Subprocesses\u00b6\n- async asyncio.create_subprocess_exec(program, *args, stdin=None, stdout=None, stderr=None, limit=None, **kwds)\u00b6\nCreate a subprocess.\nThe limit argument sets the buffer limit for\nStreamReader\nwrappers forstdout\nandstderr\n(ifsubprocess.PIPE\nis passed to stdout and stderr arguments).Return a\nProcess\ninstance.See the documentation of\nloop.subprocess_exec()\nfor other parameters.If the process object is garbage collected while the process is still running, the child process will be killed.\nChanged in version 3.10: Removed the loop parameter.\n- async asyncio.create_subprocess_shell(cmd, stdin=None, stdout=None, stderr=None, limit=None, **kwds)\u00b6\nRun the cmd shell command.\nThe limit argument sets the buffer limit for\nStreamReader\nwrappers forstdout\nandstderr\n(ifsubprocess.PIPE\nis passed to stdout and stderr arguments).Return a\nProcess\ninstance.See the documentation of\nloop.subprocess_shell()\nfor other parameters.If the process object is garbage collected while the process is still running, the child process will be killed.\nImportant\nIt is the application\u2019s responsibility to ensure that all whitespace and special characters are quoted appropriately to avoid shell injection vulnerabilities. The\nshlex.quote()\nfunction can be used to properly escape whitespace and special shell characters in strings that are going to be used to construct shell commands.Changed in version 3.10: Removed the loop parameter.\nNote\nSubprocesses are available for Windows if a ProactorEventLoop\nis\nused. 
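The shell-injection warning above can be sketched with shlex.quote(); safe_ls below is a hypothetical helper, not part of asyncio, shown only to illustrate quoting an untrusted argument before passing it to create_subprocess_shell():

```python
import shlex

def safe_ls(path: str) -> str:
    # Quote the untrusted path so it stays a single argument, suitable
    # for a command string given to asyncio.create_subprocess_shell().
    return "ls -l " + shlex.quote(path)

# A hostile "filename" is neutralised into one quoted argument:
cmd = safe_ls("somedir; rm -rf /")
assert cmd == "ls -l 'somedir; rm -rf /'"
```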
See Subprocess Support on Windows\nfor details.\nSee also\nasyncio also has the following low-level APIs to work with subprocesses:\nloop.subprocess_exec()\n, loop.subprocess_shell()\n,\nloop.connect_read_pipe()\n, loop.connect_write_pipe()\n,\nas well as the Subprocess Transports\nand Subprocess Protocols.\nConstants\u00b6\n- asyncio.subprocess.PIPE\u00b6\nCan be passed to the stdin, stdout or stderr parameters.\nIf PIPE is passed to stdin argument, the\nProcess.stdin\nattribute will point to aStreamWriter\ninstance.If PIPE is passed to stdout or stderr arguments, the\nProcess.stdout\nandProcess.stderr\nattributes will point toStreamReader\ninstances.\n- asyncio.subprocess.STDOUT\u00b6\nSpecial value that can be used as the stderr argument and indicates that standard error should be redirected into standard output.\n- asyncio.subprocess.DEVNULL\u00b6\nSpecial value that can be used as the stdin, stdout or stderr argument to process creation functions. It indicates that the special file\nos.devnull\nwill be used for the corresponding subprocess stream.\nInteracting with Subprocesses\u00b6\nBoth create_subprocess_exec()\nand create_subprocess_shell()\nfunctions return instances of the Process class. 
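The effect of the PIPE and STDOUT constants can be sketched as follows; merged_output is an illustrative helper that assumes sys.executable is runnable (POSIX-like environment):

```python
import asyncio
import sys

async def merged_output():
    # stderr=STDOUT redirects the child's stderr into its stdout pipe,
    # so only one StreamReader (proc.stdout) is created.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c",
        "import sys; print('out'); print('err', file=sys.stderr)",
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.STDOUT,
    )
    stdout, stderr = await proc.communicate()
    return stdout, stderr

stdout, stderr = asyncio.run(merged_output())
assert b"out" in stdout and b"err" in stdout
assert stderr is None  # no stderr pipe was created
```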
Process is a high-level\nwrapper that allows communicating with subprocesses and watching for\ntheir completion.\n- class asyncio.subprocess.Process\u00b6\nAn object that wraps OS processes created by the\ncreate_subprocess_exec()\nandcreate_subprocess_shell()\nfunctions.This class is designed to have a similar API to the\nsubprocess.Popen\nclass, but there are some notable differences:unlike Popen, Process instances do not have an equivalent to the\npoll()\nmethod;the\ncommunicate()\nandwait()\nmethods don\u2019t have a timeout parameter: use thewait_for()\nfunction;the\nProcess.wait()\nmethod is asynchronous, whereassubprocess.Popen.wait()\nmethod is implemented as a blocking busy loop;the universal_newlines parameter is not supported.\nThis class is not thread safe.\nSee also the Subprocess and Threads section.\n- async wait()\u00b6\nWait for the child process to terminate.\nSet and return the\nreturncode\nattribute.Note\nThis method can deadlock when using\nstdout=PIPE\norstderr=PIPE\nand the child process generates so much output that it blocks waiting for the OS pipe buffer to accept more data. Use thecommunicate()\nmethod when using pipes to avoid this condition.\n- async communicate(input=None)\u00b6\nInteract with process:\nsend data to stdin (if input is not\nNone\n);closes stdin;\nread data from stdout and stderr, until EOF is reached;\nwait for process to terminate.\nThe optional input argument is the data (\nbytes\nobject) that will be sent to the child process.Return a tuple\n(stdout_data, stderr_data)\n.If either\nBrokenPipeError\norConnectionResetError\nexception is raised when writing input into stdin, the exception is ignored. This condition occurs when the process exits before all data are written into stdin.If it is desired to send data to the process\u2019 stdin, the process needs to be created with\nstdin=PIPE\n. 
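The communicate() requirements described here (stdin=PIPE to send input, stdout=PIPE to get anything back) can be sketched with a small round-trip; the upper helper is illustrative and assumes sys.executable is runnable:

```python
import asyncio
import sys

async def upper(data: bytes) -> bytes:
    # stdin=PIPE and stdout=PIPE are required for communicate()
    # to deliver input and return output.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c",
        "import sys; sys.stdout.write(sys.stdin.read().upper())",
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
    )
    stdout, _ = await proc.communicate(input=data)
    return stdout

assert asyncio.run(upper(b"hello")) == b"HELLO"
```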
Similarly, to get anything other thanNone\nin the result tuple, the process has to be created withstdout=PIPE\nand/orstderr=PIPE\narguments.Note, that the data read is buffered in memory, so do not use this method if the data size is large or unlimited.\nChanged in version 3.12: stdin gets closed when\ninput=None\ntoo.\n- send_signal(signal)\u00b6\nSends the signal signal to the child process.\nNote\nOn Windows,\nSIGTERM\nis an alias forterminate()\n.CTRL_C_EVENT\nandCTRL_BREAK_EVENT\ncan be sent to processes started with a creationflags parameter which includesCREATE_NEW_PROCESS_GROUP\n.\n- terminate()\u00b6\nStop the child process.\nOn POSIX systems this method sends\nSIGTERM\nto the child process.On Windows the Win32 API function\nTerminateProcess()\nis called to stop the child process.\n- kill()\u00b6\nKill the child process.\nOn POSIX systems this method sends\nSIGKILL\nto the child process.On Windows this method is an alias for\nterminate()\n.\n- stdin\u00b6\nStandard input stream (\nStreamWriter\n) orNone\nif the process was created withstdin=None\n.\n- stdout\u00b6\nStandard output stream (\nStreamReader\n) orNone\nif the process was created withstdout=None\n.\n- stderr\u00b6\nStandard error stream (\nStreamReader\n) orNone\nif the process was created withstderr=None\n.\nWarning\nUse the\ncommunicate()\nmethod rather thanprocess.stdin.write()\n,await process.stdout.read()\norawait process.stderr.read()\n. 
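The terminate()/returncode behaviour described here can be sketched on a POSIX system, where a child killed by signal N reports returncode -N (stop_child is an illustrative helper; the SIGTERM assertion does not hold on Windows):

```python
import asyncio
import signal
import sys

async def stop_child() -> int:
    # Start a child that would sleep for a minute, then stop it.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", "import time; time.sleep(60)",
    )
    proc.terminate()       # SIGTERM on POSIX
    await proc.wait()      # sets and returns proc.returncode
    return proc.returncode

# A negative return code -N means "terminated by signal N" (POSIX only).
assert asyncio.run(stop_child()) == -signal.SIGTERM
```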
This avoids deadlocks due to streams pausing reading or writing and blocking the child process.- pid\u00b6\nProcess identification number (PID).\nNote that for processes created by the\ncreate_subprocess_shell()\nfunction, this attribute is the PID of the spawned shell.\n- returncode\u00b6\nReturn code of the process when it exits.\nA\nNone\nvalue indicates that the process has not terminated yet.A negative value\n-N\nindicates that the child was terminated by signalN\n(POSIX only).\nSubprocess and Threads\u00b6\nStandard asyncio event loop supports running subprocesses from different threads by default.\nOn Windows subprocesses are provided by ProactorEventLoop\nonly (default),\nSelectorEventLoop\nhas no subprocess support.\nNote that alternative event loop implementations might have own limitations; please refer to their documentation.\nSee also\nThe Concurrency and multithreading in asyncio section.\nExamples\u00b6\nAn example using the Process\nclass to\ncontrol a subprocess and the StreamReader\nclass to read from\nits standard output.\nThe subprocess is created by the create_subprocess_exec()\nfunction:\nimport asyncio\nimport sys\nasync def get_date():\ncode = 'import datetime; print(datetime.datetime.now())'\n# Create the subprocess; redirect the standard output\n# into a pipe.\nproc = await asyncio.create_subprocess_exec(\nsys.executable, '-c', code,\nstdout=asyncio.subprocess.PIPE)\n# Read one line of output.\ndata = await proc.stdout.readline()\nline = data.decode('ascii').rstrip()\n# Wait for the subprocess exit.\nawait proc.wait()\nreturn line\ndate = asyncio.run(get_date())\nprint(f\"Current date: {date}\")\nSee also the same example written using low-level APIs.", "code_snippets": ["\n\n", " ", "\n ", " ", " ", " ", "\n ", "\n ", "\n ", "\n\n ", " ", " ", " ", " ", "\n\n ", "\n ", " ", "\n ", "\n ", " ", "\n ", "\n\n", "\n", " ", " ", " ", "\n", "\n", " ", " ", " ", " ", " ", " ", "\n", " ", "\n ", " ", "\n ", "\n ", "\n\n", "\n", "\n", "\n\n", " ", 
"\n ", " ", " ", "\n\n ", "\n ", "\n ", " ", " ", " ", "\n ", " ", " ", "\n ", "\n\n ", "\n ", " ", " ", " ", "\n ", " ", " ", "\n\n ", "\n ", " ", "\n ", " ", "\n\n", " ", " ", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 2147} +{"url": "https://docs.python.org/3/howto/cporting.html", "title": "Porting Extension Modules to Python 3", "content": "Porting Extension Modules to Python 3\u00b6\nWe recommend the following resources for porting extension modules to Python 3:\nThe Migrating C extensions chapter from Supporting Python 3: An in-depth guide, a book on moving from Python 2 to Python 3 in general, guides the reader through porting an extension module.\nThe Porting guide from the py3c project provides opinionated suggestions with supporting code.\nRecommended third party tools offer abstractions over the Python\u2019s C API. Extensions generally need to be re-written to use one of them, but the library then handles differences between various Python versions and implementations.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 158} +{"url": "https://docs.python.org/3/c-api/lifecycle.html", "title": "Object Life Cycle", "content": "Object Life Cycle\u00b6\nThis section explains how a type\u2019s slots relate to each other throughout the life of an object. It is not intended to be a complete canonical reference for the slots; instead, refer to the slot-specific documentation in Type Object Structures for details about a particular slot.\nLife Events\u00b6\nThe figure below illustrates the order of events that can occur throughout an object\u2019s life. 
An arrow from A to B indicates that event B can occur after event A has occurred, with the arrow\u2019s label indicating the condition that must be true for B to occur after A.\nExplanation:\nWhen a new object is constructed by calling its type:\ntp_new\nis called to create a new object.tp_alloc\nis directly called bytp_new\nto allocate the memory for the new object.tp_init\ninitializes the newly created object.tp_init\ncan be called again to re-initialize an object, if desired. Thetp_init\ncall can also be skipped entirely, for example by Python code calling__new__()\n.\nAfter\ntp_init\ncompletes, the object is ready to use.Some time after the last reference to an object is removed:\nIf an object is not marked as finalized, it might be finalized by marking it as finalized and calling its\ntp_finalize\nfunction. Python does not finalize an object when the last reference to it is deleted; usePyObject_CallFinalizerFromDealloc()\nto ensure thattp_finalize\nis always called.If the object is marked as finalized,\ntp_clear\nmight be called by the garbage collector to clear references held by the object. It is not called when the object\u2019s reference count reaches zero.tp_dealloc\nis called to destroy the object. To avoid code duplication,tp_dealloc\ntypically calls intotp_clear\nto free up the object\u2019s references.When\ntp_dealloc\nfinishes object destruction, it directly callstp_free\n(usually set toPyObject_Free()\norPyObject_GC_Del()\nautomatically as appropriate for the type) to deallocate the memory.\nThe\ntp_finalize\nfunction is permitted to add a reference to the object if desired. If it does, the object is resurrected, preventing its pending destruction. (Onlytp_finalize\nis allowed to resurrect an object;tp_clear\nandtp_dealloc\ncannot without calling intotp_finalize\n.) Resurrecting an object may or may not cause the object\u2019s finalized mark to be removed. 
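The finalization and resurrection events above have a Python-level counterpart in __del__ (which maps to tp_finalize). A CPython-specific sketch of resurrection, and of the finalized mark suppressing a second finalizer call:

```python
import gc

calls = []
resurrected = []

class Phoenix:
    def __del__(self):
        calls.append("finalized")
        # Resurrect: add a new reference from inside the finalizer.
        resurrected.append(self)

p = Phoenix()
del p            # finalizer runs, but the object survives
gc.collect()
assert calls == ["finalized"]
assert len(resurrected) == 1

obj = resurrected.pop()
del obj          # already marked finalized: __del__ is not run again
gc.collect()
assert calls == ["finalized"]
```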
Currently, Python does not remove the finalized mark from a resurrected object if it supports garbage collection (i.e., thePy_TPFLAGS_HAVE_GC\nflag is set) but does remove the mark if the object does not support garbage collection; either or both of these behaviors may change in the future.tp_dealloc\ncan optionally calltp_finalize\nviaPyObject_CallFinalizerFromDealloc()\nif it wishes to reuse that code to help with object destruction. This is recommended because it guarantees thattp_finalize\nis always called before destruction. See thetp_dealloc\ndocumentation for example code.If the object is a member of a cyclic isolate and either\ntp_clear\nfails to break the reference cycle or the cyclic isolate is not detected (perhapsgc.disable()\nwas called, or thePy_TPFLAGS_HAVE_GC\nflag was erroneously omitted in one of the involved types), the objects remain indefinitely uncollectable (they \u201cleak\u201d). Seegc.garbage\n.\nIf the object is marked as supporting garbage collection (the\nPy_TPFLAGS_HAVE_GC\nflag is set in\ntp_flags\n), the following events are also possible:\nThe garbage collector occasionally calls\ntp_traverse\nto identify cyclic isolates.When the garbage collector discovers a cyclic isolate, it finalizes one of the objects in the group by marking it as finalized and calling its\ntp_finalize\nfunction, if it has one. This repeats until the cyclic isolate doesn\u2019t exist or all of the objects have been finalized.tp_finalize\nis permitted to resurrect the object by adding a reference from outside the cyclic isolate. 
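The cyclic-isolate detection described above (tp_traverse at the C level) can be observed from Python with the gc module; a minimal sketch with an illustrative Node class:

```python
import gc

class Node:
    pass

a, b = Node(), Node()
a.partner, b.partner = b, a     # reference cycle
del a, b                        # now an unreachable cyclic isolate

# The collector's traversal finds the isolated cycle and reclaims it;
# gc.collect() reports how many unreachable objects it found.
unreachable = gc.collect()
assert unreachable >= 2
```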
The new reference causes the group of objects to no longer form a cyclic isolate (the reference cycle may still exist, but if it does the objects are no longer isolated).When the garbage collector discovers a cyclic isolate and all of the objects in the group have already been marked as finalized, the garbage collector clears one or more of the uncleared objects in the group (possibly concurrently) by calling each\u2019s\ntp_clear\nfunction. This repeats as long as the cyclic isolate still exists and not all of the objects have been cleared.\nCyclic Isolate Destruction\u00b6\nListed below are the stages of life of a hypothetical cyclic isolate\nthat continues to exist after each member object is finalized or cleared. It\nis a memory leak if a cyclic isolate progresses through all of these stages; it should\nvanish once all objects are cleared, if not sooner. A cyclic isolate can\nvanish either because the reference cycle is broken or because the objects are\nno longer isolated due to finalizer resurrection (see\ntp_finalize\n).\nReachable (not yet a cyclic isolate): All objects are in their normal, reachable state. A reference cycle could exist, but an external reference means the objects are not yet isolated.\nUnreachable but consistent: The final reference from outside the cyclic group of objects has been removed, causing the objects to become isolated (thus a cyclic isolate is born). None of the group\u2019s objects have been finalized or cleared yet. The cyclic isolate remains at this stage until some future run of the garbage collector (not necessarily the next run because the next run might not scan every object).\nMix of finalized and not finalized: Objects in a cyclic isolate are finalized one at a time, which means that there is a period of time when the cyclic isolate is composed of a mix of finalized and non-finalized objects. Finalization order is unspecified, so it can appear random. 
A finalized object must behave in a sane manner when non-finalized objects interact with it, and a non-finalized object must be able to tolerate the finalization of an arbitrary subset of its referents.\nAll finalized: All objects in a cyclic isolate are finalized before any of them are cleared.\nMix of finalized and cleared: The objects can be cleared serially or concurrently (but with the GIL held); either way, some will finish before others. A finalized object must be able to tolerate the clearing of a subset of its referents. PEP 442 calls this stage \u201ccyclic trash\u201d.\nLeaked: If a cyclic isolate still exists after all objects in the group have been finalized and cleared, then the objects remain indefinitely uncollectable (see\ngc.garbage\n). It is a bug if a cyclic isolate reaches this stage\u2014it means thetp_clear\nmethods of the participating objects have failed to break the reference cycle as required.\nIf tp_clear\ndid not exist, then Python would have no\nway to safely break a reference cycle. Simply destroying an object in a cyclic\nisolate would result in a dangling pointer, triggering undefined behavior when\nan object referencing the destroyed object is itself destroyed. The clearing\nstep makes object destruction a two-phase process: first\ntp_clear\nis called to partially destroy the objects\nenough to detangle them from each other, then\ntp_dealloc\nis called to complete the destruction.\nUnlike clearing, finalization is not a phase of destruction. A finalized\nobject must still behave properly by continuing to fulfill its design\ncontracts. 
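The "all finalized before any cleared" stage above (PEP 442) means a finalizer running inside a cycle can still reach its referents intact. A CPython-specific sketch with an illustrative Member class:

```python
import gc

seen = []

class Member:
    def __init__(self, name, partner=None):
        self.name = name
        self.partner = partner

    def __del__(self):
        # PEP 442: every object in the cycle is finalized before any of
        # them is cleared, so our referent's attributes are still intact.
        seen.append(self.partner.name)

a = Member("a")
b = Member("b", partner=a)
a.partner = b                   # close the cycle
del a, b
gc.collect()
assert sorted(seen) == ["a", "b"]
```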
An object\u2019s finalizer is allowed to execute arbitrary Python code,\nand is even allowed to prevent the impending destruction by adding a reference.\nThe finalizer is only related to destruction by call order\u2014if it runs, it runs\nbefore destruction, which starts with tp_clear\n(if\ncalled) and concludes with tp_dealloc\n.\nThe finalization step is not necessary to safely reclaim the objects in a cyclic isolate, but its existence makes it easier to design types that behave in a sane manner when objects are cleared. Clearing an object might necessarily leave it in a broken, partially destroyed state\u2014it might be unsafe to call any of the cleared object\u2019s methods or access any of its attributes. With finalization, only finalized objects can possibly interact with cleared objects; non-finalized objects are guaranteed to interact with only non-cleared (but potentially finalized) objects.\nTo summarize the possible interactions:\nA non-finalized object might have references to or from non-finalized and finalized objects, but not to or from cleared objects.\nA finalized object might have references to or from non-finalized, finalized, and cleared objects.\nA cleared object might have references to or from finalized and cleared objects, but not to or from non-finalized objects.\nWithout any reference cycles, an object can be simply destroyed once its last\nreference is deleted; the finalization and clearing steps are not necessary to\nsafely reclaim unused objects. However, it can be useful to automatically call\ntp_finalize\nand tp_clear\nbefore destruction anyway because type design is simplified when all objects\nalways experience the same series of events regardless of whether they\nparticipated in a cyclic isolate. 
Python currently only calls\ntp_finalize\nand tp_clear\nas\nneeded to destroy a cyclic isolate; this may change in a future version.\nFunctions\u00b6\nTo allocate and free memory, see Allocating Objects on the Heap.\n-\nvoid PyObject_CallFinalizer(PyObject *op)\u00b6\nFinalizes the object as described in\ntp_finalize\n. Call this function (orPyObject_CallFinalizerFromDealloc()\n) instead of callingtp_finalize\ndirectly because this function may deduplicate multiple calls totp_finalize\n. Currently, calls are only deduplicated if the type supports garbage collection (i.e., thePy_TPFLAGS_HAVE_GC\nflag is set); this may change in the future.Added in version 3.4.\n-\nint PyObject_CallFinalizerFromDealloc(PyObject *op)\u00b6\nSame as\nPyObject_CallFinalizer()\nbut meant to be called at the beginning of the object\u2019s destructor (tp_dealloc\n). There must not be any references to the object. If the object\u2019s finalizer resurrects the object, this function returns -1; no further destruction should happen. Otherwise, this function returns 0 and destruction can continue normally.Added in version 3.4.\nSee also\ntp_dealloc\nfor example code.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2516} +{"url": "https://docs.python.org/3/c-api/type.html", "title": "Type Objects", "content": "Type Objects\u00b6\n-\ntype PyTypeObject\u00b6\n- Part of the Limited API (as an opaque struct).\nThe C structure of the objects used to describe built-in types.\n-\nPyTypeObject PyType_Type\u00b6\n- Part of the Stable ABI.\nThis is the type object for type objects; it is the same object as\ntype\nin the Python layer.\n-\nint PyType_Check(PyObject *o)\u00b6\nReturn non-zero if the object o is a type object, including instances of types derived from the standard type object. Return 0 in all other cases. 
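The distinction between PyType_Check() (any type object) and PyType_CheckExact() (exactly type, not a metaclass subtype) has a direct Python-level analogue; a short sketch with an illustrative metaclass:

```python
# Every class is an instance of type, mirroring PyType_Check():
assert isinstance(int, type)
assert isinstance(type("Spam", (), {}), type)

# PyType_CheckExact() corresponds to being exactly `type`,
# not an instance of a metaclass derived from it:
class Meta(type):
    pass

class WithMeta(metaclass=Meta):
    pass

assert type(WithMeta) is Meta          # not exactly `type`
assert isinstance(WithMeta, type)      # but still a type object
```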
This function always succeeds.\n-\nint PyType_CheckExact(PyObject *o)\u00b6\nReturn non-zero if the object o is a type object, but not a subtype of the standard type object. Return 0 in all other cases. This function always succeeds.\n-\nunsigned int PyType_ClearCache()\u00b6\n- Part of the Stable ABI.\nClear the internal lookup cache. Return the current version tag.\n-\nunsigned long PyType_GetFlags(PyTypeObject *type)\u00b6\n- Part of the Stable ABI.\nReturn the\ntp_flags\nmember of type. This function is primarily meant for use withPy_LIMITED_API\n; the individual flag bits are guaranteed to be stable across Python releases, but access totp_flags\nitself is not part of the limited API.Added in version 3.2.\nChanged in version 3.4: The return type is now\nunsigned long\nrather thanlong\n.\n-\nPyObject *PyType_GetDict(PyTypeObject *type)\u00b6\nReturn the type object\u2019s internal namespace, which is otherwise only exposed via a read-only proxy (\ncls.__dict__\n). This is a replacement for accessingtp_dict\ndirectly. The returned dictionary must be treated as read-only.This function is meant for specific embedding and language-binding cases, where direct access to the dict is necessary and indirect access (e.g. via the proxy or\nPyObject_GetAttr()\n) isn\u2019t adequate.Extension modules should continue to use\ntp_dict\n, directly or indirectly, when setting up their own types.Added in version 3.12.\n-\nvoid PyType_Modified(PyTypeObject *type)\u00b6\n- Part of the Stable ABI.\nInvalidate the internal lookup cache for the type and all of its subtypes. This function must be called after any manual modification of the attributes or base classes of the type.\n-\nint PyType_AddWatcher(PyType_WatchCallback callback)\u00b6\nRegister callback as a type watcher. Return a non-negative integer ID which must be passed to future calls to\nPyType_Watch()\n. In case of error (e.g. 
no more watcher IDs available), return-1\nand set an exception.In free-threaded builds,\nPyType_AddWatcher()\nis not thread-safe, so it must be called at start up (before spawning the first thread).Added in version 3.12.\n-\nint PyType_ClearWatcher(int watcher_id)\u00b6\nClear watcher identified by watcher_id (previously returned from\nPyType_AddWatcher()\n). Return0\non success,-1\non error (e.g. if watcher_id was never registered.)An extension should never call\nPyType_ClearWatcher\nwith a watcher_id that was not returned to it by a previous call toPyType_AddWatcher()\n.Added in version 3.12.\n-\nint PyType_Watch(int watcher_id, PyObject *type)\u00b6\nMark type as watched. The callback granted watcher_id by\nPyType_AddWatcher()\nwill be called wheneverPyType_Modified()\nreports a change to type. (The callback may be called only once for a series of consecutive modifications to type, if_PyType_Lookup()\nis not called on type between the modifications; this is an implementation detail and subject to change.)An extension should never call\nPyType_Watch\nwith a watcher_id that was not returned to it by a previous call toPyType_AddWatcher()\n.Added in version 3.12.\n-\nint PyType_Unwatch(int watcher_id, PyObject *type)\u00b6\nMark type as not watched. This undoes a previous call to\nPyType_Watch()\n. type must not beNULL\n.An extension should never call this function with a watcher_id that was not returned to it by a previous call to\nPyType_AddWatcher()\n.On success, this function returns\n0\n. 
On failure, this function returns-1\nwith an exception set.Added in version 3.12.\n-\ntypedef int (*PyType_WatchCallback)(PyObject *type)\u00b6\nType of a type-watcher callback function.\nThe callback must not modify type or cause\nPyType_Modified()\nto be called on type or any type in its MRO; violating this rule could cause infinite recursion.Added in version 3.12.\n-\nint PyType_HasFeature(PyTypeObject *o, int feature)\u00b6\nReturn non-zero if the type object o sets the feature feature. Type features are denoted by single bit flags.\n-\nint PyType_FastSubclass(PyTypeObject *type, int flag)\u00b6\nReturn non-zero if the type object type sets the subclass flag flag. Subclass flags are denoted by\nPy_TPFLAGS_*_SUBCLASS\n. This function is used by many_Check\nfunctions for common types.See also\nPyObject_TypeCheck()\n, which is used as a slower alternative in_Check\nfunctions for types that don\u2019t come with subclass flags.\n-\nint PyType_IS_GC(PyTypeObject *o)\u00b6\nReturn true if the type object includes support for the cycle detector; this tests the type flag\nPy_TPFLAGS_HAVE_GC\n.\n-\nint PyType_IsSubtype(PyTypeObject *a, PyTypeObject *b)\u00b6\n- Part of the Stable ABI.\nReturn true if a is a subtype of b.\nThis function only checks for actual subtypes, which means that\n__subclasscheck__()\nis not called on b. CallPyObject_IsSubclass()\nto do the same check thatissubclass()\nwould do.\n-\nPyObject *PyType_GenericAlloc(PyTypeObject *type, Py_ssize_t nitems)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGeneric handler for the\ntp_alloc\nslot of a type object. 
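The PyType_IsSubtype() versus PyObject_IsSubclass() distinction above can be sketched from Python: virtual registration on an ABC affects issubclass() (via __subclasscheck__) but not the actual inheritance chain, which is what the C-level subtype check inspects. The Reversible class name is illustrative:

```python
from abc import ABC

class Reversible(ABC):
    pass

# Virtual registration changes what __subclasscheck__ reports ...
Reversible.register(list)
assert issubclass(list, Reversible)

# ... but not the actual MRO, which is what PyType_IsSubtype()
# would consult for a real subtype relationship.
assert Reversible not in list.__mro__
```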
Uses Python\u2019s default memory allocation mechanism to allocate memory for a new instance, zeros the memory, then initializes the memory as if by callingPyObject_Init()\norPyObject_InitVar()\n.Do not call this directly to allocate memory for an object; call the type\u2019s\ntp_alloc\nslot instead.For types that support garbage collection (i.e., the\nPy_TPFLAGS_HAVE_GC\nflag is set), this function behaves likePyObject_GC_New\norPyObject_GC_NewVar\n(except the memory is guaranteed to be zeroed before initialization), and should be paired withPyObject_GC_Del()\nintp_free\n. Otherwise, it behaves likePyObject_New\norPyObject_NewVar\n(except the memory is guaranteed to be zeroed before initialization) and should be paired withPyObject_Free()\nintp_free\n.\n-\nPyObject *PyType_GenericNew(PyTypeObject *type, PyObject *args, PyObject *kwds)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGeneric handler for the\ntp_new\nslot of a type object. Creates a new instance using the type\u2019stp_alloc\nslot and returns the resulting object.\n-\nint PyType_Ready(PyTypeObject *type)\u00b6\n- Part of the Stable ABI.\nFinalize a type object. This should be called on all type objects to finish their initialization. This function is responsible for adding inherited slots from a type\u2019s base class. Return\n0\non success, or return-1\nand sets an exception on error.Note\nIf some of the base classes implements the GC protocol and the provided type does not include the\nPy_TPFLAGS_HAVE_GC\nin its flags, then the GC protocol will be automatically implemented from its parents. On the contrary, if the type being created does includePy_TPFLAGS_HAVE_GC\nin its flags then it must implement the GC protocol itself by at least implementing thetp_traverse\nhandle.\n-\nPyObject *PyType_GetName(PyTypeObject *type)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.11.\nReturn the type\u2019s name. 
Equivalent to getting the type\u2019s\n__name__\nattribute.Added in version 3.11.\n-\nPyObject *PyType_GetQualName(PyTypeObject *type)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.11.\nReturn the type\u2019s qualified name. Equivalent to getting the type\u2019s\n__qualname__\nattribute.Added in version 3.11.\n-\nPyObject *PyType_GetFullyQualifiedName(PyTypeObject *type)\u00b6\n- Part of the Stable ABI since version 3.13.\nReturn the type\u2019s fully qualified name. Equivalent to\nf\"{type.__module__}.{type.__qualname__}\"\n, ortype.__qualname__\niftype.__module__\nis not a string or is equal to\"builtins\"\n.Added in version 3.13.\n-\nPyObject *PyType_GetModuleName(PyTypeObject *type)\u00b6\n- Part of the Stable ABI since version 3.13.\nReturn the type\u2019s module name. Equivalent to getting the\ntype.__module__\nattribute.Added in version 3.13.\n-\nvoid *PyType_GetSlot(PyTypeObject *type, int slot)\u00b6\n- Part of the Stable ABI since version 3.4.\nReturn the function pointer stored in the given slot. If the result is\nNULL\n, this indicates that either the slot isNULL\n, or that the function was called with invalid parameters. Callers will typically cast the result pointer into the appropriate function type.See\nPyType_Slot.slot\nfor possible values of the slot argument.Added in version 3.4.\nChanged in version 3.10:\nPyType_GetSlot()\ncan now accept all types. Previously, it was limited to heap types.\n-\nPyObject *PyType_GetModule(PyTypeObject *type)\u00b6\n- Part of the Stable ABI since version 3.10.\nReturn the module object associated with the given type when the type was created using\nPyType_FromModuleAndSpec()\n.If no module is associated with the given type, sets\nTypeError\nand returnsNULL\n.This function is usually used to get the module in which a method is defined. 
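The rule PyType_GetFullyQualifiedName() applies can be sketched in Python; fully_qualified_name is a hypothetical helper reproducing the documented behaviour (use __qualname__ alone when __module__ is missing, not a string, or "builtins"):

```python
from collections import OrderedDict

def fully_qualified_name(tp: type) -> str:
    # Same rule as PyType_GetFullyQualifiedName(): drop the module
    # prefix for builtins or a missing/non-string __module__.
    module = getattr(tp, "__module__", None)
    if not isinstance(module, str) or module == "builtins":
        return tp.__qualname__
    return f"{module}.{tp.__qualname__}"

assert fully_qualified_name(int) == "int"
assert fully_qualified_name(OrderedDict) == "collections.OrderedDict"
```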
Note that in such a method,\nPyType_GetModule(Py_TYPE(self))\nmay not return the intended result.Py_TYPE(self)\nmay be a subclass of the intended class, and subclasses are not necessarily defined in the same module as their superclass. SeePyCMethod\nto get the class that defines the method. SeePyType_GetModuleByDef()\nfor cases whenPyCMethod\ncannot be used.Added in version 3.9.\n-\nvoid *PyType_GetModuleState(PyTypeObject *type)\u00b6\n- Part of the Stable ABI since version 3.10.\nReturn the state of the module object associated with the given type. This is a shortcut for calling\nPyModule_GetState()\non the result ofPyType_GetModule()\n.If no module is associated with the given type, sets\nTypeError\nand returnsNULL\n.If the type has an associated module but its state is\nNULL\n, returnsNULL\nwithout setting an exception.Added in version 3.9.\n-\nPyObject *PyType_GetModuleByDef(PyTypeObject *type, struct PyModuleDef *def)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI since version 3.13.\nFind the first superclass whose module was created from the given\nPyModuleDef\ndef, and return that module.If no module is found, raises a\nTypeError\nand returnsNULL\n.This function is intended to be used together with\nPyModule_GetState()\nto get module state from slot methods (such astp_init\nornb_add\n) and other places where a method\u2019s defining class cannot be passed using thePyCMethod\ncalling convention.The returned reference is borrowed from type, and will be valid as long as you hold a reference to type. 
Do not release it with\nPy_DECREF()\nor similar.Added in version 3.11.\n-\nint PyType_GetBaseByToken(PyTypeObject *type, void *token, PyTypeObject **result)\u00b6\n- Part of the Stable ABI since version 3.14.\nFind the first superclass in type\u2019s method resolution order whose\nPy_tp_token\ntoken is equal to the given one.If found, set *result to a new strong reference to it and return\n1\n.If not found, set *result to\nNULL\nand return0\n.On error, set *result to\nNULL\nand return-1\nwith an exception set.\nThe result argument may be\nNULL\n, in which case *result is not set. Use this if you need only the return value.The token argument may not be\nNULL\n.Added in version 3.14.\n-\nint PyUnstable_Type_AssignVersionTag(PyTypeObject *type)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nAttempt to assign a version tag to the given type.\nReturns 1 if the type already had a valid version tag or a new one was assigned, or 0 if a new tag could not be assigned.\nAdded in version 3.12.\n-\nint PyType_SUPPORTS_WEAKREFS(PyTypeObject *type)\u00b6\nReturn true if instances of type support creating weak references, false otherwise. This function always succeeds. type must not be\nNULL\n.See also\nCreating Heap-Allocated Types\u00b6\nThe following functions and structs are used to create heap types.\n-\nPyObject *PyType_FromMetaclass(PyTypeObject *metaclass, PyObject *module, PyType_Spec *spec, PyObject *bases)\u00b6\n- Part of the Stable ABI since version 3.12.\nCreate and return a heap type from the spec (see\nPy_TPFLAGS_HEAPTYPE\n).The metaclass metaclass is used to construct the resulting type object. When metaclass is\nNULL\n, the metaclass is derived from bases (or Py_tp_base[s] slots if bases isNULL\n, see below).Metaclasses that override\ntp_new\nare not supported, except iftp_new\nisNULL\n.The bases argument can be used to specify base classes; it can either be only one class or a tuple of classes. 
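At the Python level, building a class from a name, a bases tuple, and a namespace is done by calling type() (or a metaclass), which is roughly what PyType_FromMetaclass() does from a spec. A minimal sketch with illustrative class names:

```python
# type(name, bases, namespace) is the Python-level analogue of
# creating a heap type from a spec plus a bases tuple.
Base = type("Base", (), {"greet": lambda self: "hello"})
Child = type("Child", (Base,), {})

assert Child.__mro__ == (Child, Base, object)
assert Child().greet() == "hello"
```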
If bases is\nNULL\n, the Py_tp_bases\nslot is used instead. If that also is NULL\n, the Py_tp_base\nslot is used instead. If that also is NULL\n, the new type derives from object\n. The module argument can be used to record the module in which the new class is defined. It must be a module object or\nNULL\n. If not NULL\n, the module is associated with the new type and can later be retrieved with PyType_GetModule()\n. The associated module is not inherited by subclasses; it must be specified for each class individually. This function calls\nPyType_Ready()\non the new type. Note that this function does not fully match the behavior of calling\ntype()\nor using the class\nstatement. With user-provided base types or metaclasses, prefer calling type\n(or the metaclass) over PyType_From*\nfunctions. Specifically:\n__new__()\nis not called on the new class (and it must be set to type.__new__\n).\n__init__()\nis not called on the new class.\n__init_subclass__()\nis not called on any bases.\n__set_name__()\nis not called on new descriptors.\nAdded in version 3.12.\n-\nPyObject *PyType_FromModuleAndSpec(PyObject *module, PyType_Spec *spec, PyObject *bases)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.10.\nEquivalent to\nPyType_FromMetaclass(NULL, module, spec, bases)\n. Added in version 3.9.\nChanged in version 3.10: The function now accepts a single class as the bases argument and\nNULL\nas the tp_doc\nslot.\nChanged in version 3.12: The function now finds and uses a metaclass corresponding to the provided base classes. Previously, only\ntype\ninstances were returned. The\ntp_new\nof the metaclass is ignored, which may result in incomplete initialization. Creating classes whose metaclass overrides tp_new\nis deprecated.\nChanged in version 3.14: Creating classes whose metaclass overrides\ntp_new\nis no longer allowed.\n-\nPyObject *PyType_FromSpecWithBases(PyType_Spec *spec, PyObject *bases)\u00b6\n- Return value: New reference. 
Part of the Stable ABI since version 3.3.\nEquivalent to\nPyType_FromMetaclass(NULL, NULL, spec, bases)\n. Added in version 3.3.\nChanged in version 3.12: The function now finds and uses a metaclass corresponding to the provided base classes. Previously, only\ntype\ninstances were returned. The\ntp_new\nof the metaclass is ignored, which may result in incomplete initialization. Creating classes whose metaclass overrides tp_new\nis deprecated.\nChanged in version 3.14: Creating classes whose metaclass overrides\ntp_new\nis no longer allowed.\n-\nPyObject *PyType_FromSpec(PyType_Spec *spec)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nEquivalent to\nPyType_FromMetaclass(NULL, NULL, spec, NULL)\n.\nChanged in version 3.12: The function now finds and uses a metaclass corresponding to the base classes provided in Py_tp_base[s] slots. Previously, only\ntype\ninstances were returned. The\ntp_new\nof the metaclass is ignored, which may result in incomplete initialization. Creating classes whose metaclass overrides tp_new\nis deprecated.\nChanged in version 3.14: Creating classes whose metaclass overrides\ntp_new\nis no longer allowed.\n-\nint PyType_Freeze(PyTypeObject *type)\u00b6\n- Part of the Stable ABI since version 3.14.\nMake a type immutable: set the\nPy_TPFLAGS_IMMUTABLETYPE\nflag. All base classes of type must be immutable.\nOn success, return\n0\n. On error, set an exception and return -1\n. The type must not be used before it\u2019s made immutable. For example, type instances must not be created before the type is made immutable.\nAdded in version 3.14.\n-\ntype PyType_Spec\u00b6\n- Part of the Stable ABI (including all members).\nStructure defining a type\u2019s behavior.\n-\nconst char *name\u00b6\nName of the type, used to set\nPyTypeObject.tp_name\n.\n-\nint basicsize\u00b6\nIf positive, specifies the size of the instance in bytes. 
It is used to set\nPyTypeObject.tp_basicsize\n. If zero, specifies that\ntp_basicsize\nshould be inherited. If negative, the absolute value specifies how much space instances of the class need in addition to the superclass. Use\nPyObject_GetTypeData()\nto get a pointer to subclass-specific memory reserved this way. For negative basicsize\n, Python will insert padding when needed to meet tp_basicsize\n\u2019s alignment requirements.\nChanged in version 3.12: Previously, this field could not be negative.\n-\nint itemsize\u00b6\nSize of one element of a variable-size type, in bytes. Used to set\nPyTypeObject.tp_itemsize\n. See tp_itemsize\ndocumentation for caveats. If zero,\ntp_itemsize\nis inherited. Extending arbitrary variable-sized classes is dangerous, since some types use a fixed offset for variable-sized memory, which can then overlap fixed-sized memory used by a subclass. To help prevent mistakes, inheriting itemsize\nis only possible in the following situations:\nThe base is not variable-sized (its\ntp_itemsize\nis zero).\nThe requested\nPyType_Spec.basicsize\nis positive, suggesting that the memory layout of the base class is known.\nThe requested\nPyType_Spec.basicsize\nis zero, suggesting that the subclass does not access the instance\u2019s memory directly.\nWith the\nPy_TPFLAGS_ITEMS_AT_END\nflag.\n-\nunsigned int flags\u00b6\nType flags, used to set\nPyTypeObject.tp_flags\n. If the\nPy_TPFLAGS_HEAPTYPE\nflag is not set, PyType_FromSpecWithBases()\nsets it automatically.\n-\nPyType_Slot *slots\u00b6\nArray of\nPyType_Slot\nstructures. 
Terminated by the special slot value {0, NULL}\n. Each slot ID should be specified at most once.\n-\ntype PyType_Slot\u00b6\n- Part of the Stable ABI (including all members).\nStructure defining optional functionality of a type, containing a slot ID and a value pointer.\n-\nint slot\u00b6\nA slot ID.\nSlot IDs are named like the field names of the structures\nPyTypeObject\n, PyNumberMethods\n, PySequenceMethods\n, PyMappingMethods\nand PyAsyncMethods\nwith an added Py_\nprefix. For example, use:\nPy_nb_add\nto set PyNumberMethods.nb_add\nAn additional slot is supported that does not correspond to a\nPyTypeObject\nstruct field: Py_tp_token\n.\nThe following \u201coffset\u201d fields cannot be set using\nPyType_Slot\n:\ntp_weaklistoffset\n(use Py_TPFLAGS_MANAGED_WEAKREF\ninstead if possible)\ntp_dictoffset\n(use Py_TPFLAGS_MANAGED_DICT\ninstead if possible)\ntp_vectorcall_offset\n(use \"__vectorcalloffset__\"\nin PyMemberDef)\nIf it is not possible to switch to a\nMANAGED\nflag (for example, for vectorcall or to support Python older than 3.12), specify the offset in Py_tp_members\n. See PyMemberDef documentation for details.\nThe following internal fields cannot be set at all when creating a heap type:\ntp_dict\n,\ntp_mro\n,\ntp_cache\n,\ntp_subclasses\n, and\ntp_weaklist\n.\nSetting\nPy_tp_bases\nor Py_tp_base\nmay be problematic on some platforms. To avoid issues, use the bases argument of PyType_FromSpecWithBases()\ninstead.\nChanged in version 3.9: Slots in\nPyBufferProcs\nmay be set in the unlimited API.\nChanged in version 3.11:\nbf_getbuffer\nand bf_releasebuffer\nare now available under the limited API.\nChanged in version 3.14: The field\ntp_vectorcall\ncan now be set using Py_tp_vectorcall\n. See the field\u2019s documentation for details.\n-\nvoid *pfunc\u00b6\nThe desired value of the slot. 
In most cases, this is a pointer to a function.\npfunc values may not be\nNULL\n, except for the following slots:\nPy_tp_token\n(for clarity, prefer Py_TP_USE_SPEC\nrather than NULL\n)\n-\nPy_tp_token\u00b6\n- Part of the Stable ABI since version 3.14.\nA\nslot\nthat records a static memory layout ID for a class. If the\nPyType_Spec\nof the class is statically allocated, the token can be set to the spec using the special value Py_TP_USE_SPEC\n:\nstatic PyType_Slot foo_slots[] = {\n    {Py_tp_token, Py_TP_USE_SPEC},\nIt can also be set to an arbitrary pointer, but you must ensure that:\nThe pointer outlives the class, so it\u2019s not reused for something else while the class exists.\nIt \u201cbelongs\u201d to the extension module where the class lives, so it will not clash with other extensions.\nUse\nPyType_GetBaseByToken()\nto check if a class\u2019s superclass has a given token \u2013 that is, check whether the memory layout is compatible.\nTo get the token for a given class (without considering superclasses), use\nPyType_GetSlot()\nwith Py_tp_token\n.\nAdded in version 3.14.\n-\nPy_TP_USE_SPEC\u00b6\n- Part of the Stable ABI since version 3.14.\nUsed as a value with\nPy_tp_token\nto set the token to the class\u2019s PyType_Spec\n. Expands to NULL\n.\nAdded in version 3.14.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 5103}
+{"url": "https://docs.python.org/3/library/asyncore.html", "title": "asyncore \u2014 Asynchronous socket handler", "content": "asyncore\n\u2014 Asynchronous socket handler\u00b6\nDeprecated since version 3.6, removed in version 3.12.\nThis module is no longer part of the Python standard library. It was removed in Python 3.12 after being deprecated in Python 3.6. 
The removal was decided in PEP 594.\nApplications should use the asyncio\nmodule instead.\nThe last version of Python that provided the asyncore\nmodule was\nPython 3.11.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 97}