Columns: repo (string, 1 value), instance_id (string, 20-22 chars), problem_statement (string, 126-60.8k chars), merge_commit (string, 40 chars), base_commit (string, 40 chars)
python/cpython
python__cpython-118311
# Incorrect example in docs for enum.Enum.__new__

# Documentation

In the documentation for [3.12](https://docs.python.org/3.12/library/enum.html#enum.Enum.__new__) and [3.13](https://docs.python.org/3.13/library/enum.html#enum.Enum.__new__), the example for `enum.Enum.__new__` is incorrect. Quote:

```py
from enum import Enum

class MyIntEnum(Enum):
    SEVENTEEN = '1a', 16
```

> results in the call int('1a', 16) and a value of 17 for the member.

Should be:

```py
from enum import Enum

class MyIntEnum(int, Enum):
    SEVENTEEN = '1a', 16
```

> results in the call int('1a', 16) and a value of 26 for the member.

Additionally, the code example is followed by a `note` which is not resolving correctly (seemingly because it is missing a space):

![Screenshot 2024-04-26 at 11 30 57](https://github.com/python/cpython/assets/9484085/44062ee6-6c71-4df0-b6e5-b41b8c7cc8ea)

<!-- gh-linked-prs -->
### Linked PRs
* gh-118311
* gh-118699
<!-- /gh-linked-prs -->
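A runnable check of the corrected example (the member name `SEVENTEEN` is the documentation's own, kept here even though the resulting value is 26):

```python
from enum import Enum

class MyIntEnum(int, Enum):
    # With int mixed in, the tuple is forwarded to int.__new__,
    # i.e. int('1a', 16), which is 26.
    SEVENTEEN = '1a', 16

assert MyIntEnum.SEVENTEEN.value == 26
```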
48e52fe2c9a7b33671f6b5d1420a71a6f31ad64b
44a9f3db2b40ba41999002799a74e6b6f2a3a50a
python/cpython
python__cpython-118307
# Update JIT compilation to use LLVM 18

Today, the JIT compiles using LLVM 16, so we should consider updating to a later version.

<!-- gh-linked-prs -->
### Linked PRs
* gh-118307
<!-- /gh-linked-prs -->
8b56d82c59c2983b4292a7f506982f2cab352bb2
8e4fb5d260e529c9d4ca60980225fbd00dd5c3c8
python/cpython
python__cpython-118298
# _Py_FinishPendingCalls() Doesn't Necessarily Run All Remaining Pending Calls

# Bug report

### Bug description:

In Python/ceval_gil.c, `_Py_FinishPendingCalls()` calls `make_pending_calls()` once. `make_pending_calls()` will fail with the first pending call that fails (returns a non-zero value), leaving any remaining pending calls in the queue. `_Py_FinishPendingCalls()` basically throws away the error and walks away. Instead, `_Py_FinishPendingCalls()` should keep trying until there are no pending calls left in the queue.

(I found this while working on gh-110693.)

### CPython versions tested on:

3.13

### Operating systems tested on:

_No response_

<!-- gh-linked-prs -->
### Linked PRs
* gh-118298
* gh-121806
<!-- /gh-linked-prs -->
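The proposed behavior can be sketched in pure Python (a hedged analogue, not the C implementation): keep draining the queue even when an individual call fails, instead of stopping at the first failure.

```python
from collections import deque

def finish_pending_calls(queue):
    """Drain *queue*, collecting errors instead of stopping at the first one."""
    errors = []
    while queue:
        call = queue.popleft()
        try:
            call()
        except Exception as exc:
            # A failing call must not strand the calls queued behind it.
            errors.append(exc)
    return errors

q = deque([lambda: None, lambda: 1 / 0, lambda: None])
errors = finish_pending_calls(q)
assert not q and len(errors) == 1
```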
985dd8e17b55ae35fc31546384fc9364f2f59f86
6522f0e438a8c56a8f3cce2095b193ea6e3f5016
python/cpython
python__cpython-118315
# Spawning multiprocessing workers on Windows changes mouse cursor

# Bug report

### Bug description:

The multiprocessing module uses the CreateProcess function on Windows to spawn processes. CreateProcess has a flag for controlling whether Windows should display feedback to the user in the form of the "Working in Background" cursor (i.e. pointer and hourglass/circle) while the process is launching (see `STARTF_FORCEONFEEDBACK` in the [docs for STARTUPINFO](https://learn.microsoft.com/en-us/windows/win32/api/processthreadsapi/ns-processthreadsapi-startupinfow)). Since multiprocessing doesn't specify any flags, it gets the default behavior, resulting in this launch-feedback mouse cursor being displayed whenever a worker process is spawned.

Since processes in multiprocessing are used as an alternative to threads, not for launching applications but for running background tasks in an existing application, I believe it would make sense to disable the launch feedback. The application is already running and can provide its own UI for displaying the state of background tasks, just as it would when using threads, so the extra feedback is not needed.

The launch feedback is more confusing than helpful when launching a background task, because it's not tied to the lifetime of the task: it's just displayed for some arbitrary period of time until it times out waiting for the worker process to display its UI, which it never will. It's particularly confusing in the case of process pools: the user will see some feedback the first time a task is submitted to the pool (when it starts a worker process) but nothing when subsequent tasks are submitted. And, as mentioned above, it doesn't actually tell the user anything about whether the task is finished, so it's not a useful feature to rely on for this. (Tkinter applications that need a reliable busy cursor can presumably use the new functions added in #72684.)

To fix this, pass the `STARTF_FORCEOFFFEEDBACK` flag in the [STARTUPINFO structure](https://learn.microsoft.com/en-us/windows/win32/api/processthreadsapi/ns-processthreadsapi-startupinfow) when calling CreateProcess. The relevant Python code is in the [multiprocessing.popen_spawn_win32](https://github.com/python/cpython/blob/main/Lib/multiprocessing/popen_spawn_win32.py) module, which currently passes `None` for this parameter.

Screenshot of running a background task with `threading.Thread` (no mouse cursor change):

![taskdemo1](https://github.com/python/cpython/assets/556423/08019680-1999-4947-bb07-17ee053c84f3)

Screenshot of running the same task with `multiprocessing.Process` (mouse cursor changes when launching the task):

![taskdemo2](https://github.com/python/cpython/assets/556423/71625e4f-45e5-4702-9ec9-f9ae0ff77695)

**Steps to reproduce the issue**:

1. Launch the attached [taskdemo.txt](https://github.com/python/cpython/files/15119358/taskdemo.txt) (which is a Python script, but GitHub doesn't allow attaching `.pyw` files) using `pythonw.exe`. The issue is not reproducible from a console application.
2. Click the "Run Thread task" button. Note any mouse cursor changes. (It will finish after 10 seconds.)
3. Click the "Run Process task" button. Note any mouse cursor changes.

**Expected behavior**: The two buttons should exhibit identical behavior. The mouse cursor should not change in either case.

**Actual behavior**: Unlike the "Run Thread task" button, the "Run Process task" button will result in a "Working in Background" cursor being displayed for a few seconds. The cursor will then change back before the task is actually completed (which happens after 10 seconds, as indicated by the app's UI).

The attached demo script lets you monkeypatch the multiprocessing module at runtime by checking the "Enable patch" checkbox. The steps above will then produce the expected behavior.

### CPython versions tested on:

3.10, 3.11

### Operating systems tested on:

Windows

<!-- gh-linked-prs -->
### Linked PRs
* gh-118315
<!-- /gh-linked-prs -->
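A hedged sketch of the proposed fix using the `subprocess` module's `STARTUPINFO` wrapper (the flag's numeric value is from the Win32 headers; `subprocess.STARTUPINFO` only exists on Windows, so it is guarded here):

```python
import subprocess
import sys

# Value of STARTF_FORCEOFFFEEDBACK from the Win32 headers; its counterpart
# STARTF_FORCEONFEEDBACK is 0x00000040.
STARTF_FORCEOFFFEEDBACK = 0x00000080

def make_startupinfo():
    """Build a STARTUPINFO that suppresses the launch-feedback cursor."""
    if sys.platform != "win32":
        return None  # the parameter is ignored off Windows
    si = subprocess.STARTUPINFO()
    si.dwFlags |= STARTF_FORCEOFFFEEDBACK
    return si
```

The same object could be passed where `multiprocessing.popen_spawn_win32` currently passes `None` to CreateProcess.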
133c1a7cdb19dd9317e7607ecf8f4fd4fb5842f6
f5b7e397c0a0e180257450843ab622ab8783adf6
python/cpython
python__cpython-118316
# Fix inspect.signature() of operator.{attrgetter,itemgetter,methodcaller} instances

# Bug report

### Bug description:

On Python 3.12.3 and 3.11.9, it looks like the `inspect.Signature` object for operator's `attrgetter`, `itemgetter`, and `methodcaller` classes doesn't match their `__call__` method signature:

```pycon
>>> import inspect
>>> import operator
>>> inspect.signature(operator.attrgetter("spam"))
<Signature (*args, **kwargs)>
>>> inspect.signature(operator.itemgetter("spam"))
<Signature (*args, **kwargs)>
>>> inspect.signature(operator.methodcaller("spam"))
<Signature (*args, **kwargs)>
```

but their `__call__` methods only accept a single argument:

```python
# attrgetter / itemgetter
def __call__(self, obj):
    return self._call(obj)

# methodcaller
def __call__(self, obj):
    return getattr(obj, self._name)(*self._args, **self._kwargs)
```

### CPython versions tested on:

3.11, 3.12

### Operating systems tested on:

macOS

<!-- gh-linked-prs -->
### Linked PRs
* gh-118316
<!-- /gh-linked-prs -->
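For contrast, `inspect.signature()` already reports the precise signature for a pure-Python callable instance, because it introspects the class's `__call__`. The class below is a hypothetical stand-in written only to mirror `operator.itemgetter`'s behavior:

```python
import inspect

class ItemGetterLike:
    """Hypothetical pure-Python stand-in for operator.itemgetter."""
    def __init__(self, item):
        self._item = item

    def __call__(self, obj):
        return obj[self._item]

# inspect follows type(...).__call__ and drops `self`:
assert str(inspect.signature(ItemGetterLike("spam"))) == "(obj)"
assert ItemGetterLike("spam")({"spam": 1}) == 1
```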
444ac0b7a64ff6b6caba9c2731bd33151ce18ad1
51c70de998ead35674bf4b2b236e9ce8e89d17b4
python/cpython
python__cpython-118277
# Generator .close does not release resources

# Bug report

### Bug description:

Calling a generator's `.close()` in Python 3.12 does not release resources used by local variables as it used to do in earlier versions of Python. Wrapping the generator body in a `try: ... except GeneratorExit: pass` releases the resources as expected. The following shows the difference in space usage between wrapping the code with `try`-`except` and not:

```python
def generator_using_a_lot_of_space():
    a_big_set = set(range(10_000_000))
    yield 42

g = generator_using_a_lot_of_space()
input('A <press enter>')  # 13.4 MB
print(next(g))
input('B <press enter>')  # 576.6 MB
g.close()
input('C <press enter>')  # 576.6 MB <== space usage used to drop to 13-14 MB


def generator_using_a_lot_of_space():
    a_big_set = set(range(10_000_000))
    try:
        yield 42
    except GeneratorExit:
        pass

g = generator_using_a_lot_of_space()
input('D <press enter>')  # 14.6 MB
print(next(g))
input('E <press enter>')  # 575.6 MB
g.close()
input('F <press enter>')  # 14.6 MB <== drop in space usage as expected
```

The above space measurements were done with Python 3.12.3 (tags/v3.12.3:f6650f9, Apr 9 2024, 14:05:25) [MSC v.1938 64 bit (AMD64)] on win32, looking at the space usage reported by the Windows 11 Task Manager.

### CPython versions tested on:

3.12

### Operating systems tested on:

Windows

<!-- gh-linked-prs -->
### Linked PRs
* gh-118277
* gh-118451
* gh-118478
<!-- /gh-linked-prs -->
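The release can also be observed without Task Manager by taking a weak reference to an object held in a generator local. This sketch exercises the `try`/`except GeneratorExit` workaround path, which frees the local on `close()` in all versions:

```python
import weakref

class Resource:
    """Stands in for the large local (e.g. the big set)."""

def gen():
    big = Resource()
    try:
        yield weakref.ref(big)
    except GeneratorExit:
        pass  # swallowing GeneratorExit lets the frame finish and be freed

g = gen()
ref = next(g)
assert ref() is not None   # alive while the generator is suspended
g.close()
assert ref() is None       # the local was released by close()
```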
1f16b4ce569f222af74fcbb7b2ef98eee2398d20
f7747f73a9d9b9b1661c1a69cd8d934d56bbd3b3
python/cpython
python__cpython-118273
# Support more options for reading/writing images in Tkinter

# Feature or enhancement

PhotoImage has a method `write()` which writes the image (or a part of it) to a file. But it lacks methods for two other related subcommands: `read`, to read the image from a file, and `data`, to get the image data. I propose to add methods `read()` and `data()`. Also, the `write()` method can support two new options: `background` and `grayscale`.

<!-- gh-linked-prs -->
### Linked PRs
* gh-118273
<!-- /gh-linked-prs -->
709ca90a00e66cea432096a7ba61aa6459d2a9a7
fc50f1bdbad3aa52d7cbd3cb836a35806266ec54
python/cpython
python__cpython-118355
# Generalize `path_t` for C level optimizations

# Feature or enhancement

### Proposal:

Quoting @eryksun:

> The implementation of `path_t` could be generalized to support fields to configure the converter to use `wide` regardless of platform, to allow null characters, to allow arbitrary length paths (e.g. no 32767 length limit on Windows), and a new field such as `bytes_input` to determine whether a path result has to be converted back to `bytes`. The option to always use a wide-character path is a generalization of the current behavior on Windows. Argument Clinic would be extended to support the new options.

The implementations of `_path_splitroot_ex()`, `_path_normpath()`, and `_path_abspath()` (if adopted) would benefit, and also the `_path_is*()` helpers on Windows.

### Has this already been discussed elsewhere?

This is a minor feature, which does not need previous discussion elsewhere

### Links to previous discussion of this feature:

- #118089

<!-- gh-linked-prs -->
### Linked PRs
* gh-118355
* gh-119513
* gh-119608
<!-- /gh-linked-prs -->
96b392df303b2cfaea823afcb462c0b455704ce8
f0ed1863bd7a0b9d021fb59e156663a7ec553f0e
python/cpython
python__cpython-118536
# Skip individual tests (not entire files) when running emulated JIT CI

Since we don't have AArch64 runners, four JIT CI jobs are [run under emulation](https://github.com/python/cpython/blob/345e1e04ec72698a1e257c805b3840d9f55eb80d/.github/workflows/jit.yml#L126-L144). Unfortunately, our test suite wasn't exactly designed to be run using `qemu-user`. That's okay, but it means that these jobs currently `--exclude` [a bunch of test files](https://github.com/python/cpython/blob/345e1e04ec72698a1e257c805b3840d9f55eb80d/.github/workflows/jit.yml#L74-L85) for low-level OS functionality that currently either crash, fail, or hang under emulation.

Rather than excluding the entire files, we should probably maintain a text file of individual tests and use our test runner's `--ignorefile` option instead. That way, we would still have pretty good coverage of these modules.

Anyone interested in doing this?

<!-- gh-linked-prs -->
### Linked PRs
* gh-118536
* gh-118564
* gh-118661
<!-- /gh-linked-prs -->
52485967813acdb35c274e1b2eaedd34e9ac01fc
9c14ed06188aa4d462cd0fc4218c6023f9bf03cb
python/cpython
python__cpython-118247
# JIT CI is broken due to `test_pathlib` and `test_posixpath`

Looks like something in emulation/QEMU changed, and now `test_pathlib` and `test_posixpath` are causing the GHA workflow to fail for Linux Arm.

<!-- gh-linked-prs -->
### Linked PRs
* gh-118247
<!-- /gh-linked-prs -->
8942bf41dac49149a77f5396ab086d340de9c009
93b7ed7c6b1494f41818fa571b1843ca3dfe1bd1
python/cpython
python__cpython-118237
# Full Grammar specification lists invalid syntax

The [Full Grammar specification](https://docs.python.org/3.13/reference/grammar.html) lists these production rules:

```
type_param:
    | NAME [type_param_bound]
    | '*' NAME ':' expression
    | '*' NAME
    | '**' NAME ':' expression
    | '**' NAME

for_if_clause:
    | 'async' 'for' star_targets 'in' ~ disjunction ('if' disjunction )*
    | 'for' star_targets 'in' ~ disjunction ('if' disjunction )*
    | 'async'? 'for' (bitwise_or (',' bitwise_or)* [',']) !'in'

starred_expression:
    | '*' expression
    | '*'
```

However, some of the alternatives are actually invalid syntax: in the [grammar file](https://github.com/python/cpython/blob/main/Grammar/python.gram) we can find that they're handled by raising a syntax error. For example:

```
for_if_clause[comprehension_ty]:
    [...]
    | 'async'? 'for' (bitwise_or (',' bitwise_or)* [',']) !'in' {
        RAISE_SYNTAX_ERROR("'in' expected after for-loop variables") }
```

These should be hidden in the same way as other invalid rules.

<!-- gh-linked-prs -->
### Linked PRs
* gh-118237
* gh-118309
* gh-119731
<!-- /gh-linked-prs -->
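The invalid-syntax alternatives exist only to produce better error messages; for example, the `!'in'` alternative of `for_if_clause` fires for a comprehension that is missing its `in` clause:

```python
# A comprehension with no 'in' matches the error-reporting alternative
# and raises SyntaxError rather than parsing.
try:
    compile("[x for y]", "<demo>", "eval")
except SyntaxError as exc:
    message = str(exc)
else:
    message = None

assert message is not None
```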
ef940dec409f0a9e4f353c6188990aeb3ad4ffb4
09c29475813ff2a763931fc0b45aaaef57cd2ac7
python/cpython
python__cpython-118228
# Support more options for copying images in Tkinter

# Feature or enhancement

Tk's photo image has the `copy` subcommand, which supports a number of options (https://www.tcl.tk/man/tcl8.4/TkCmd/photo.htm#M17):

```
imageName copy sourceImage ?option value(s) ...?
    Copies a region from the image called sourceImage (which must be a photo
    image) to the image called imageName, possibly with pixel zooming and/or
    subsampling. If no options are specified, this command copies the whole
    of sourceImage into imageName, starting at coordinates (0,0) in
    imageName. The following options may be specified:

    -from x1 y1 x2 y2
        Specifies a rectangular sub-region of the source image to be copied.
        (x1,y1) and (x2,y2) specify diagonally opposite corners of the
        rectangle. If x2 and y2 are not specified, the default value is the
        bottom-right corner of the source image. The pixels copied will
        include the left and top edges of the specified rectangle but not
        the bottom or right edges. If the -from option is not given, the
        default is the whole source image.

    -to x1 y1 x2 y2
        Specifies a rectangular sub-region of the destination image to be
        affected. (x1,y1) and (x2,y2) specify diagonally opposite corners of
        the rectangle. If x2 and y2 are not specified, the default value is
        (x1,y1) plus the size of the source region (after subsampling and
        zooming, if specified). If x2 and y2 are specified, the source
        region will be replicated if necessary to fill the destination
        region in a tiled fashion.

    -shrink
        Specifies that the size of the destination image should be reduced,
        if necessary, so that the region being copied into is at the
        bottom-right corner of the image. This option will not affect the
        width or height of the image if the user has specified a non-zero
        value for the -width or -height configuration option, respectively.

    -zoom x y
        Specifies that the source region should be magnified by a factor of
        x in the X direction and y in the Y direction. If y is not given,
        the default value is the same as x. With this option, each pixel in
        the source image will be expanded into a block of x x y pixels in
        the destination image, all the same color. x and y must be greater
        than 0.

    -subsample x y
        Specifies that the source image should be reduced in size by using
        only every xth pixel in the X direction and yth pixel in the Y
        direction. Negative values will cause the image to be flipped about
        the Y or X axes, respectively. If y is not given, the default value
        is the same as x.

    -compositingrule rule
        Specifies how transparent pixels in the source image are combined
        with the destination image. When a compositing rule of overlay is
        set, the old contents of the destination image are visible, as if
        the source image were printed on a piece of transparent film and
        placed over the top of the destination. When a compositing rule of
        set is set, the old contents of the destination image are discarded
        and the source image is used as-is. The default compositing rule is
        overlay.
```

The Tkinter wrapper provides three `PhotoImage` methods, `copy()`, `zoom()` and `subsample()`, which implement only the `-zoom` and `-subsample` options (but not their combination) and the call without options. They also have different semantics, because they always return a new image, without the possibility of changing a part of an existing image.

I propose to add a new method which supports all options and has semantics closer to the original. Since the name `copy()` is already used, the preliminary name of the new method is `copy_replace()`. It replaces a part of this image with a part of the specified image, possibly with zooming and/or subsampling. It is also possible to add new options to existing methods if they make sense: `from_` to all three of them, and `zoom` and `subsample` to `copy`.

Recent discussion: https://discuss.python.org/t/add-additional-options-for-the-tkinters-photoimage-copy-method/51598. IIRC there was also a discussion on the bug tracker a long time ago, but I cannot find it.

Better suggestions for the method name are welcome. We could implement the copying in the opposite direction: `src.copy_into(dst, ...)` instead of `dst.copy_replace(src, ...)`. It could make choosing the right name simpler, but this is less future proof. What if tomorrow Tk allows copying from bitmap images (currently it is forbidden), or from other sources? We could not use that in Python until we implement a new method on BitmapImage.

<!-- gh-linked-prs -->
### Linked PRs
* gh-118228
<!-- /gh-linked-prs -->
1b639a04cab0e858d90e2ac459fb34b73700701f
09871c922393cba4c85bc29d210d76425e076c1d
python/cpython
python__cpython-118223
# Unable to call iterdump method for Sqlite3 connections

# Bug report

### Bug description:

```python
import sqlite3

def dict_factory(cursor, row):
    d = {}
    for idx, col in enumerate(cursor.description):
        d[col[0]] = row[idx]
    return d

def main():
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    # conn.row_factory = sqlite3.Row  # is ok
    conn.row_factory = dict_factory  # is fail
    cur.executescript("""
        create table if not exists test(
            id integer primary key AUTOINCREMENT
        );
    """)
    cur.close()
    for line in conn.iterdump():
        print(line)

main()
```

### CPython versions tested on:

3.12

### Operating systems tested on:

Windows

<!-- gh-linked-prs -->
### Linked PRs
* gh-118223
* gh-118270
<!-- /gh-linked-prs -->
e38b43c213a8ab2ad9748bac2732af9b58c816ae
796b3fb28057948ea5b98f7eb0c0f3af6a1e276e
python/cpython
python__cpython-118219
# Speed up itertools.pairwise

The pairwise implementation mentions that we can reuse the result tuple.

There's currently some discussion of performance ongoing at https://discuss.python.org/t/nwise-itertools/51718

<!-- gh-linked-prs -->
### Linked PRs
* gh-118219
<!-- /gh-linked-prs -->
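For reference, the docs' pure-Python equivalent of `pairwise()` (which the C implementation optimizes, e.g. by reusing the result tuple) is roughly:

```python
from itertools import tee

def pairwise(iterable):
    # pairwise('ABCD') -> ('A','B'), ('B','C'), ('C','D')
    a, b = tee(iterable)
    next(b, None)
    return zip(a, b)

assert list(pairwise("ABCD")) == [("A", "B"), ("B", "C"), ("C", "D")]
```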
6999d68d2878871493d85dc63599f3d44eada104
b568c2c1ff5c0b1922a6402dc95c588d7f9aa914
python/cpython
python__cpython-118267
# `__future__` imports allow dots before them

# Bug report

### Bug description:

`__future__` imports don't check the amount of dots while checking `from`-`import` nodes for the module name, so they are still valid even if dots are put before the module name.

```pycon
>>> from .__future__ import barry_as_FLUFL
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    from .__future__ import barry_as_FLUFL
ImportError: attempted relative import with no known parent package
>>> 1 <> 2
True
```

Is this intended? Allowing dots before the module name seems improper considering `__future__` is supposed to be part of the stdlib.

### CPython versions tested on:

3.12, 3.13

### Operating systems tested on:

Windows

<!-- gh-linked-prs -->
### Linked PRs
* gh-118267
<!-- /gh-linked-prs -->
7c97dc8c9594c71bd3d1f69758a27de45f57e4c3
67bba9dd0f5b9c2d24c2bc6d239c4502040484af
python/cpython
python__cpython-118213
# mmap lacks error handling (SEH) on Windows which can lead to interpreter crashes

# Crash report

### What happened?

`mmap` reads and writes require structured exception handling (SEH) for correct handling of errors (like hardware read issues, or disk-full write issues) on Windows. See <https://learn.microsoft.com/en-us/windows/win32/api/memoryapi/nf-memoryapi-mapviewoffile#remarks> for a description of the possible exceptions and <https://learn.microsoft.com/en-us/windows/win32/memory/reading-and-writing-from-a-file-view> for how to handle them.

I found this issue when trying to read a file from a corrupt hdd. A normal `open().read()` would raise an `OSError: [Errno 22] Invalid argument` exception, whereas doing the same using `mmap` crashes the interpreter.

I only tested on Windows 7 and Python 3.8, but from looking at the Microsoft docs and the Python source code, this issue should exist for all Windows versions and the latest Python source. (I don't have access to the broken file from a newer machine.)

I think there is nothing here which guards against these errors: https://github.com/python/cpython/blob/258408239a4fe8a14919d81b73a16e2cfa374050/Modules/mmapmodule.c#L294-L313

```python
with open(path, "rb") as fp:
    with mmap.mmap(fp.fileno(), 0, access=mmap.ACCESS_READ) as fr:
        fr.read(1024*8)  # this crashes when `path` is on a corrupt hdd, for example
```

### CPython versions tested on:

3.8

### Operating systems tested on:

Windows

### Output from running 'python -VV' on the command line:

_No response_

<!-- gh-linked-prs -->
### Linked PRs
* gh-118213
* gh-118887
<!-- /gh-linked-prs -->
e85e8deaf3220c8d12b69294e45645aaf20187b9
7e6fcab20003b07621dc02ea78d6ea2fda500371
python/cpython
python__cpython-118208
# COMMON_FIELDS macro in funcobject.h leaks to user code

# Bug report

### Bug description:

The `COMMON_FIELDS` macro is defined [here](https://github.com/python/cpython/blob/7e87d30f1f30d39c3005e03195f3d7648b38a1e2/Include/cpython/funcobject.h#L11-L19), and can end up being included in user code (via `Python.h`). "COMMON_FIELDS" is not a unique enough identifier, and can conflict with other uses of the same name in user C/C++ code (this has happened to us at Meta).

The macro is used only twice, within the same header where it is defined, so it should be straightforward to `#undef` it after it is used to prevent it from leaking (PR coming up).

### CPython versions tested on:

3.10, 3.12, 3.13

### Operating systems tested on:

Linux, macOS

<!-- gh-linked-prs -->
### Linked PRs
* gh-118208
* gh-118269
<!-- /gh-linked-prs -->
796b3fb28057948ea5b98f7eb0c0f3af6a1e276e
546cbcfa0eeeb533950bd49e30423f3d3bbd5ebe
python/cpython
python__cpython-118452
# Intermittent "unrecognized configuration name" failure on iOS and Android

# Bug report

### Bug description:

The iOS buildbot is seeing an intermittent testing failure in `test_posix.PosixTester.test_confstr`:

```
======================================================================
ERROR: test_confstr (test.test_posix.PosixTester.test_confstr)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/buildbot/Library/Developer/XCTestDevices/4FED991E-7280-4D2F-B63A-C7FECDE66EAD/data/Containers/Bundle/Application/0809D9E4-34B6-4223-A3AD-079D76671D9D/iOSTestbed.app/python/lib/python3.13/test/test_posix.py", line 569, in test_confstr
    self.assertEqual(len(posix.confstr("CS_PATH")) > 0, True)
                         ~~~~~~~~~~~~~^^^^^^^^^^^
ValueError: unrecognized configuration name
----------------------------------------------------------------------
```

See [this PR](https://github.com/python/cpython/pull/118190#issuecomment-2073083883) for the buildbot report; [this build](https://buildbot.python.org/all/#/builders/1380/builds/66) is the resulting failure.

The failure appears to be completely transient, affecting ~1 in 10 builds; the next buildbot run almost always passes, with no changes addressing this issue. I've been unsuccessful reproducing the failure locally.

### CPython versions tested on:

CPython main branch

### Operating systems tested on:

Other

<!-- gh-linked-prs -->
### Linked PRs
* gh-118452
* gh-118453
* gh-126089
* gh-131375
<!-- /gh-linked-prs -->
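The failing assertion exercises `confstr` with a name every POSIX system is required to recognize; on a conforming desktop POSIX platform the equivalent call (via `os`, which exposes the `posix` module's function) succeeds deterministically:

```python
import os

# "CS_PATH" is a standard POSIX configuration name, so on a conforming
# desktop platform this should never raise ValueError.
value = os.confstr("CS_PATH")
assert value is not None and len(value) > 0
```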
9c468e2c5dffb6fa9811fd16e70fa0463bdfce5f
7fabcc727dee52a3e0dfe4f903ad414e93cf2dc9
python/cpython
python__cpython-124473
# AST docs: Fix parameter markup

https://github.com/python/cpython/pull/116129#discussion_r1575794751

I'll try to do this in a bit, but if someone else gets to it first, feel free to pick it up.

<!-- gh-linked-prs -->
### Linked PRs
* gh-124473
* gh-124600
* gh-124705
<!-- /gh-linked-prs -->
09aebb1fbc0c1d771d4942844d5e2077fcdf56c9
274d9ab619b8150a613275835234ea9ef935f21f
python/cpython
python__cpython-118169
# Incorrect argument substitution on Unpack[tuple[...]]

# Bug report

### Bug description:

In the code below, all of the prints should be equivalent:

```python
from typing import Generic, Tuple, TypeVarTuple, Unpack

Ts = TypeVarTuple("Ts")

class Old(Generic[*Ts]): ...
class New[*Ts]: ...

PartOld = Old[int, *Ts]
print(PartOld[str])
print(PartOld[*tuple[str]])
print(PartOld[*Tuple[str]])
print(PartOld[Unpack[tuple[str]]])  # Old[int, typing.Unpack[tuple[str]]]
print(PartOld[Unpack[Tuple[str]]])

PartNew = New[int, *Ts]
print(PartNew[str])
print(PartNew[*tuple[str]])
print(PartNew[*Tuple[str]])
print(PartNew[Unpack[tuple[str]]])  # New[int, typing.Unpack[tuple[str]]]
print(PartNew[Unpack[Tuple[str]]])
```

However, the `Unpack[tuple[...]]` variants print something different. This is because the implementation of `Unpack` doesn't deal correctly with builtin aliases. I'll send a PR.

### CPython versions tested on:

3.12, CPython main branch

### Operating systems tested on:

macOS

<!-- gh-linked-prs -->
### Linked PRs
* gh-118169
* gh-118178
<!-- /gh-linked-prs -->
d0b664ee065e69fc4f1506b00391e093d2d6638d
d687d3fcfaa13b173005897634fc5ab515c8a660
python/cpython
python__cpython-118483
# str(10**10000) hangs if the C `_decimal` module is missing

# Bug report

### Bug description:

The following code

```python
>>> import sys; sys.set_int_max_str_digits(0); 10**10000
```

works in 3.11 but hangs in 3.12. Interrupting it gives this backtrace:

```
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.12/_pylong.py", line 85, in int_to_decimal_string
    return str(int_to_decimal(n))
           ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/_pylong.py", line 77, in int_to_decimal
    result = inner(n, n.bit_length())
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/_pylong.py", line 64, in inner
    return inner(lo, w2) + inner(hi, w - w2) * w2pow(w2)
           ^^^^^^^^^^^^^
  File "/usr/lib/python3.12/_pylong.py", line 64, in inner
    return inner(lo, w2) + inner(hi, w - w2) * w2pow(w2)
           ^^^^^^^^^^^^^
  File "/usr/lib/python3.12/_pylong.py", line 64, in inner
    return inner(lo, w2) + inner(hi, w - w2) * w2pow(w2)
           ^^^^^^^^^^^^^
  [Previous line repeated 6 more times]
  File "/usr/lib/python3.12/_pylong.py", line 45, in w2pow
    result = D2**w
             ~~^^~
  File "/usr/lib/python3.12/_pydecimal.py", line 2339, in __pow__
    ans = self._power_exact(other, context.prec + 1)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/_pydecimal.py", line 2187, in _power_exact
    if xc > 10**p:
            ~~^^~
KeyboardInterrupt
```

### CPython versions tested on:

3.11, 3.12

### Operating systems tested on:

Linux

<!-- gh-linked-prs -->
### Linked PRs
* gh-118483
* gh-118503
* gh-118584
* gh-118590
<!-- /gh-linked-prs -->
711c80bfca5dd17cb7c6ec26f0e44848b33aec04
5dd36732c850084ce262b7869ed90d73a281296a
python/cpython
python__cpython-118149
# Improve tests for shutil.make_archive()

While working on my idea for #80145, I noticed that not all combinations of None and non-None `root_dir` and `base_dir` are tested. The proposed PR expands the tests.

<!-- gh-linked-prs -->
### Linked PRs
* gh-118149
* gh-118151
<!-- /gh-linked-prs -->
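For reference, `root_dir` selects the directory the archiver works relative to, and `base_dir` the directory (relative to `root_dir`) that actually gets archived; a small sketch of one of the combinations in question:

```python
import os
import shutil
import tempfile
import zipfile

with tempfile.TemporaryDirectory() as tmp:
    # Layout: tmp/src/pkg/a.txt
    pkg = os.path.join(tmp, "src", "pkg")
    os.makedirs(pkg)
    with open(os.path.join(pkg, "a.txt"), "w") as f:
        f.write("hello")

    # root_dir and base_dir both non-None: archive only "pkg",
    # with member paths stored relative to root_dir.
    archive = shutil.make_archive(
        os.path.join(tmp, "out"), "zip",
        root_dir=os.path.join(tmp, "src"), base_dir="pkg",
    )
    with zipfile.ZipFile(archive) as zf:
        names = zf.namelist()

assert "pkg/a.txt" in names
```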
287d939ed4445089e8312ab44110cbb6b6306a5c
a6647d16abf4dd65997865e857371673238e60bf
python/cpython
python__cpython-118141
# ``test_concurrent_futures.test_init`` prints unnecessary information

# Bug report

### Bug description:

```console
./python.exe -m test -q test_concurrent_futures -m test_init
Using random seed: 2943892526
0:00:00 load avg: 2.95 Run 8 tests sequentially
.
----------------------------------------------------------------------
Ran 1 test in 0.228s

OK
.
----------------------------------------------------------------------
Ran 1 test in 0.234s

OK
test_concurrent_futures.test_wait ran no tests

== Tests result: SUCCESS ==

7 tests run no tests:
    test_concurrent_futures.test_as_completed
    test_concurrent_futures.test_deadlock
    test_concurrent_futures.test_future
    test_concurrent_futures.test_process_pool
    test_concurrent_futures.test_shutdown
    test_concurrent_futures.test_thread_pool
    test_concurrent_futures.test_wait

Total duration: 1.8 sec
Total tests: run=10 (filtered)
Total test files: run=8/8 (filtered) run_no_tests=7
Result: SUCCESS
```

Please note that the test suite was run in quiet mode (`-q`). For me it's not obvious which test is being run, so I consider this information unnecessary.

### CPython versions tested on:

CPython main branch

### Operating systems tested on:

macOS

<!-- gh-linked-prs -->
### Linked PRs
* gh-118141
<!-- /gh-linked-prs -->
d687d3fcfaa13b173005897634fc5ab515c8a660
de1f6868270d31f56c388ef416daacd35feb152d
python/cpython
python__cpython-118132
# Command-line interface for the `random` module

# Feature or enhancement

### Proposal:

Many stdlib libraries have a simple CLI:

* https://docs.python.org/3/library/cmdline.html

Some of my favourites:

```console
$ python3 -m http.server
Serving HTTP on :: port 8000 (http://[::]:8000/) ...

$ python3 -m webbrowser https://www.python.org

$ python3 -m uuid
5f73cb76-01d7-4390-8cda-17fe9672a29f

$ python3 -m calendar 2024
      January                   February                   March
Mo Tu We Th Fr Sa Su      Mo Tu We Th Fr Sa Su      Mo Tu We Th Fr Sa Su
 1  2  3  4  5  6  7                1  2  3  4                   1  2  3
[snip]
```

It would be useful to add a CLI to `random` to randomly select a choice (using [`random.choice`](https://docs.python.org/3/library/random.html#random.choice)):

```console
$ python3 -m random curry "fish n chips" tacos
fish n chips
```

We can also print a random number:

* if the input is an integer, print a random integer between 1 and the input (via [`random.randint`](https://docs.python.org/3/library/random.html#random.randint))
* if it's a float, print a random float between 0 and the input (via [`random.uniform`](https://docs.python.org/3/library/random.html#random.uniform))

For example:

```console
$ python3 -m random a b c
c
$ python3 -m random 6
4
$ python3 -m random 2.5
1.9597922929238814
```

Also with explicit arguments:

```console
$ python3 -m random --choice a b c
b
$ python3 -m random --integer 6
6
$ python3 -m random --float 2.5
1.1778540129562416
$ python3 -m random --float 6
1.4311142568750403
```

This isn't a Python-specific tool, like `pdb` or `pickletools`, but a generally useful tool like `http.server` and `uuid`. I can't find an existing cross-platform tool for this. Linux has `shuf` to return a random choice, but it's not natively available on macOS (it's part of a Homebrew package) or Windows (`uutils` can be installed using Scoop). There's also a `num-utils` package for Unix to print a random integer.

For me, just the random choice is the most important, but I can see that at least picking a random integer is also useful, like a dice throw.

Currently, `python -m random` generates some sample test output; if we still need it, I propose moving it to `python -m random --test`.

### Has this already been discussed elsewhere?

I have already discussed this feature proposal on Discourse

### Links to previous discussion of this feature:

* https://discuss.python.org/t/command-line-interface-for-the-random-module/51304
* https://mastodon.social/@hugovk/112292755013247928

<!-- gh-linked-prs -->
### Linked PRs
* gh-118132
<!-- /gh-linked-prs -->
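The positional-argument dispatch described above can be sketched as follows (a hypothetical helper for illustration, not the implementation that landed):

```python
import random

def pick(args, rng=random):
    """Dispatch on the argument shape: a lone integer -> randint(1, n),
    a lone float -> uniform(0, x), otherwise -> choice among the arguments."""
    if len(args) == 1:
        try:
            return rng.randint(1, int(args[0]))
        except ValueError:
            pass
        try:
            return rng.uniform(0, float(args[0]))
        except ValueError:
            pass
    return rng.choice(args)

assert pick(["curry", "fish n chips", "tacos"]) in {"curry", "fish n chips", "tacos"}
assert 1 <= pick(["6"]) <= 6
assert 0 <= pick(["2.5"]) <= 2.5
```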
3b32575ed6b0905f434f9395d26293c0ae928032
fed8d73fde779fca41026398376cb3038e9b2b5f
python/cpython
python__cpython-118122
# `test_doctest.test_look_in_unwrapped` does not test anything # Bug report Here's a test that ensures that `Wrapper` on top of it handles `__doc__` correctly: https://github.com/python/cpython/blob/d8f350309ded3130c43f0d2809dcb8ec13112320/Lib/test/test_doctest/test_doctest.py#L2539-L2554 The problem is that it does not check that tests are actually executed. If you remove this doctest from it, this will be the result: ``` » ./python.exe -m test test_doctest Using random seed: 2548604185 0:00:00 load avg: 1.71 Run 1 test sequentially 0:00:00 load avg: 1.71 [1/1] test_doctest == Tests result: SUCCESS == 1 test OK. Total duration: 955 ms Total tests: run=67 Total test files: run=1/1 Result: SUCCESS ``` So, it does not test anything. And it will continue to pass if `Wrapper` won't handle `__doc__` correctly at some point. I have a PR ready. <!-- gh-linked-prs --> ### Linked PRs * gh-118122 * gh-118129 <!-- /gh-linked-prs -->
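One way such a test could assert that doctests were actually collected — a sketch using `doctest.DocTestFinder` with a hypothetical stand-in class, not the fix that landed:

```python
import doctest


class Wrapper:
    """A hypothetical stand-in for the wrapped object in the real test.

    >>> 1 + 1
    2
    """


# DocTestFinder.find() returns the collected DocTest objects; asserting
# that at least one example was found guards against the silent
# "nothing was collected, so nothing ran" failure mode described above.
tests = doctest.DocTestFinder().find(Wrapper)
examples = sum(len(t.examples) for t in tests)
assert examples >= 1
```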
ccda73828473576c57d1bb31774f56542d6e8964
5fa5b7facbcd1f725e51daf31c321e02b7db3f02
python/cpython
python__cpython-118120
# Re-use `sep` in `posixpath.expanduser()` # Feature or enhancement ### Proposal: We can replace `root` with the already assigned `sep` from earlier: ```diff if isinstance(path, bytes): userhome = os.fsencode(userhome) - root = b'/' -else: - root = '/' -userhome = userhome.rstrip(root) -return (userhome + path[i:]) or root +userhome = userhome.rstrip(sep) +return (userhome + path[i:]) or sep ``` ### Has this already been discussed elsewhere? This is a minor feature, which does not need previous discussion elsewhere ### Links to previous discussion of this feature: - #117634 <!-- gh-linked-prs --> ### Linked PRs * gh-118120 <!-- /gh-linked-prs -->
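A quick behavioural check (not the stdlib code itself) that stripping with the already-computed `sep` matches the old `root` constant, including the `or sep` fallback when everything strips away:

```python
# Mirrors the shape of the expanduser() tail after the proposed diff;
# join_home is an illustrative name, not a stdlib function.
def join_home(userhome, tail, sep="/"):
    userhome = userhome.rstrip(sep)
    return (userhome + tail) or sep


assert join_home("/home/user/", "/docs") == "/home/user/docs"
assert join_home("/", "") == "/"  # falls back to the separator itself
```

The second assertion shows why the fallback exists: when the home directory is `/` and there is no tail, stripping leaves an empty string, and the separator must be returned instead.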
6f768b71bab837c6c4aac4d3ddd251e55025fe0b
1e428426c836b9a434810a6b99f70454d3a9611e
python/cpython
python__cpython-118108
# zipimport.zipimporter breaks for zips containing files with size > 0xFFFFFFFF # Bug report ### Bug description: The following test in the Pex project fails under CPython 3.13.0a6: https://github.com/pex-tool/pex/blob/27c2db2bf26039bef41323c964bc4e0317a7b4f5/tests/test_pex_builder.py#L520-L547 The failure includes stderr output from zipimporter like: ``` Failed checking if argv[0] is an import path entry Traceback (most recent call last): File "<frozen zipimport>", line 98, in __init__ File "<frozen zipimport>", line 520, in _read_directory NameError: name 'struct' is not defined. Did you forget to import 'struct'? File "/tmp/pytest-of-jsirois/pytest-14/test_check0/too-big.pyz", line 1 PK- SyntaxError: source code cannot contain null bytes ``` Inspection of #94146 indicates the code added to handle too large files is missing an import: https://github.com/python/cpython/pull/94146/files#r1572552939 The tests added in that PR do stress too many files for a 32 bit zip, but they do not stress too big a file for a 32 bit zip; so it makes sense that this import omission slipped past. ### CPython versions tested on: 3.13, CPython main branch ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-118108 <!-- /gh-linked-prs -->
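Reduced to its essence, the bug is the classic pattern of a name only needed on a rarely taken branch — a hypothetical sketch of the pattern, not the zipimport code:

```python
# `struct` is deliberately never imported here, mirroring the missing
# import in the >0xFFFFFFFF code path: the common path never touches
# the name, so tests that only exercise small files pass.
def entry_size(huge):
    if huge:
        return struct.unpack("<Q", b"\x00" * 8)[0]  # NameError at runtime
    return 0


assert entry_size(False) == 0  # the common path works fine
try:
    entry_size(True)           # the rare path fails only when reached
except NameError as exc:
    assert "struct" in str(exc)
```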
49258efada0cb0fc58ccffc018ff310b8f7f4570
698417f2f677b7b9373f8a2f202b6c18870bf3c2
python/cpython
python__cpython-118101
# Improve links in `ast.rst` # Bug report There are two problems: 1. https://github.com/python/cpython/blob/15b3555e4a47ec925c965778a415dc11f0f981fd/Doc/library/ast.rst#L2539 but https://leoeditor.com/appendices.html#leoast-py does not exist anymore; the correct link is https://leo-editor.github.io/leo-editor/appendices.html#leoast-py 2. I also propose to remove `parso`, because it has not been maintained for the last 4 years and no new syntax since 3.9(?) is supported: https://github.com/davidhalter/parso/graphs/contributors <!-- gh-linked-prs --> ### Linked PRs * gh-118101 * gh-118110 <!-- /gh-linked-prs -->
2aa11cca115add03f39cb6cd7299135ecf4d4d82
1e4a4c4897d0f45b1f594bc429284c82efe49188
python/cpython
python__cpython-118091
# Improve `class A[]: ...` syntax error message # Feature or enhancement Right now it will generate this default error: ```python >>> class A[]: ... File "<stdin>", line 1 class A[]: ... ^ SyntaxError: invalid syntax >>> def some[](arg: int) -> None: ... File "<stdin>", line 1 def some[](arg: int) -> None: ... ^ SyntaxError: expected '(' >>> type Alias[] = int File "<stdin>", line 1 type Alias[] = int ^ SyntaxError: invalid syntax ``` I propose to change it to: ```python >>> class A[]: File "<stdin>", line 1 class A[]: ^ SyntaxError: At least one type variable definition is expected >>> def some[](arg: int) -> None: ... File "<stdin>", line 1 def some[](arg: int) -> None: ... ^ SyntaxError: At least one type variable definition is expected >>> type Alias[] = int File "<stdin>", line 1 type Alias[] = int ^ SyntaxError: At least one type variable definition is expected ``` I have a PR ready. <!-- gh-linked-prs --> ### Linked PRs * gh-118091 <!-- /gh-linked-prs -->
b60d4c0d53b6aafbf4a6e560b4cb6f1d5c7240c8
04859228aa11756558807bcf99ccff78e4e8c56d
python/cpython
python__cpython-118083
# Improve `from x import` error message # Feature or enhancement While reading the new ruff release notes: https://astral.sh/blog/ruff-v0.4.0 I found that it has a nice new parser feature that we can adopt. Before: ```python >>> from x import File "<stdin>", line 1 from x import ^ SyntaxError: invalid syntax >>> from . import File "<stdin>", line 1 from . import ^ SyntaxError: invalid syntax ``` After: ```python >>> from x import File "<stdin>", line 1 from x import ^ SyntaxError: Expected one or more names after 'import' >>> from . import File "<stdin>", line 1 from . import ^ SyntaxError: Expected one or more names after 'import' >>> ``` I have a PR ready :) <!-- gh-linked-prs --> ### Linked PRs * gh-118083 <!-- /gh-linked-prs -->
de1f6868270d31f56c388ef416daacd35feb152d
eb927e9fc823de9539fcb82c9ea9d055462eb04a
python/cpython
python__cpython-118081
# ``test_import`` raises a ``DeprecationWarning`` # Bug report ### Bug description: ```python ./python.exe -m test -q test_import Using random seed: 828001900 0:00:00 load avg: 15.22 Run 1 test sequentially /Users/admin/Projects/cpython/Lib/unittest/case.py:707: DeprecationWarning: It is deprecated to return a value that is not None from a test case (<bound method _id of <test.test_import.SubinterpImportTests testMethod=test_disallowed_reimport>>) return self.run(*args, **kwds) /Users/admin/Projects/cpython/Lib/unittest/case.py:707: DeprecationWarning: It is deprecated to return a value that is not None from a test case (<bound method _id of <test.test_import.SubinterpImportTests testMethod=test_single_init_extension_compat>>) return self.run(*args, **kwds) /Users/admin/Projects/cpython/Lib/unittest/case.py:707: DeprecationWarning: It is deprecated to return a value that is not None from a test case (<bound method _id of <test.test_import.SubinterpImportTests testMethod=test_singlephase_check_with_setting_and_override>>) return self.run(*args, **kwds) == Tests result: SUCCESS == Total duration: 3.6 sec Total tests: run=97 skipped=3 Total test files: run=1/1 Result: SUCCESS ``` ### CPython versions tested on: CPython main branch ### Operating systems tested on: macOS <!-- gh-linked-prs --> ### Linked PRs * gh-118081 <!-- /gh-linked-prs -->
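The warning quoted above comes from `unittest` itself: in recent Python versions, a test method that returns a value other than `None` triggers a `DeprecationWarning`. A reduced reproduction, independent of `test_import` (class and method names are illustrative):

```python
import io
import unittest
import warnings


class Example(unittest.TestCase):
    def test_returns_value(self):
        return self  # returning non-None is what unittest warns about


# Running this on a recent Python emits a DeprecationWarning like the
# ones quoted above; the test itself still passes.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(Example)
    result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)

assert result.wasSuccessful()
```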
8d4a244f1516dcde23becc2a273d30c202237598
1e3e7ce11e3b0fc76e981db85d27019d6d210bbc
python/cpython
python__cpython-118117
# global-buffer-overflow in test_opt.py # Crash report ### What happened? Hello, when building CPython with AddressSanitizer, test_opt.py crashed with a global-buffer-overflow. I will add the build flags and the reduced code that causes the crash. https://github.com/python/cpython/blob/main/Lib/test/test_capi/test_opt.py ```sh ./configure CFLAGS="-fsanitize=address -g" LDFLAGS="-fsanitize=address" CXXFLAGS="-fsanitize=address -g" make make test ``` After this you can reproduce it just by running the following script reduced from test_opt.py ```python import contextlib import textwrap import unittest from test.support import import_helper _testinternalcapi = import_helper.import_module("_testinternalcapi") @contextlib.contextmanager def temporary_optimizer(opt): _testinternalcapi.set_optimizer(opt) class TestOptimizerAPI(unittest.TestCase): def test_long_loop(self): ns = {} exec(textwrap.dedent(""), ns) opt = _testinternalcapi.new_counter_optimizer() with temporary_optimizer(opt): return if __name__ == "__main__": unittest.main() ``` Stack trace will be: ```c ==24730==ERROR: AddressSanitizer: global-buffer-overflow on address 0x0001056cb7b8 at pc 0x000105054760 bp 0x00016b1af940 sp 0x00016b1af938 READ of size 8 at 0x0001056cb7b8 thread T0 #0 0x10505475c in visit_decref gc.c:531 #1 0x1050aebf4 in executor_traverse optimizer.c:392 #2 0x105054358 in deduce_unreachable gc.c:1162 #3 0x105052690 in gc_collect_region gc.c:1509 #4 0x10504fa08 in _PyGC_Collect gc.c:1815 #5 0x105131e20 in gc_collect gcmodule.c.h:140 #6 0x104df22f8 in cfunction_vectorcall_FASTCALL_KEYWORDS methodobject.c:441 #7 0x104d2c244 in PyObject_Vectorcall call.c:327 #8 0x104fd576c in _PyEval_EvalFrameDefault generated_cases.c.h:813 #9 0x104d327c4 in method_vectorcall classobject.c:92 #10 0x104d2c030 in _PyVectorcall_Call call.c:273 #11 0x104fd4c04 in _PyEval_EvalFrameDefault generated_cases.c.h:1267 #12 0x104d2abf8 in _PyObject_VectorcallDictTstate call.c:135 #13 0x104d2d0dc in _PyObject_Call_Prepend call.c:504 #14 
0x104e6f70c in slot_tp_call typeobject.c:9225 #15 0x104d2afcc in _PyObject_MakeTpCall call.c:242 #16 0x104fd576c in _PyEval_EvalFrameDefault generated_cases.c.h:813 #17 0x104d327c4 in method_vectorcall classobject.c:92 #18 0x104d2c030 in _PyVectorcall_Call call.c:273 #19 0x104fd4c04 in _PyEval_EvalFrameDefault generated_cases.c.h:1267 #20 0x104d2abf8 in _PyObject_VectorcallDictTstate call.c:135 #21 0x104d2d0dc in _PyObject_Call_Prepend call.c:504 #22 0x104e6f70c in slot_tp_call typeobject.c:9225 #23 0x104d2afcc in _PyObject_MakeTpCall call.c:242 #24 0x104fd576c in _PyEval_EvalFrameDefault generated_cases.c.h:813 #25 0x104d327c4 in method_vectorcall classobject.c:92 #26 0x104d2c030 in _PyVectorcall_Call call.c:273 #27 0x104fd4c04 in _PyEval_EvalFrameDefault generated_cases.c.h:1267 #28 0x104d2abf8 in _PyObject_VectorcallDictTstate call.c:135 #29 0x104d2d0dc in _PyObject_Call_Prepend call.c:504 #30 0x104e6f70c in slot_tp_call typeobject.c:9225 #31 0x104d2afcc in _PyObject_MakeTpCall call.c:242 #32 0x104fd576c in _PyEval_EvalFrameDefault generated_cases.c.h:813 #33 0x104d2abf8 in _PyObject_VectorcallDictTstate call.c:135 #34 0x104d2d0dc in _PyObject_Call_Prepend call.c:504 #35 0x104e724e8 in slot_tp_init typeobject.c:9469 #36 0x104e633e8 in type_call typeobject.c:1854 #37 0x104d2afcc in _PyObject_MakeTpCall call.c:242 #38 0x104fd576c in _PyEval_EvalFrameDefault generated_cases.c.h:813 #39 0x104fb425c in PyEval_EvalCode ceval.c:601 #40 0x1050ddcb8 in run_mod pythonrun.c:1376 #41 0x1050d98e8 in _PyRun_SimpleFileObject pythonrun.c:461 #42 0x1050d8f7c in _PyRun_AnyFileObject pythonrun.c:77 #43 0x10512f140 in Py_RunMain main.c:707 #44 0x10512ff80 in pymain_main main.c:737 #45 0x1051304a0 in Py_BytesMain main.c:761 #46 0x18f5a60dc (<unknown module>) 0x0001056cb7b8 is located 8 bytes before global variable 'COLD_EXITS' defined in 'Python/optimizer.c' (0x1056cb7c0) of size 27200 0x0001056cb7b8 is located 23 bytes after global variable 'cold_exits_initialized' defined in 
'Python/optimizer.c' (0x1056cb7a0) of size 1 SUMMARY: AddressSanitizer: global-buffer-overflow gc.c:531 in visit_decref Shadow bytes around the buggy address: 0x0001056cb500: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0001056cb580: 00 00 00 00 00 00 00 00 00 00 00 00 f9 f9 f9 f9 0x0001056cb600: f9 f9 f9 f9 f9 f9 f9 f9 01 f9 f9 f9 00 00 00 00 0x0001056cb680: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0001056cb700: 00 00 00 00 00 00 00 00 00 00 00 02 f9 f9 f9 f9 =>0x0001056cb780: 00 f9 f9 f9 01 f9 f9[f9]00 00 00 00 00 00 00 00 0x0001056cb800: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0001056cb880: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0001056cb900: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0001056cb980: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0001056cba00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb ==24730==ABORTING zsh: abort``` ### CPython versions tested on: 3.12 ### Operating systems tested on: macOS ### Output from running 'python -VV' on the command line: Python 3.12.3 (main, Apr 9 2024, 08:09:14) [Clang 15.0.0 (clang-1500.3.9.4)] <!-- gh-linked-prs --> ### Linked PRs * gh-118117 <!-- /gh-linked-prs -->
7e87d30f1f30d39c3005e03195f3d7648b38a1e2
258408239a4fe8a14919d81b73a16e2cfa374050
python/cpython
python__cpython-118278
# The "finder" Glossary Entry May Need Updating # Documentation I'm referring to https://docs.python.org/3.13/glossary.html#term-finder. (There are probably other import-related glossary entries that could use a similar update.) <details> <summary>(expand for extra context)</summary> ---- The glossary entry looks like so: ``` finder An object that tries to find the [loader] for a module that is being imported. Since Python 3.3, there are two types of finder: [meta path finders] for use with [sys.meta_path], and [path entry finders] for use with [sys.path_hooks]. See [PEP 302], [PEP 420] and [PEP 451] for much more detail. ``` The glossary entry was added by @brettcannon in early 2009 (51d4aabf09fa0107a7263a45ad85ab3c0398390b), around the time he landed the initial importlib docs (afccd63ac9541630953cd4e59a421696d3869311), for Python 3.1. Brett updated the entry, including the "Since 3.3" and the PEP references, in late 2015 (ccddbb186bcaec77f52a8c37d8b3f56de4b871dd), before the 3.5 release. [PEP 302](https://peps.python.org/pep-0302/) landed in late 2002 (52e14d640be3a7fa2c17f5a2a6bc9626d622aa40), for Python 2.3 (not 3.3). The "imports" page in the language reference was added by @warsaw in 2012 (dadebab42c87e29342de67501a6b280e547fb633), before the 3.3 release. import system: https://docs.python.org/3.13/reference/import.html importlib: https://docs.python.org/3.13/library/importlib.html ---- </details> I happened to see the entry today and noticed two things: * it says "Since 3.3", but I'm pretty sure it should say "Since 2.3", which PEP 302 targeted (perhaps I missed something?) 
* along with (or even instead of) recommending the 3 PEPs, it would probably be even more helpful to refer to the language reference [^1] and to the importlib docs page [^2] [^1]: https://docs.python.org/3.13/reference/import.html [^2]: https://docs.python.org/3.13/library/importlib.html Regarding the second point, the language reference and the importlib docs both would be better resources than the PEPs for explaining the various facets of the import system, including finders. PEP 302 even has a prominent warning about that. FWIW, I had expected that the glossary entry predated that language reference page (2012) and importlib (2009), and that no one thought to update the glossary entry at the time. Having checked, that does correspond with the addition of the entry in early 2009, but not with the update in late 2015 that introduced the two bits of text I identified above. Since he committed both changes, perhaps @brettcannon has some insight into the "why" of the two parts of the glossary entry I identified above? <!-- gh-linked-prs --> ### Linked PRs * gh-118278 * gh-119773 * gh-119774 <!-- /gh-linked-prs -->
db009348b4b7a4b0aec39472ea074c1b5feeba9b
48f21b3631eb20871fe234e9714b19aa76cf3a49
python/cpython
python__cpython-118137
# Problem with config cache of WASI [This workflow](https://github.com/python/cpython/actions/runs/8723744945/job/23932686454?pr=117855) on #117855 keeps failing. It asks to delete the cache, which I obviously can't do. How should this be fixed? Prune the cache or update the workflow? ```none Run python3 Tools/wasm/wasi.py configure-build-python -- --config-cache --with-pydebug python3 Tools/wasm/wasi.py configure-build-python -- --config-cache --with-pydebug shell: /usr/bin/bash -e {0} env: WASMTIME_VERSION: 18.0.3 WASI_SDK_VERSION: 21 WASI_SDK_PATH: /opt/wasi-sdk CROSS_BUILD_PYTHON: cross-build/build CROSS_BUILD_WASI: cross-build/wasm32-wasi PATH: /usr/lib/ccache:/opt/hostedtoolcache/wasmtime/18.0.3/x64:/snap/bin:/home/runner/.local/bin:/opt/pipx_bin:/home/runner/.cargo/bin:/home/runner/.config/composer/vendor/bin:/usr/local/.ghcup/bin:/home/runner/.dotnet/tools:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin pythonLocation: /opt/hostedtoolcache/Python/3.12.3/x64 PKG_CONFIG_PATH: /opt/hostedtoolcache/Python/3.12.3/x64/lib/pkgconfig Python_ROOT_DIR: /opt/hostedtoolcache/Python/3.12.3/x64 Python2_ROOT_DIR: /opt/hostedtoolcache/Python/3.12.3/x64 Python3_ROOT_DIR: /opt/hostedtoolcache/Python/3.12.3/x64 LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.12.3/x64/lib tput: No value for $TERM and no -T specified configure: loading cache config.cache configure: error: `PKG_CONFIG_PATH' has changed since the previous run: configure: former value: `/opt/hostedtoolcache/Python/3.12.2/x64/lib/pkgconfig' configure: current value: 
`/opt/hostedtoolcache/Python/3.12.3/x64/lib/pkgconfig' configure: error: in `/home/runner/work/cpython/cpython/cross-build/build': configure: error: changes in the environment can compromise the build configure: error: run `make distclean' and/or `rm config.cache' and start over Traceback (most recent call last): File "/home/runner/work/cpython/cpython/Tools/wasm/wasi.py", line 347, in <module> ⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯ main() File "/home/runner/work/cpython/cpython/Tools/wasm/wasi.py", line 343, in main dispatch[context.subcommand](context) File "/home/runner/work/cpython/cpython/Tools/wasm/wasi.py", line 79, in wrapper return func(context, working_dir) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/runner/work/cpython/cpython/Tools/wasm/wasi.py", line 137, in configure_build_python call(configure, quiet=context.quiet) File "/home/runner/work/cpython/cpython/Tools/wasm/wasi.py", line 103, in call subprocess.check_call(command, **kwargs, stdout=stdout, stderr=stderr) File "/opt/hostedtoolcache/Python/3.12.3/x64/lib/python3.12/subprocess.py", line 413, in check_call 📁 /home/runner/work/cpython/cpython/cross-build/build 📝 Touching /home/runner/work/cpython/cpython/Modules/Setup.local ... ❯ ../../configure --config-cache --with-pydebug raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['../../configure', '--config-cache', '--with-pydebug']' returned non-zero exit status 1. Error: Process completed with exit code 1 ``` <!-- gh-linked-prs --> ### Linked PRs * gh-118137 <!-- /gh-linked-prs -->
456c29cf85847c67dfc0fa36d6fe6168569b46fe
8e86579caef59fad0c54ac698d589f23a7951c55
python/cpython
python__cpython-117933
# Call stats are incorrect for tier 2 and maybe for tier 1 as well # Bug report ### Bug description: The call stats for tier 2 differ significantly from tier 1, but it is not clear that the tier 1 call stats are correct either. The main purpose of the call stats is to track the fraction of Python frames that are created with and without calls to `PyEval_EvalDefault`. However, the stats don't make that clear and may be incorrect. Currently the table looks like this:

| | Count | Ratio |
| -- | -- | -- |
| Calls to PyEval_EvalDefault | 2,309,411,110 | 32.0% |
| Calls to Python functions inlined | 4,908,275,087 | 68.0% |
| Calls via PyEval_EvalFrame (total) | 2,309,411,110 | 32.0% |
| Calls via PyEval_EvalFrame (vector) | 1,447,866,962 | 20.1% |
| Calls via PyEval_EvalFrame (generator) | 861,544,148 | 11.9% |
| Calls via PyEval_EvalFrame (legacy) | 4,418,464 | 0.1% |
| Calls via PyEval_EvalFrame (function vectorcall) | 1,443,417,872 | 20.0% |
| Calls via PyEval_EvalFrame (build class) | 30,626 | 0.0% |
| Calls via PyEval_EvalFrame (slot) | 475,837,706 | 6.6% |
| Calls via PyEval_EvalFrame (function ex) | 38,365,096 | 0.5% |
| Calls via PyEval_EvalFrame (api) | 256,353,600 | 3.6% |
| Calls via PyEval_EvalFrame (method) | 213,159,135 | 3.0% |
| Frame objects created | 88,466,853 | 1.2% |
| Frames pushed | 4,953,781,868 | 68.6% |

Note that the numbers don't add up. The number of "frames pushed" is actually the total number of frames created. As it is function frames pushed, generator frames are not counted. Frame objects created should be a fraction of "frames created". All other numbers should be a fraction of "frames pushed" ### CPython versions tested on: CPython main branch ### Operating systems tested on: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-117933 <!-- /gh-linked-prs -->
40f4d641a93b1cba89be4bc7b26cdb481e0450d5
81a926bd20a8c66646e51b66ef1cfb309b73ebe7
python/cpython
python__cpython-118099
# `dataclasses`: 3.12.3 regression with `weakref_slot` # Bug report ### Bug description: The `__weakref__` slot is not set for classes that have a `Generic[T]` base. ```python from typing import Generic, TypeVar from dataclasses import dataclass T = TypeVar("T") @dataclass(slots=True, weakref_slot=True) class Token(Generic[T]): ctx: T print(f"{Token.__slots__=!r}") print(f"{hasattr(Token, '__weakref__')=}") ``` Output on 3.12.2: ``` Token.__slots__=('ctx', '__weakref__') hasattr(Token, '__weakref__')=True ``` On 3.12.3: ``` Token.__slots__=('ctx',) hasattr(Token, '__weakref__')=False ``` ### CPython versions tested on: 3.12 ### Operating systems tested on: Linux, macOS, Windows <!-- gh-linked-prs --> ### Linked PRs * gh-118099 * gh-118821 * gh-118822 <!-- /gh-linked-prs -->
fa9b9cb11379806843ae03b1e4ad4ccd95a63c02
e8cbcf49555c694975a6af56b5cb0af7817e889e
python/cpython
python__cpython-118154
# Don't needlessly repeat Sphinx directives # Documentation If you repeat Sphinx directives they are separated by an empty line. For example: https://github.com/python/cpython/blob/f70395786f6da2412d7406606feb64347fc71262/Doc/library/typing.rst?plain=1#L1974-L1975 Renders with a newline: ![image](https://github.com/python/cpython/assets/65588599/b8cababc-af5e-4f26-bd45-5c3dffa3b8b4) Could be changed to: ```rst .. data:: ParamSpecArgs ParamSpecKwargs ``` Which renders without a newline: ![image](https://github.com/python/cpython/assets/65588599/bb38f6ea-a6f4-403e-b404-4bcb18623fa9) Regexes: - ` \.\. attribute::.*\n \.\. attribute::`: 35 matches - ` \.\. data::.*\n \.\. data::`: 2 matches - ` \.\. decorator::.*\n \.\. decorator::`: 1 match - ` \.\. index::.*\n \.\. index::`: 3 matches - ` \.\. method::.*\n \.\. method::`: 3 matches - ~` \.\. versionadded::.*\n \.\. versionadded::`: 1 match~ - ~` \.\. versionchanged::.*\n \.\. versionchanged::`: 1 match~ - `\.\. attribute::.*\n\.\. attribute::`: 4 matches - `\.\. data::.*\n\.\. data::`: 6 matches - `\.\. describe::.*\n\.\. describe::`: 1 match - `\.\. exception::.*\n\.\. exception::`: 1 match - `\.\. function::.*\n\.\. function::`: 11 matches - `\.\. index::.*\n\.\. index::`: 2 matches - `\.\. method::.*\n\.\. method::`: 2 matches - `\.\. module::.*\n\.\. module::`: 1 match - `\.\. moduleauthor::.*\n\.\. moduleauthor::`: 13 matches - `\.\. opcode::.*\n\.\. opcode::`: 1 match - `\.\. option::.*\n\.\. option::`: 17 matches - `\.\. sectionauthor::.*\n\.\. sectionauthor::`: 26 matches - ~`\.\. versionadded::.*\n\.\. versionadded::`: 4 matches~ <!-- gh-linked-prs --> ### Linked PRs * gh-118154 * gh-118155 <!-- /gh-linked-prs -->
78ba4cb758ba1e40d27af6bc2fa15ed3e33a29d2
287d939ed4445089e8312ab44110cbb6b6306a5c
python/cpython
python__cpython-118503
# _pydecimal.Decimal.__pow__ tries to compute 10**1000000000000000000 and thus doesn't terminate # Bug report ### Bug description: `decimal.Decimal(2) ** 117` does not terminate when using the pure Python implementation `_pydecimal.py` of the `decimal` module, and setting the precision to `MAX_PREC`: ```python import sys if len(sys.argv) > 1 and sys.argv[1] == "pydecimal": # block _decimal sys.modules['_decimal'] = None import decimal with decimal.localcontext() as ctx: ctx.prec = decimal.MAX_PREC ctx.Emax = decimal.MAX_EMAX ctx.Emin = decimal.MIN_EMIN ctx.traps[decimal.Inexact] = 1 D2 = decimal.Decimal(2) res = D2 ** 117 print(res) ``` this behaves as follows: ``` ./python decbug.py # instant 166153499473114484112975882535043072 $ ./python decbug.py pydecimal # hangs ^CTraceback (most recent call last): File "/home/cfbolz/projects/cpython/decbug.py", line 14, in <module> res = D2 ** 117 ~~~^^~~~~ File "/home/cfbolz/projects/cpython/Lib/_pydecimal.py", line 2436, in __pow__ ans = self._power_exact(other, context.prec + 1) ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/cfbolz/projects/cpython/Lib/_pydecimal.py", line 2284, in _power_exact if xc > 10**p: ~~^^~ KeyboardInterrupt ``` The bug happens because `p` is the precision + 1 in the last traceback entry. `MAX_PREC = 999999999999999999` on 64-bit systems. Therefore the [code in `_power_exact`](https://github.com/python/cpython/blob/6078f2033ea15a16cf52fe8d644a95a3be72d2e3/Lib/_pydecimal.py#L2187) tries to compute `10**1000000000000000000`, which obviously won't work. ### CPython versions tested on: 3.9, 3.10, 3.11, 3.12, 3.13, CPython main branch ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-118503 <!-- /gh-linked-prs -->
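The core problem is that `xc > 10**p` materializes the power even when `p` is astronomically large. One way to make such a comparison lazy — a sketch only, not the fix that landed in gh-118503 — is to compare digit counts first:

```python
def gt_pow10(xc, p):
    """Return xc > 10**p without building 10**p when p is huge."""
    digits = len(str(abs(xc)))
    if digits <= p:          # xc < 10**p, since 10**p has p+1 digits
        return False
    if digits >= p + 2:      # xc >= 10**(p+1) > 10**p
        return True
    # digits == p + 1: 10**p also has p+1 digits, and p is bounded by
    # the size of xc here, so it is cheap to compute directly.
    return xc > 10**p


assert gt_pow10(1001, 3) is True
assert gt_pow10(1000, 3) is False
assert gt_pow10(7, 10**18) is False  # returns instantly
```

Since `xc` already fits in memory, its digit count is always modest, so the expensive power is only ever built when `p` is of the same modest size.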
999f0c512281995fb61a0d9eda075fd846e8c505
08d169f14a715ceaae3d563ced2ff1633d009359
python/cpython
python__cpython-118025
# ``test_compile`` leaks references # Bug report ### Bug description: ```python ./python.exe -m test -R 3:3 test_compile Using random seed: 3846844224 0:00:00 load avg: 4.57 Run 1 test sequentially 0:00:00 load avg: 4.57 [1/1] test_compile beginning 6 repetitions. Showing number of leaks (. for 0 or less, X for 10 or more) 123:456 XX4 444 test_compile leaked [2, 2, 2] references, sum=6 test_compile leaked [4, 4, 4] memory blocks, sum=12 test_compile failed (reference leak) == Tests result: FAILURE == 1 test failed: test_compile Total duration: 12.3 sec Total tests: run=164 skipped=1 Total test files: run=1/1 failed=1 Result: FAILURE ``` ### CPython versions tested on: CPython main branch ### Operating systems tested on: macOS <!-- gh-linked-prs --> ### Linked PRs * gh-118025 <!-- /gh-linked-prs -->
cd7cf155886cea880c1e80d4313f35f8af210b5e
b848b944bb4730ab4dcaeb15b0b1713c3f68ec7d
python/cpython
python__cpython-118202
# [3.12] getattr_static can result in reference leaks # Bug report ### Bug description: I discovered this bug while working on https://github.com/pytorch/pytorch/issues/124302. It looks like calling `getattr_static` is causing an object to be reference leaked: ```python import gc from inspect import getattr_static import weakref class C1: pass def main(): obj = C1() weakref.finalize(obj, lambda: print("obj deleted!")) class C2: def __init__(self): self.obj = obj c2 = C2() getattr_static(c2, "bad", None) print("done main!") main() gc.collect() print("done!") ``` Output: ``` done main! done! obj deleted! ``` If I comment out the `getattr_static` line, the output is as expected: ``` done main! obj deleted! done! ``` It looks like this PR https://github.com/python/cpython/pull/104267 indirectly cached calls to `getattr_static`, which is resulting in reference leaks. Perhaps this cache needs to use weak references? Original PyTorch code for reference (`torch.compile` calls `getattr_static` on `mod` at some point): ```python import torch import gc import sys import weakref from inspect import getattr_static def dbg(o): refs = gc.get_referrers(o) print(len(refs), sys.getrefcount(o)) return refs gm_list = [] def backend(gm, _): gm_list.append(weakref.ref(gm, lambda _: print("gm deleted"))) # breakpoint() return gm def main(): param = torch.nn.Parameter(torch.randn(5, 5)) class Mod(torch.nn.Module): def __init__(self): super().__init__() self.param = param def forward(self, x): return self.param * x mod = Mod() ref = weakref.ref(param, lambda _: print("obj deleted")) opt_mod = torch.compile(mod, backend=backend) print(opt_mod(torch.randn(5, 5))) return ref ref = main() gc.collect() print("done!") print(ref) ``` ### CPython versions tested on: 3.12 ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-118202 * gh-118232 <!-- /gh-linked-prs -->
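The "perhaps this cache needs to use weak references?" suggestion can be illustrated in isolation — a generic sketch of strong vs. weak caching, not the actual cache added in gh-104267:

```python
import gc
import weakref


class C:
    pass


strong_cache = {}
weak_cache = weakref.WeakValueDictionary()

obj = C()
strong_cache["k"] = obj
weak_cache["k"] = obj
ref = weakref.ref(obj)
del obj
gc.collect()

assert ref() is not None      # the strong cache keeps the object alive
del strong_cache["k"]
gc.collect()
assert ref() is None          # the weak cache does not
assert "k" not in weak_cache  # its entry vanished automatically
```

This is the leak pattern in miniature: any long-lived strong-reference cache keyed on user objects pins those objects (and everything they reference) until the cache entry is explicitly evicted.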
8227883d1f1bbb6560e5f175d7ee49f013c094bd
83235f7791fbe6ee2618192f2341de9cd22d0511
python/cpython
python__cpython-118001
# sqlite3 seems to consider `?1` a named placeholder # Bug report ### Bug description: Starting in python 3.12, the following snippet generates a deprecation warning: ```python import sqlite3 db = sqlite3.connect(':memory:') db.execute('CREATE TABLE a (b, c)') db.execute('INSERT INTO a (b, c) VALUES (?2, ?1)', [3, 4]) # This line isn't necessary to reproduce the warning, it's just to show that # the insert did in fact put "4" in column "b" and "3" in column "c". print(db.execute('SELECT * FROM a').fetchall()) ``` Here's the warning for the first placeholder (there's another identical one for the second): > DeprecationWarning: Binding 1 ('?1') is a named parameter, but you supplied a sequence which requires nameless (qmark) placeholders. Starting with Python 3.14 an sqlite3.ProgrammingError will be raised. I'll admit to not having a great understanding of how databases are supposed to work in python, but I don't think this warning should be issued. The [sqlite docs](https://www.sqlite.org/lang_expr.html#varparam) specify that the `?<number>` syntax is used to specify a parameter index, not a parameter name. So this kind of placeholder is meant to be used with sequence-style parameters like `[3, 4]`. I think the above warning should be issued only when the user tries to use `:<word>` placeholders with sequence-style parameters. The above example is very simplified, so I think it might also be helpful to show the real-life query that triggered this warning for me. The goal is to insert key/value pairs from a dictionary, updating any keys that are already in the table. The query requires referring to the value in two places. 
`?<number>` placeholders seem like the right syntax to use here, because they allow the `metadata.items()` to be used directly: ```python def upsert_metadata(db: sqlite3.Connection, metadata: dict[str, Any]): db.executemany( '''\ INSERT INTO metadata (key, value) VALUES (?1, ?2) ON CONFLICT (key) DO UPDATE SET value=?2 ''', metadata.items(), ) ``` ### CPython versions tested on: 3.11, 3.12 ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-118001 * gh-118142 <!-- /gh-linked-prs -->
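Until the warning is resolved, one workaround for the upsert is `:name` placeholders with mapping parameters, which sqlite3 accepts without complaint — a sketch reusing the table from the example above (the dict-building step replaces the direct use of `metadata.items()`):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE metadata (key PRIMARY KEY, value)")


def upsert_metadata(db, metadata):
    # Named placeholders let :value be referenced twice, like ?2 did.
    db.executemany(
        "INSERT INTO metadata (key, value) VALUES (:key, :value) "
        "ON CONFLICT (key) DO UPDATE SET value=:value",
        [{"key": k, "value": v} for k, v in metadata.items()],
    )


upsert_metadata(db, {"a": 1, "b": 2})
upsert_metadata(db, {"b": 3})  # exercises the update path
rows = dict(db.execute("SELECT key, value FROM metadata"))
assert rows == {"a": 1, "b": 3}
```

The trade-off is the extra list comprehension; the `?<number>` form avoided it, which is why the deprecation feels surprising for this use case.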
550483b7e6c54b2a25d4db0c4ca41bd9c1132f93
8b541c017ea92040add608b3e0ef8dc85e9e6060
python/cpython
python__cpython-117993
# Restore removed Py_SetProgramName() and PySys_SetArgv() functions My Fedora team identified that the removal of "Python initialization" functions impacts at least 17 projects. I propose to restore them to give more time to impacted projects to be updated to the new [PEP 587 PyConfig API](https://peps.python.org/pep-0587/) added to Python 3.8. These functions were marked as deprecated in Python 3.11. `Py_SetProgramName()` and/or `Py_SetPythonHome()`, 7 impacted projects: * dionaea * fontforge * glade * libreoffice * neuron * python-pyqt6 * python-pyside6 `PySys_SetArgv()` and/or `PySys_SetArgvEx()`, 12 impacted projects (glade and neuron are in both lists): * collectd * glade * gnumeric * kernel * nautilus-python * nemo-extensions * neuron * obs-studio * pyotherside * remmina * scribus * sourcextractor++ In Python 3.13, I also proposed [PEP 741 – Python Configuration C API](https://peps.python.org/pep-0741/) but it's still a draft. <!-- gh-linked-prs --> ### Linked PRs * gh-117993 <!-- /gh-linked-prs -->
340a02b590681d4753eef0ff63037d0ecb512271
0a0756c5edd8c32783a39ef00c47fe4a54deecbc
python/cpython
python__cpython-120233
# GH-114781 potentially breaks gevent: threading becomes pre-imported at startup # Bug report ### Bug description: Hello up there. I've discovered that starting from CPython >= 3.11.10 gevent-based applications become potentially broken because, in a venv with gevent installed, `threading` module becomes pre-imported by python stdlib itself. As gevent injects its own implementation of threading primitives, it helps to know that there are no instances of C-implemented locks at the time of the monkey-patching: if there are no such instances(*) the program is known to have only instances of gevent-provided locks and it will not run into deadlock scenarios described at e.g. https://github.com/gevent/gevent/issues/1865 where the code tries to take a lock, that lock turns to be standard thread lock, but there are only greenlets running and there is no other real thread to release that lock. So not having `threading` pre-imported at startup is very desirable property in the context of gevent. For this reason gpython from pygolang actually verifies that invariant and it is through which I've discovered the issue when testing pygolang on py3.11.10: ```console (py311-venv) kirr@deca:~/src/tools/go/pygolang-master$ gpython Traceback (most recent call last): File "/home/kirr/src/tools/go/py311-venv/bin/gpython", line 8, in <module> sys.exit(main()) ^^^^^^ File "/home/kirr/src/tools/go/pygolang-master/gpython/__init__.py", line 368, in main raise RuntimeError('gpython: internal error: the following modules are pre-imported, but must be not:' RuntimeError: gpython: internal error: the following modules are pre-imported, but must be not: ['threading'] sys.modules: ['__editable___pygolang_0_1_finder', '__future__', '__main__', '_abc', '_codecs', '_collections', '_collections_abc', '_distutils_hack', '_frozen_importlib', '_frozen_importlib_external', '_functools', '_imp', '_io', '_operator', '_signal', '_sitebuiltins', '_sre', '_stat', '_thread', '_warnings', '_weakref', 
'_weakrefset', 'abc', 'builtins', 'codecs', 'collections', 'contextlib', 'copyreg', 'encodings', 'encodings.aliases', 'encodings.utf_8', 'enum', 'errno', 'fnmatch', 'functools', 'genericpath', 'gpython', 'importlib', 'importlib._abc', 'importlib._bootstrap', 'importlib._bootstrap_external', 'importlib.machinery', 'importlib.util', 'io', 'ipaddress', 'itertools', 'keyword', 'marshal', 'ntpath', 'operator', 'os', 'os.path', 'pathlib', 'posix', 'posixpath', 're', 're._casefix', 're._compiler', 're._constants', 're._parser', 'reprlib', 'site', 'stat', 'sys', 'threading', 'time', 'types', 'urllib', 'urllib.parse', 'warnings', 'zipimport', 'zope'] ``` The problem is there because `importlib.util` started to import threading after https://github.com/python/cpython/commit/46f821d62b5a, and because setuptools emits `importlib.util` usage in installed `*-nspkg.pth` and in generated finders for packages installed in editable mode: https://github.com/pypa/setuptools/blob/92b45e9817ae829a5ca5a5962313a56b943cad91/setuptools/namespaces.py#L46-L61 https://github.com/pypa/setuptools/blob/92b45e98/setuptools/command/editable_wheel.py#L786-L790 So for example the following breaks because e.g. zope.event is installed via nspkg way: ```console kirr@deca:~/tmp/trashme/X$ python3.12 -m venv 1.venv kirr@deca:~/tmp/trashme/X$ . 
1.venv/bin/activate (1.venv) kirr@deca:~/tmp/trashme/X$ pip list Package Version ------- ------- pip 24.0 (1.venv) kirr@deca:~/tmp/trashme/X$ pip install zope.event Collecting zope.event Using cached zope.event-5.0-py3-none-any.whl.metadata (4.4 kB) Collecting setuptools (from zope.event) Using cached setuptools-69.5.1-py3-none-any.whl.metadata (6.2 kB) Using cached zope.event-5.0-py3-none-any.whl (6.8 kB) Using cached setuptools-69.5.1-py3-none-any.whl (894 kB) Installing collected packages: setuptools, zope.event Successfully installed setuptools-69.5.1 zope.event-5.0 (1.venv) kirr@deca:~/tmp/trashme/X$ python -c 'import sys; assert "threading" not in sys.modules, sys.modules' Traceback (most recent call last): File "<string>", line 1, in <module> AssertionError: {'threading': <module 'threading' ...>, 'importlib.util': <module 'importlib.util' (frozen)>, ...} ``` So would you please consider applying the following suggested patch to fix this problem: ```diff --- a/Lib/importlib/util.py +++ b/Lib/importlib/util.py @@ -15,7 +15,7 @@ import _imp import functools import sys -import threading +#import threading delayed to avoid pre-importing threading early at startup import types import warnings @@ -316,7 +316,7 @@ def exec_module(self, module): loader_state = {} loader_state['__dict__'] = module.__dict__.copy() loader_state['__class__'] = module.__class__ - loader_state['lock'] = threading.RLock() + loader_state['lock'] = __import__('threading').RLock() loader_state['is_loading'] = False module.__spec__.loader_state = loader_state module.__class__ = _LazyModule ``` ? Thanks beforehand, Kirill /cc @effigies, @vstinner, @ericsnowcurrently, @doko42, @brettcannon /cc @jamadden, @arcivanov, @maciejp-ro /cc @eduardo-elizondo, @wilson3q, @vsajip (*) or the list of C-implemented lock instances is limited and well defined - for example gevent reinstantiates `thrading._active_limbo_lock` on the monkey-patching. 
### CPython versions tested on: 3.11, 3.12, CPython main branch ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-120233 * gh-121349 * gh-121350 <!-- /gh-linked-prs -->
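The suggested patch boils down to the deferred-import pattern; a minimal standalone sketch of the idea (independent of importlib internals, names are illustrative):

```python
def make_lock():
    # Deferred import, mirroring the patch's __import__('threading').RLock():
    # threading is loaded only when a lock is actually created, not as a side
    # effect of importing the module that defines this function.
    import threading
    return threading.RLock()

lock = make_lock()
acquired = lock.acquire()
lock.release()
print(type(lock).__name__, acquired)
```

This keeps `threading` out of `sys.modules` for programs that never call `make_lock()`, which is exactly the invariant gevent's monkey-patching relies on.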
94f50f8ee6872007d46c385f7af253497273255a
e245ed7d1e23b5c8bc0d568bd1a2f06ae92d631a
python/cpython
python__cpython-117978
# `os.chmod()`, `os.chown()` & `os.listdir()` weren't added in Python 3.3 # Documentation The documentation of these functions states they were added in Python 3.3: - [os.chmod()](https://docs.python.org/3.13/library/os.html#os.chmod) > New in version 3.3: Added support for specifying path as an open file descriptor, and the dir_fd and follow_symlinks arguments. - [os.chown](https://docs.python.org/3.13/library/os.html#os.chown) > New in version 3.3: Added support for specifying path as an open file descriptor, and the dir_fd and follow_symlinks arguments. - [os.listdir](https://docs.python.org/3.13/library/os.html#os.listdir) > New in version 3.3: Added support for specifying path as an open file descriptor. That's clearly wrong, it should've been "Changed in version 3.3". <!-- gh-linked-prs --> ### Linked PRs * gh-117978 * gh-117992 <!-- /gh-linked-prs -->
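The 3.3 behavior change the docs are describing can be seen with `os.listdir()`, which on POSIX also accepts an open directory file descriptor instead of a path (a small sketch, not from the issue):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "x.txt"), "w").close()
    # Open the directory itself and list it by fd (POSIX-only behavior).
    fd = os.open(d, os.O_RDONLY)
    try:
        names = os.listdir(fd)
        print(names)
    finally:
        os.close(fd)
```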
fccedbda9316d52d93b2db855c07f947fab26ae2
5a0209fc23de113747058858a4d2e5fc8213711e
python/cpython
python__cpython-117976
# If a flush level is specified as a text value in a logging configuration dictionary, `dictConfig()` does not convert it to a numeric value. # Bug report ### Bug description: If setting up e.g. a `MemoryHandler` with a `flushLevel` using a configuration dictionary like this: ```python { # other elements omitted "flushLevel": "ERROR", # other elements omitted } ``` then you can get a `TypeError` when logging because the value is not converted to `logging.ERROR`: ```console logging/handlers.py", line 1384, in shouldFlush (record.levelno >= self.flushLevel) TypeError: '>=' not supported between instances of 'int' and 'str' ``` ### CPython versions tested on: CPython main branch ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-117976 * gh-117986 <!-- /gh-linked-prs -->
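A sketch of what `dictConfig()` should end up constructing: `flushLevel` resolved to the numeric `logging.ERROR` (40) rather than left as the string `"ERROR"`, so that `shouldFlush()`'s `>=` comparison is int-vs-int:

```python
import logging
import logging.handlers

handler = logging.handlers.MemoryHandler(
    capacity=100,
    flushLevel=logging.ERROR,       # numeric, not the string "ERROR"
    target=logging.NullHandler(),
)
record = logging.LogRecord("demo", logging.ERROR, __file__, 1,
                           "boom", None, None)
# With a numeric flushLevel the comparison succeeds instead of raising
# TypeError as in the reported traceback.
print(handler.shouldFlush(record))
```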
6d0bb43232dd6ebc5245daa4fe29f07f815f0bad
b9b3c455f0293be67a762f653bd22f864d15fe3c
python/cpython
python__cpython-117982
# Add tests for the PyEval_Run family of the C API Functional tests for the `PyEval_Run` family of the C API lack tests. Discovered in the discussions of PR #116637. - [ ] `PyRun_AnyFile` (may be a macro) - [ ] `PyRun_AnyFileEx` (may be a macro) - [ ] `PyRun_AnyFileExFlags` - [ ] `PyRun_AnyFileFlags` (may be a macro) - [ ] `PyRun_File` (may be a macro) - [ ] `PyRun_FileEx` (may be a macro) - [ ] `PyRun_FileFlags` (may be a macro) - [ ] `PyRun_InteractiveLoop` (may be a macro) - [ ] `PyRun_InteractiveLoopFlags` - [ ] `PyRun_InteractiveOne` (may be a macro) - [ ] `PyRun_InteractiveOneFlags` - [ ] `PyRun_InteractiveOneObject` - [ ] `PyRun_SimpleFile` (may be a macro) - [ ] `PyRun_SimpleFileEx` (may be a macro) - [ ] `PyRun_SimpleFileExFlags` - [ ] `PyRun_SimpleString` (may be a macro) - [ ] `PyRun_SimpleStringFlags` - [ ] `PyRun_String` (may be a macro) <!-- gh-linked-prs --> ### Linked PRs * gh-117982 * gh-118011 * gh-118230 * gh-118266 <!-- /gh-linked-prs -->
6078f2033ea15a16cf52fe8d644a95a3be72d2e3
c1d7147c820545bb0a97a072fdba82154fd97ab6
python/cpython
python__cpython-117959
# Expose jit_code field for UOp Executor # Feature or enhancement ### Proposal: I would like to add a method to the UOpExecutor type to expose the JIT code via a byte string using the `jit_code` and `jit_size` fields of the executor object. This would be useful for testing and debugging, as well as verification code. ```python from _opcode import get_executor def get_executors(func): code = func.__code__ co_code = code.co_code executors = {} for i in range(0, len(co_code), 2): try: executors[i] = co_code[i], get_executor(code, i) except ValueError: pass return executors def testfunc(x): i = 0 while i < x: i += 1 testfunc(20) ex = get_executors(testfunc) with open('jit_dump.raw', 'wb') as dumpfile: for i, executor in ex.items(): print(i, executor[0], executor[1]) try: code = executor[1].get_jit_code() dumpfile.write(code) except ValueError: print('Failed to get JIT code for', executor[0]) def f(): a = [0, 1, 2, 3] return a[1] ``` ### Has this already been discussed elsewhere? https://discuss.python.org/t/jit-mapping-bytecode-instructions-and-assembly/50809 ### Links to previous discussion of this feature: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-117959 <!-- /gh-linked-prs -->
beb653cc24275025708758d444835db2ddbb74e4
4a08a75cf4c490f7c43ede69bdf6e5a79c6a3af3
python/cpython
python__cpython-117947
# `os.path.ismount()` wasn't added in Python 3.4 # Documentation The [documentation](https://docs.python.org/3.13/library/os.path.html#os.path.ismount) of os.path.ismount() states it was added in Python 3.4: > New in version 3.4: Support for detecting non-root mount points on Windows. That's clearly wrong, it should've been: > Changed in version 3.4: Support for detecting non-root mount points on Windows. <!-- gh-linked-prs --> ### Linked PRs * gh-117947 * gh-117952 <!-- /gh-linked-prs -->
a23fa3368e50866f31d6fc1c66a9a5ca2a580239
dd4383f3c12fc938a445d974543f897c3fc07c0a
python/cpython
python__cpython-117931
# C API: Restore removed PyEval_InitThreads() function Since Python 3.7, PyEval_InitThreads() does nothing, since the GIL is now always created: https://vstinner.github.io/python37-gil-change.html This function is deprecated since Python 3.9 and I removed it in Python 3.13 alpha1. Problem: my Fedora team identified that 16 projects are affected by this function removal. * collectd * freeradius * gnumeric * libesedb * libsigrokdecode * OpenIPMI * openscap * profanity * pyliblo * pyotherside * python-confluent-kafka * python-cradox * python-gphoto2 * python-simpleaudio * python-subvertpy * rb_libtorrent I propose to restore the function in Python 3.13 beta1, and remove it again in Python 3.14 alpha1. <!-- gh-linked-prs --> ### Linked PRs * gh-117931 <!-- /gh-linked-prs -->
75eed5b3734edb221cabb8322d8b8bdf9e3ee6b1
6d0bb43232dd6ebc5245daa4fe29f07f815f0bad
python/cpython
python__cpython-117924
# ``test_webbrowser`` prints unnecessary information # Bug report ### Bug description: ```python ./python.exe -m test -v test_webbrowser -m test_parse_args_error == CPython 3.13.0a6+ (heads/main:2cc916e147, Apr 16 2024, 11:08:11) [Clang 15.0.0 (clang-1500.1.0.2.5)] == macOS-14.2.1-arm64-arm-64bit-Mach-O little-endian == Python build: debug == cwd: /Users/admin/Projects/cpython/build/test_python_worker_26916æ == CPU count: 8 == encodings: locale=UTF-8 FS=utf-8 == resources: all test resources are disabled, use -u option to unskip tests Using random seed: 2903510780 0:00:00 load avg: 2.42 Run 1 test sequentially 0:00:00 load avg: 2.42 [1/1] test_webbrowser test_parse_args_error (test.test_webbrowser.CliTest.test_parse_args_error) ... usage: __main__.py [-h] [-n | -t] url __main__.py: error: argument -t/--new-tab: not allowed with argument -n/--new-window usage: __main__.py [-h] [-n | -t] url __main__.py: error: argument -t/--new-tab: not allowed with argument -n/--new-window usage: __main__.py [-h] [-n | -t] url __main__.py: error: argument -t/--new-tab: not allowed with argument -n/--new-window usage: __main__.py [-h] [-n | -t] url __main__.py: error: argument -t/--new-tab: not allowed with argument -n/--new-window usage: __main__.py [-h] [-n | -t] url __main__.py: error: ambiguous option: --new could match --new-window, --new-tab ok ---------------------------------------------------------------------- Ran 1 test in 0.003s OK == Tests result: SUCCESS == 1 test OK. Total duration: 145 ms Total tests: run=1 (filtered) Total test files: run=1/1 (filtered) Result: SUCCESS ``` ### CPython versions tested on: CPython main branch ### Operating systems tested on: macOS <!-- gh-linked-prs --> ### Linked PRs * gh-117924 <!-- /gh-linked-prs -->
8123c34faa5aab20edc268c7f8a81e6a765af366
c69968ff69b59b27d43708379e4399f424f92075
python/cpython
python__cpython-117922
# test_inspect: ValueError: no signature found for builtin <built-in function getobjects> # Bug report ### Bug description: ### Configuration: ```sh ./configure --with-trace-refs ``` ### Tests ```python ./python -m test test_inspect ``` Output: ```python Using random seed: 1177238287 0:00:00 load avg: 2.78 Run 1 test sequentially 0:00:00 load avg: 2.78 [1/1] test_inspect.test_inspect test test_inspect.test_inspect failed -- Traceback (most recent call last): File "/home/arf/cpython/Lib/test/test_inspect/test_inspect.py", line 5261, in _test_module_has_signatures self.assertIsNotNone(inspect.signature(obj)) ~~~~~~~~~~~~~~~~~^^^^^ File "/home/arf/cpython/Lib/inspect.py", line 3360, in signature return Signature.from_callable(obj, follow_wrapped=follow_wrapped, ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ globals=globals, locals=locals, eval_str=eval_str) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/arf/cpython/Lib/inspect.py", line 3090, in from_callable return _signature_from_callable(obj, sigcls=cls, ~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ follow_wrapper_chains=follow_wrapped, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ globals=globals, locals=locals, eval_str=eval_str) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/arf/cpython/Lib/inspect.py", line 2605, in _signature_from_callable return _signature_from_builtin(sigcls, obj, ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^ skip_bound_arg=skip_bound_arg) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/arf/cpython/Lib/inspect.py", line 2395, in _signature_from_builtin raise ValueError("no signature found for builtin {!r}".format(func)) ValueError: no signature found for builtin <built-in function getobjects> test_inspect.test_inspect failed (1 error) == Tests result: FAILURE == 1 test failed: test_inspect.test_inspect Total duration: 947 ms Total tests: run=330 Total test files: run=1/1 failed=1 Result: FAILURE ``` ### CPython versions tested on: CPython main branch ### Operating systems
tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-117922 <!-- /gh-linked-prs -->
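The failure mode in the traceback is general: `inspect.signature()` raises `ValueError` for any C builtin that carries no text signature (like `sys.getobjects` in a `--with-trace-refs` build). A small sketch of the usual defensive pattern, with an illustrative helper name:

```python
import inspect

def safe_signature(obj):
    # inspect.signature() raises ValueError for builtins that expose no
    # __text_signature__; fall back to None instead of propagating.
    try:
        return inspect.signature(obj)
    except ValueError:
        return None

print(safe_signature(len))
```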
44890b209ebe2efaf4f57eed04967948547cfa3b
8429b4565deaef7a86bffc0ce58bc0eab1d7ae48
python/cpython
python__cpython-117925
# Documentation says that static methods can be called as regular functions, but I don't see how from the documentation # Documentation The [documentation](https://docs.python.org/3/library/functions.html#staticmethod:~:text=Moreover%2C%20they%20can%20be%20called%20as%20regular%20functions%20(such%20as%20f()).) states that static methods can be called as "regular" functions, but I don't see how that's possible from the documentation. > Moreover, they can be called as regular functions (such as f()). I tried the following example that didn't work: ```python class C: @staticmethod def the_static_method(x): pass the_static_method(5) ``` I get: ``` Traceback (most recent call last): File "....py", line 6, in <module> the_static_method(5) NameError: name 'the_static_method' is not defined. Did you mean: 'staticmethod'? ``` <!-- gh-linked-prs --> ### Linked PRs * gh-117925 * gh-118509 <!-- /gh-linked-prs -->
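What the docs' "called as regular functions" phrasing appears to mean is that the descriptor does no binding, so the attribute can be used as a plain function once you reach it through the class, an instance, or a name bound to it (a sketch expanding the issue's example):

```python
class C:
    @staticmethod
    def the_static_method(x):
        return x * 2

# The name still lives in the class namespace, so it must be reached
# through the class (or an instance)...
print(C.the_static_method(5))
# ...but once bound to a plain name it behaves like any ordinary function:
# no implicit self/cls is inserted on the call.
f = C.the_static_method
print(f(5))
```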
b3372481b6cae5766330b041c4622c28cee2119f
7d2ffada0a6490e6839697f729bcd80380e9f561
python/cpython
python__cpython-118007
# _PyCompile_CodeGen compiles nested function all the way to code object # Bug report The ``_PyCompile_CodeGen`` test utility is supposed to run only codegen and return an instruction sequence. However, it compiles nested functions all the way to code object. Need to teach it how to return the instruction sequence for the nested function. <!-- gh-linked-prs --> ### Linked PRs * gh-118007 <!-- /gh-linked-prs -->
0aa0fc3d3ca144f979c684552a56a18ed8f558e4
692e902c742f577f9fc8ed81e60ed9dd6c994e1e
python/cpython
python__cpython-117893
# test_peg_generator fails on a PGO+LTO build with clang # Bug report Example on aarch64 Debian Clang LTO + PGO 3.x: https://buildbot.python.org/all/#/builders/1084/builds/4019 ``` 0:02:04 load avg: 5.35 [290/473/1] test_peg_generator failed (22 errors) -- running (2): test_io (32.6 sec), test_socket (32.8 sec) (...) Successfully installed setuptools-67.6.1 wheel-0.43.0 (...) test_ternary_operator (test.test_peg_generator.test_c_parser.TestCParser.test_ternary_operator) ... warning: profile data may be out of date: of 9 functions, 1 has mismatched data that will be ignored [-Wprofile-instr-out-of-date] 1 warning generated. ./parse.o: file not recognized: file format not recognized clang: error: linker command failed with exit code 1 (use -v to see invocation) ERROR ====================================================================== ERROR: test_ternary_operator (test.test_peg_generator.test_c_parser.TestCParser.test_ternary_operator) ---------------------------------------------------------------------- (...) distutils.errors.LinkError: command '/usr/local/bin/clang' failed with exit code 1 ``` <!-- gh-linked-prs --> ### Linked PRs * gh-117893 * gh-117895 <!-- /gh-linked-prs -->
64cd6fc9a6a3c3c19091a1c81cbbe8994583017d
784e076a10e828f383282df8a4b993a1b821f547
python/cpython
python__cpython-117882
# async generator allows concurrent access via async_gen_athrow_throw and async_gen_asend_throw # Bug report ### Bug description: ```python import types import itertools @types.coroutine def _async_yield(v): return (yield v) class MyExc(Exception): pass async def agenfn(): for i in itertools.count(): try: await _async_yield(i) except MyExc: pass return yield agen = agenfn() gen = agen.asend(None) print(f"{gen.send(None)}") gen2 = agen.asend(None) try: print(f"{gen2.throw(MyExc)}") except RuntimeError: print("good") else: print("bad") gen3 = agen.athrow(MyExc) try: print(f"{gen3.throw(MyExc)}") except RuntimeError: print("good") else: print("bad") ``` outputs: ``` 0 1 bad 2 bad ``` should print: ``` 0 good good ``` ### CPython versions tested on: 3.8, 3.9, 3.10, 3.11, 3.12, 3.13, CPython main branch ### Operating systems tested on: Linux see also https://github.com/python/cpython/pull/7468 and https://github.com/python/cpython/issues/74956 <!-- gh-linked-prs --> ### Linked PRs * gh-117882 * gh-118458 <!-- /gh-linked-prs -->
fc7e1aa3c001bbce25973261fba457035719a559
2520eed0a529be3815f70c43e1a5006deeee5596
python/cpython
python__cpython-117932
# test_httpservers: OSError: [Errno 39] Directory not empty ### Bug description: ### Configuration: ```sh ./configure --enable-profiling ``` ### Tests ```python ./python -m test test_httpservers -v ``` Output: ```python == CPython 3.13.0a6+ (heads/main:e01831760e, Apr 14 2024, 23:35:24) [GCC 13.2.1 20230801] == Linux-6.8.4-arch1-1-x86_64-with-glibc2.39 little-endian == Python build: release == cwd: /home/arf/cpython/build/test_python_worker_59354æ == CPU count: 16 == encodings: locale=UTF-8 FS=utf-8 == resources: all test resources are disabled, use -u option to unskip tests Using random seed: 3724174862 0:00:00 load avg: 6.37 Run 1 test sequentially 0:00:00 load avg: 6.37 [1/1] test_httpservers test_close_connection (test.test_httpservers.BaseHTTPRequestHandlerTestCase.test_close_connection) ... ok test_date_time_string (test.test_httpservers.BaseHTTPRequestHandlerTestCase.test_date_time_string) ... ok test_extra_space (test.test_httpservers.BaseHTTPRequestHandlerTestCase.test_extra_space) ... ok test_header_buffering_of_send_error (test.test_httpservers.BaseHTTPRequestHandlerTestCase.test_header_buffering_of_send_error) ... ok test_header_buffering_of_send_header (test.test_httpservers.BaseHTTPRequestHandlerTestCase.test_header_buffering_of_send_header) ... ok test_header_buffering_of_send_response_only (test.test_httpservers.BaseHTTPRequestHandlerTestCase.test_header_buffering_of_send_response_only) ... ok test_header_length (test.test_httpservers.BaseHTTPRequestHandlerTestCase.test_header_length) ... ok test_header_unbuffered_when_continue (test.test_httpservers.BaseHTTPRequestHandlerTestCase.test_header_unbuffered_when_continue) ... ok test_html_escape_on_error (test.test_httpservers.BaseHTTPRequestHandlerTestCase.test_html_escape_on_error) ... ok test_http_0_9 (test.test_httpservers.BaseHTTPRequestHandlerTestCase.test_http_0_9) ... ok test_http_1_0 (test.test_httpservers.BaseHTTPRequestHandlerTestCase.test_http_1_0) ... 
ok test_http_1_1 (test.test_httpservers.BaseHTTPRequestHandlerTestCase.test_http_1_1) ... ok test_request_length (test.test_httpservers.BaseHTTPRequestHandlerTestCase.test_request_length) ... ok test_too_many_headers (test.test_httpservers.BaseHTTPRequestHandlerTestCase.test_too_many_headers) ... ok test_unprintable_not_logged (test.test_httpservers.BaseHTTPRequestHandlerTestCase.test_unprintable_not_logged) ... ok test_with_continue_1_0 (test.test_httpservers.BaseHTTPRequestHandlerTestCase.test_with_continue_1_0) ... ok test_with_continue_1_1 (test.test_httpservers.BaseHTTPRequestHandlerTestCase.test_with_continue_1_1) ... ok test_with_continue_rejected (test.test_httpservers.BaseHTTPRequestHandlerTestCase.test_with_continue_rejected) ... ok test_command (test.test_httpservers.BaseHTTPServerTestCase.test_command) ... ok test_error_content_length (test.test_httpservers.BaseHTTPServerTestCase.test_error_content_length) ... ok test_handler (test.test_httpservers.BaseHTTPServerTestCase.test_handler) ... ok test_head_via_send_error (test.test_httpservers.BaseHTTPServerTestCase.test_head_via_send_error) ... ok test_header_close (test.test_httpservers.BaseHTTPServerTestCase.test_header_close) ... ok test_header_keep_alive (test.test_httpservers.BaseHTTPServerTestCase.test_header_keep_alive) ... ok test_internal_key_error (test.test_httpservers.BaseHTTPServerTestCase.test_internal_key_error) ... ok test_latin1_header (test.test_httpservers.BaseHTTPServerTestCase.test_latin1_header) ... ok test_major_version_number_too_long (test.test_httpservers.BaseHTTPServerTestCase.test_major_version_number_too_long) ... ok test_minor_version_number_too_long (test.test_httpservers.BaseHTTPServerTestCase.test_minor_version_number_too_long) ... ok test_request_line_trimming (test.test_httpservers.BaseHTTPServerTestCase.test_request_line_trimming) ... ok test_return_custom_status (test.test_httpservers.BaseHTTPServerTestCase.test_return_custom_status) ... 
ok test_return_explain_error (test.test_httpservers.BaseHTTPServerTestCase.test_return_explain_error) ... ok test_return_header_keep_alive (test.test_httpservers.BaseHTTPServerTestCase.test_return_header_keep_alive) ... ok test_send_blank (test.test_httpservers.BaseHTTPServerTestCase.test_send_blank) ... ok test_send_error (test.test_httpservers.BaseHTTPServerTestCase.test_send_error) ... ok test_version_bogus (test.test_httpservers.BaseHTTPServerTestCase.test_version_bogus) ... ok test_version_digits (test.test_httpservers.BaseHTTPServerTestCase.test_version_digits) ... ok test_version_invalid (test.test_httpservers.BaseHTTPServerTestCase.test_version_invalid) ... ok test_version_none (test.test_httpservers.BaseHTTPServerTestCase.test_version_none) ... ok test_version_none_get (test.test_httpservers.BaseHTTPServerTestCase.test_version_none_get) ... ok test_version_signs_and_underscores (test.test_httpservers.BaseHTTPServerTestCase.test_version_signs_and_underscores) ... ok test_accept (test.test_httpservers.CGIHTTPServerTestCase.test_accept) ... ERROR test_authorization (test.test_httpservers.CGIHTTPServerTestCase.test_authorization) ... ERROR test_cgi_path_in_sub_directories (test.test_httpservers.CGIHTTPServerTestCase.test_cgi_path_in_sub_directories) ... ERROR test_headers_and_content (test.test_httpservers.CGIHTTPServerTestCase.test_headers_and_content) ... ERROR test_invaliduri (test.test_httpservers.CGIHTTPServerTestCase.test_invaliduri) ... ok test_issue19435 (test.test_httpservers.CGIHTTPServerTestCase.test_issue19435) ... ok test_nested_cgi_path_issue21323 (test.test_httpservers.CGIHTTPServerTestCase.test_nested_cgi_path_issue21323) ... ERROR test_no_leading_slash (test.test_httpservers.CGIHTTPServerTestCase.test_no_leading_slash) ... ERROR test_os_environ_is_not_altered (test.test_httpservers.CGIHTTPServerTestCase.test_os_environ_is_not_altered) ... ERROR test_post (test.test_httpservers.CGIHTTPServerTestCase.test_post) ... 
ERROR test_query_with_continuous_slashes (test.test_httpservers.CGIHTTPServerTestCase.test_query_with_continuous_slashes) ... ERROR test_query_with_multiple_question_mark (test.test_httpservers.CGIHTTPServerTestCase.test_query_with_multiple_question_mark) ... ERROR test_url_collapse_path (test.test_httpservers.CGIHTTPServerTestCase.test_url_collapse_path) ... ok test_urlquote_decoding_in_cgi_check (test.test_httpservers.CGIHTTPServerTestCase.test_urlquote_decoding_in_cgi_check) ... ERROR test_all (test.test_httpservers.MiscTestCase.test_all) ... ok test_err (test.test_httpservers.RequestHandlerLoggingTestCase.test_err) ... ok test_get (test.test_httpservers.RequestHandlerLoggingTestCase.test_get) ... ok test_server_test_ipv4 (test.test_httpservers.ScriptTestCase.test_server_test_ipv4) ... ok test_server_test_ipv6 (test.test_httpservers.ScriptTestCase.test_server_test_ipv6) ... ok test_server_test_localhost (test.test_httpservers.ScriptTestCase.test_server_test_localhost) ... ok test_server_test_unspec (test.test_httpservers.ScriptTestCase.test_server_test_unspec) ... ok test_query_arguments (test.test_httpservers.SimpleHTTPRequestHandlerTestCase.test_query_arguments) ... ok test_start_with_double_slash (test.test_httpservers.SimpleHTTPRequestHandlerTestCase.test_start_with_double_slash) ... ok test_windows_colon (test.test_httpservers.SimpleHTTPRequestHandlerTestCase.test_windows_colon) ... ok test_browser_cache (test.test_httpservers.SimpleHTTPServerTestCase.test_browser_cache) Check that when a request to /test is sent with the request header ... ok test_browser_cache_file_changed (test.test_httpservers.SimpleHTTPServerTestCase.test_browser_cache_file_changed) ... ok test_browser_cache_with_If_None_Match_header (test.test_httpservers.SimpleHTTPServerTestCase.test_browser_cache_with_If_None_Match_header) ... ok test_get (test.test_httpservers.SimpleHTTPServerTestCase.test_get) ... 
ok test_get_dir_redirect_location_domain_injection_bug (test.test_httpservers.SimpleHTTPServerTestCase.test_get_dir_redirect_location_domain_injection_bug) Ensure //evil.co/..%2f../../X does not put //evil.co/ in Location. ... ok test_head (test.test_httpservers.SimpleHTTPServerTestCase.test_head) ... ok test_html_escape_filename (test.test_httpservers.SimpleHTTPServerTestCase.test_html_escape_filename) ... ok test_invalid_requests (test.test_httpservers.SimpleHTTPServerTestCase.test_invalid_requests) ... ok test_last_modified (test.test_httpservers.SimpleHTTPServerTestCase.test_last_modified) Checks that the datetime returned in Last-Modified response header ... ok test_path_without_leading_slash (test.test_httpservers.SimpleHTTPServerTestCase.test_path_without_leading_slash) ... ok test_undecodable_filename (test.test_httpservers.SimpleHTTPServerTestCase.test_undecodable_filename) ... ok test_undecodable_parameter (test.test_httpservers.SimpleHTTPServerTestCase.test_undecodable_parameter) ... 
ok

======================================================================
ERROR: test_accept (test.test_httpservers.CGIHTTPServerTestCase.test_accept)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/arf/cpython/Lib/test/test_httpservers.py", line 818, in tearDown
    os.rmdir(self.parent_dir)
    ~~~~~~~~^^^^^^^^^^^^^^^^^
OSError: [Errno 39] Directory not empty: '/tmp/tmp1h1tzv4r'

======================================================================
ERROR: test_authorization (test.test_httpservers.CGIHTTPServerTestCase.test_authorization)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/arf/cpython/Lib/test/test_httpservers.py", line 818, in tearDown
    os.rmdir(self.parent_dir)
    ~~~~~~~~^^^^^^^^^^^^^^^^^
OSError: [Errno 39] Directory not empty: '/tmp/tmpbgszmbsn'

======================================================================
ERROR: test_cgi_path_in_sub_directories (test.test_httpservers.CGIHTTPServerTestCase.test_cgi_path_in_sub_directories)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/arf/cpython/Lib/test/test_httpservers.py", line 818, in tearDown
    os.rmdir(self.parent_dir)
    ~~~~~~~~^^^^^^^^^^^^^^^^^
OSError: [Errno 39] Directory not empty: '/tmp/tmp9y_so6st'

======================================================================
ERROR: test_headers_and_content (test.test_httpservers.CGIHTTPServerTestCase.test_headers_and_content)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/arf/cpython/Lib/test/test_httpservers.py", line 818, in tearDown
    os.rmdir(self.parent_dir)
    ~~~~~~~~^^^^^^^^^^^^^^^^^
OSError: [Errno 39] Directory not empty: '/tmp/tmpz35qy5vx'

======================================================================
ERROR: test_nested_cgi_path_issue21323 (test.test_httpservers.CGIHTTPServerTestCase.test_nested_cgi_path_issue21323)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/arf/cpython/Lib/test/test_httpservers.py", line 818, in tearDown
    os.rmdir(self.parent_dir)
    ~~~~~~~~^^^^^^^^^^^^^^^^^
OSError: [Errno 39] Directory not empty: '/tmp/tmp_cvvyzsz'

======================================================================
ERROR: test_no_leading_slash (test.test_httpservers.CGIHTTPServerTestCase.test_no_leading_slash)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/arf/cpython/Lib/test/test_httpservers.py", line 818, in tearDown
    os.rmdir(self.parent_dir)
    ~~~~~~~~^^^^^^^^^^^^^^^^^
OSError: [Errno 39] Directory not empty: '/tmp/tmpel2n0m1e'

======================================================================
ERROR: test_os_environ_is_not_altered (test.test_httpservers.CGIHTTPServerTestCase.test_os_environ_is_not_altered)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/arf/cpython/Lib/test/test_httpservers.py", line 818, in tearDown
    os.rmdir(self.parent_dir)
    ~~~~~~~~^^^^^^^^^^^^^^^^^
OSError: [Errno 39] Directory not empty: '/tmp/tmpazpth3_5'

======================================================================
ERROR: test_post (test.test_httpservers.CGIHTTPServerTestCase.test_post)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/arf/cpython/Lib/test/test_httpservers.py", line 818, in tearDown
    os.rmdir(self.parent_dir)
    ~~~~~~~~^^^^^^^^^^^^^^^^^
OSError: [Errno 39] Directory not empty: '/tmp/tmpyv1olkj6'

======================================================================
ERROR: test_query_with_continuous_slashes (test.test_httpservers.CGIHTTPServerTestCase.test_query_with_continuous_slashes)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/arf/cpython/Lib/test/test_httpservers.py", line 818, in tearDown
    os.rmdir(self.parent_dir)
    ~~~~~~~~^^^^^^^^^^^^^^^^^
OSError: [Errno 39] Directory not empty: '/tmp/tmpoovm41tl'

======================================================================
ERROR: test_query_with_multiple_question_mark (test.test_httpservers.CGIHTTPServerTestCase.test_query_with_multiple_question_mark)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/arf/cpython/Lib/test/test_httpservers.py", line 818, in tearDown
    os.rmdir(self.parent_dir)
    ~~~~~~~~^^^^^^^^^^^^^^^^^
OSError: [Errno 39] Directory not empty: '/tmp/tmps71xpjbp'

======================================================================
ERROR: test_urlquote_decoding_in_cgi_check (test.test_httpservers.CGIHTTPServerTestCase.test_urlquote_decoding_in_cgi_check)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/arf/cpython/Lib/test/test_httpservers.py", line 818, in tearDown
    os.rmdir(self.parent_dir)
    ~~~~~~~~^^^^^^^^^^^^^^^^^
OSError: [Errno 39] Directory not empty: '/tmp/tmpasfmgl7g'

----------------------------------------------------------------------
Ran 76 tests in 2.008s

FAILED (errors=11)
test test_httpservers failed
test_httpservers failed (11 errors)

== Tests result: FAILURE ==

1 test failed:
    test_httpservers

Total duration: 2.1 sec
Total tests: run=76
Total test files: run=1/1 failed=1
Result: FAILURE
```

### CPython versions tested on: CPython main branch

### Operating systems tested on: Linux

<!-- gh-linked-prs -->
### Linked PRs

* gh-117932
* gh-117969
<!-- /gh-linked-prs -->
8429b4565deaef7a86bffc0ce58bc0eab1d7ae48
8515fd79fef1ac16d7848cec5ec1797294cb5366
python/cpython
python__cpython-134868
# asyncgen.athrow() checks args on asyncgen.athrow().send() but should check them on asyncgen.athrow()

# Bug report

### Bug description:

```python
async def agen():
    return
    yield

try:
    athrow = agen().athrow()
    athrow.close()
except TypeError:
    print("good")
else:
    print("bad")
```

output:

```
bad
```

### CPython versions tested on: 3.8, 3.9, 3.10, 3.11, 3.12, 3.13, CPython main branch

### Operating systems tested on: Linux

<!-- gh-linked-prs -->
### Linked PRs

* gh-134868
<!-- /gh-linked-prs -->
b6237c3602075294a18dec821773429a51fa7e0d
9c72658e49806eae52346a0905c1c176d3d49a2f
python/cpython
python__cpython-117870
# readline module fails to build against recent libedit

# Bug report

### Bug description:

Building Python 3.13.0a6 configured with `--with-readline=editline` fails like this for me:

```pytb
./Modules/readline.c:1305:21: error: incompatible function pointer types assigning to 'rl_hook_func_t *' (aka 'int (*)(void)') from 'int (const char *, int)' [-Wincompatible-function-pointer-types]
    rl_startup_hook = on_startup_hook;
                    ^ ~~~~~~~~~~~~~~~
./Modules/readline.c:1307:23: error: incompatible function pointer types assigning to 'rl_hook_func_t *' (aka 'int (*)(void)') from 'int (const char *, int)' [-Wincompatible-function-pointer-types]
    rl_pre_input_hook = on_pre_input_hook;
                      ^ ~~~~~~~~~~~~~~~~~
2 errors generated.
```

It looks like e7e1116 is the cause. No editline header defines `_RL_FUNCTION_TYPEDEF`, but recent versions (in this case 20230828) declare these functions as taking `void` whereas Apple's version declares them the other way. I suspect that older clang versions and gcc may let you off with a warning rather than erroring on this type mismatch.

### CPython versions tested on: 3.13

### Operating systems tested on: macOS

<!-- gh-linked-prs -->
### Linked PRs

* gh-117870
<!-- /gh-linked-prs -->
8515fd79fef1ac16d7848cec5ec1797294cb5366
f74e51229c83e3265f905dc15283bfe0ec1a659e
python/cpython
python__cpython-117843
# Shlex docs: improper syntax highlighting in code snippet

# Snippet displays improper syntax highlight

In the `shlex` docs, under the subtitle [Improved Compatibility with Shells](https://docs.python.org/3/library/shlex.html#improved-compatibility-with-shells), the first snippet doesn't highlight properly. It should be highlighted with the Python color scheme.

<!-- gh-linked-prs -->
### Linked PRs

* gh-117843
* gh-117844
<!-- /gh-linked-prs -->
dd724239dd759879b9b6981114db25256449fc42
37a4cbd8727fe392dd5c78aea60a7c37fdbad89a
python/cpython
python__cpython-117869
# Don't call `lookdict_index` in `delitemif_lock_held`

The `delitemif_lock_held` function calls `lookdict_index` and passes the returned `hashpos` to `delitem_common`. We should just pass the actual `hash` to `delitem_common`.

The existing code is confusing and unnecessary, but it's not exactly a bug. The behavior is fine because it's safe in this case to pretend the hashtable index (`hashpos`) is a hash value. The resulting `size_t i = (size_t)hash & mask` to compute the index from the hash is effectively a no-op.

https://github.com/python/cpython/blob/069de14cb948f56b37e507f367b99c5563d3685e/Objects/dictobject.c#L2635-L2645

https://github.com/python/cpython/blob/069de14cb948f56b37e507f367b99c5563d3685e/Objects/dictobject.c#L2502-L2504

https://github.com/python/cpython/blob/069de14cb948f56b37e507f367b99c5563d3685e/Objects/dictobject.c#L995-L1000

<!-- gh-linked-prs -->
### Linked PRs

* gh-117869
<!-- /gh-linked-prs -->
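The "effectively a no-op" claim can be sanity-checked outside of C with a quick sketch (illustrative values, not CPython internals): dict tables have power-of-two sizes, so masking any in-range slot index by `size - 1` returns it unchanged.

```python
# CPython dict tables are sized as powers of two, so mask = size - 1.
# Any valid slot index i satisfies i & mask == i, which is why passing
# a slot index where a hash is expected happens to be harmless here.
size = 8              # example power-of-two table size
mask = size - 1
assert all((i & mask) == i for i in range(size))
```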
7bcc257e97ee080d1128788a12a1731afde26b0a
74b0658e6aa6d304cf1dffeab52a30d706ecce47
python/cpython
python__cpython-117796
# PGO: compiler warnings for source files with no profile data

PGO builds do not generate profile data for all source files. This means that for a PGO build, you may end up with compiler warnings for source files with no profile data.

Do we want to silence these warnings? If we do, it _may_ be harder to spot if PGO builds fail to include profile data correctly.

Spotted while working on #117752.

<!-- gh-linked-prs -->
### Linked PRs

* gh-117796
* gh-117859
* gh-117912
<!-- /gh-linked-prs -->
ed02eb6aa99ea27f57d0a3c303d8e825d8ef6d9c
644b1e7aac8f048ade4709f248c4d66b85800efc
python/cpython
python__cpython-117798
# Improve `test_descr.test_not_implemented`

# Bug report

Right now there are several problems that can be attributed to the long history of this test.

1. `operator` name here is not needed: https://github.com/python/cpython/blob/396b831850f0f364d584db4407a5d633f33e571c/Lib/test/test_descr.py#L4604
2. `rname` here is not used: https://github.com/python/cpython/blob/396b831850f0f364d584db4407a5d633f33e571c/Lib/test/test_descr.py#L4629

This means that `__r*__` methods are not tested. They were removed 18 years ago in https://github.com/python/cpython/commit/4886cc331ff158f8ede74878a436adfad205bd2d#diff-6b813570d02ae3544e7e0233ae21d647f8e2dc47486495557e8d16c80460e85b

I think that adding them back is a good thing.

<!-- gh-linked-prs -->
### Linked PRs

* gh-117798
* gh-117921
<!-- /gh-linked-prs -->
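For reference, exercising a reflected (`__r*__`) method can be sketched standalone (hypothetical class name; not the actual test code): when the left operand's `__add__` and the right operand's `__radd__` both return `NotImplemented`, the operation raises `TypeError`.

```python
# A right operand whose __radd__ declines the operation: int.__add__
# also returns NotImplemented for an unknown type, so the interpreter
# falls back to raising TypeError.
class NoAdd:
    def __radd__(self, other):
        return NotImplemented

try:
    1 + NoAdd()
except TypeError:
    outcome = "TypeError"
else:
    outcome = "no error"
assert outcome == "TypeError"
```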
1a1e013a4a526546c373afd887f2e25eecc984ad
2cc916e14797f34c8ce75213ea4f1e8390049c75
python/cpython
python__cpython-117788
# configure script is run with /bin/sh, but contains GNU bash logic

# Bug report

### Bug description:

I just built the python 3.13 alpha on Gentoo to have available for testing. While doing so, I discovered the following message from Gentoo's QA framework:

```
 * QA Notice: Abnormal configure code
 *
 * ./configure: 24692: test: Linux: unexpected operator
```

This happens when the shell itself hits code that isn't valid shell code.

### CPython versions tested on: 3.13

### Operating systems tested on: Linux

<!-- gh-linked-prs -->
### Linked PRs

* gh-117788
<!-- /gh-linked-prs -->
fd2bab9d287ef0879568662d4fedeae0a0c61d43
f268e328ed5d7b2df5bdad39691f6e4789a2fcde
python/cpython
python__cpython-117814
# venv with Windows Store base does not have correct sys.path

# Bug report

Appears to be new in 3.13.0a6, as I've been using it with a5 and it's only after updating that I noticed.

Basically, the venv's prefix and site-packages directories are not present in `sys.path` when the base interpreter is from a Store install, and it has the user site. I need to investigate further to figure out exactly why.

Test code:

```
> python3 -m venv testenv --without-pip
> testenv\Scripts\activate
> python -m site # expect "$PWD\testenv\Scripts\Lib\site-packages" in list
```

If anyone wants to help check it out, the Store page for alpha releases is https://www.microsoft.com/store/apps/9PNRBTZXMB4Z

<!-- gh-linked-prs -->
### Linked PRs

* gh-117814
<!-- /gh-linked-prs -->
4b10e209c76f9f36f8ae2e4d713b3a01591c1856
8942bf41dac49149a77f5396ab086d340de9c009
python/cpython
python__cpython-117785
# Allow CPython to build against cryptography libraries lacking post-handshake authentication

# Feature or enhancement

### Proposal:

As part of a series of changes [discussed on the python Ideas board](https://discuss.python.org/t/support-building-ssl-and-hashlib-modules-against-aws-lc/44505/6), this issue proposes placing guards around references to [TLSv1.3 post-handshake authentication](https://datatracker.ietf.org/doc/html/rfc8446#section-4.2.6) (PHA). This would allow CPython to build against cryptography libraries lacking full PHA support.

### Has this already been discussed elsewhere?

I have already discussed this feature proposal on Discourse

### Links to previous discussion of this feature:

https://discuss.python.org/t/support-building-ssl-and-hashlib-modules-against-aws-lc/44505/9

<!-- gh-linked-prs -->
### Linked PRs

* gh-117785
<!-- /gh-linked-prs -->
56a3ce2715509fc8e42ae40ec40ce6a590448da4
8d0cafd6f23777e0e5defa65e3da048db9368ca7
python/cpython
python__cpython-118112
# Temporarily immortalize objects that use deferred reference counting

# Feature or enhancement

Work is ongoing to implement [deferred reference counting](https://peps.python.org/pep-0703/#deferred-reference-counting) in the free-threaded build. The goal is to avoid many of the scaling bottlenecks caused by reference count contention. However, the work is complex and may not be finished in time for the 3.13 beta release and feature freeze.

In the meantime, we should temporarily immortalize objects that would use deferred reference counting in order to avoid bottlenecks that would inhibit scaling. This has some real downsides: objects that are immortalized are never collected so an application that repeatedly creates many of these objects will leak memory. I think the trade-off is still worth it: if we can't scale multithreaded programs, then the free-threaded build is not particularly useful. Once we implement deferred reference counting, we will get rid of this.

### What types of objects use deferred reference counting?

* Top-level functions
* Descriptors
* Modules and their dictionaries
* Heap types
* Code objects

### Refleak buildbots

We want the free-threading refleak buildbots to continue to work and catch leaks. In order to do that, we'll want to:

1) Only do the deferred -> immortal conversion when the first non-main thread is started
2) Disable the conversion at runtime in the refleak tests via an internal API

<!-- gh-linked-prs -->
### Linked PRs

* gh-118112
<!-- /gh-linked-prs -->
7ccacb220d99662b626c8bc63b00a27eaf604f0c
8d4b756fd31d4d91b55105b1241561e92cc571a3
python/cpython
python__cpython-129254
# inconsistent handling of duplicate ZipFile entries

# Bug report

### Bug description:

Create a ZIP file with duplicate central directory entries pointing to the same local file header (these can be found in the wild, see e.g. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1068705, this is just an easy way to create one for testing).

```python
>>> import zipfile
>>> with zipfile.ZipFile("foo.zip", "w") as zf:
...     info = zipfile.ZipInfo(filename="foo")
...     zf.writestr(info, "FOO")
...     zf.filelist.append(info)
```

Opening the duplicate entry fails if using the name or the later entry in `infolist()`, but works using the earlier entry (since the later one is considered to overlap with the earlier one, but the earlier one isn't considered to overlap with another entry or the central directory).

```python
>>> import zipfile
>>> zf = zipfile.ZipFile("foo.zip")
>>> zf.infolist()[0]
<ZipInfo filename='foo' filemode='?rw-------' file_size=3>
>>> zf.infolist()[1]
<ZipInfo filename='foo' filemode='?rw-------' file_size=3>
>>> zf.open("foo")  # fails
zipfile.BadZipFile: Overlapped entries: 'foo' (possible zip bomb)
>>> zf.open(zf.infolist()[1])  # fails
zipfile.BadZipFile: Overlapped entries: 'foo' (possible zip bomb)
>>> zf.open(zf.infolist()[0])  # works fine
<zipfile.ZipExtFile name='foo' mode='r'>
```

If I modify `NameToInfo` to contain the earlier entry instead, `zf.open("foo")` works fine.

On the one hand these ZIP files are broken. On the other hand, it would be easy to simply not overwrite existing entries in `NameToInfo`, allowing these files to be opened. And this affects real-world programs trying to open real-world files. So it could be considered a regression (caused by #110016). Perhaps a warning would be in order when duplicates are detected; e.g. `unzip` shows an error but does extract the files.

### CPython versions tested on: 3.11, 3.12

### Operating systems tested on: Linux

<!-- gh-linked-prs -->
### Linked PRs

* gh-129254
* gh-132263
* gh-132264
<!-- /gh-linked-prs -->
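The "don't overwrite existing entries in `NameToInfo`" suggestion amounts to a first-wins lookup. A standalone sketch of that idea (illustrative only; not the actual `zipfile` patch) using an in-memory archive built the same way as the report's repro:

```python
import io
import zipfile

# Build an archive with a duplicated central-directory entry, as in
# the report, then show that a first-wins name mapping keeps the
# earlier (openable) ZipInfo instead of the later overlapping one.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    info = zipfile.ZipInfo(filename="foo")
    zf.writestr(info, "FOO")
    zf.filelist.append(info)   # duplicate central-directory record

zf = zipfile.ZipFile(buf)      # re-read: two distinct ZipInfo objects
name_to_info = {}
for info in zf.infolist():
    name_to_info.setdefault(info.filename, info)  # first entry wins
assert name_to_info["foo"] is zf.infolist()[0]
```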
0f04f2456a2ff996cc670342a287928ab5f9b706
ac3c439cdfee8452f2bcceacd67a1f4e423ac3cf
python/cpython
python__cpython-121755
# Make `mocker.patch.dict` documentation clearer on actual behavior

# Documentation

The docs for the [`patch.dict`](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.patch.dict) method say:

`Patch a dictionary, or dictionary like object, and restore the dictionary to its original state after the test.`

The phrase "to its original state" can be misleading, because this method actually restores a [_copy_](https://github.com/python/cpython/blob/671cb22094df5b645f65c383213bfda17c8003c5/Lib/unittest/mock.py#L1917) of the original data. I ran into strange behavior with my tests because of my assumption this method would restore the original values, not copies of them.

Let me know if I'm misunderstanding anything :) Thanks all for your hard work!

<!-- gh-linked-prs -->
### Linked PRs

* gh-121755
<!-- /gh-linked-prs -->
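The distinction matters because the saved state is a shallow copy: restoring it brings back the same top-level keys, but mutations made to nested mutable values in the meantime persist. A minimal illustration with a plain dict mimicking what `patch.dict` saves and restores (no `mock` involved):

```python
original = {"key": ["a"]}
snapshot = original.copy()       # shallow copy, like patch.dict's save
original["key"].append("b")      # mutate a nested value during the "test"
original["extra"] = 1            # add a top-level key during the "test"
original.clear()
original.update(snapshot)        # "restore" from the copy
assert "extra" not in original          # top-level changes are undone
assert original["key"] == ["a", "b"]    # nested mutation survives restore
```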
8303d32ff55945c5b38eeeaf1b1811dbcf8aa9be
48042c52a6b59089e7d7dda3c8fed79d646b6f8d
python/cpython
python__cpython-117768
# Add more signatures for builtin functions and methods

# Feature or enhancement

#117671 adds many signatures for builtin functions and methods. But not all of them require multi-signature support. I opened this issue to add signatures for builtin objects which do not require new features.

<!-- gh-linked-prs -->
### Linked PRs

* gh-117768
* gh-117769
* gh-117770
* gh-117771
* gh-117772
* gh-117773
* gh-117774
* gh-117775
* gh-117776
* gh-117777
* gh-117806
* gh-117813
* gh-117816
<!-- /gh-linked-prs -->
6e05537676da56b37ba0c9dbb459b60422918988
deb921f85173a194afb4386553d85c3f99767ca1
python/cpython
python__cpython-117763
# The trashcan mechanism could be streamlined

The trashcan mechanism is over 20 years old and the code could do with a bit of streamlining. There is a lot of indirection and a fair bit of redundancy as it implements its own depth counter. Reworking the code to use the C recursion counter and removing most of the calls can streamline the code considerably.

<!-- gh-linked-prs -->
### Linked PRs

* gh-117763
<!-- /gh-linked-prs -->
147cd0581e35a10204776029aeaa7fa1901056bc
c917b3e8e113a3e1ffe118e581fac29eaf365191
python/cpython
python__cpython-123266
# Document the new incremental GC

# Documentation

- [x] Expand the "what's new" section a bit.
- [x] Update Doc/library/gc.rst, specifically how the thresholds work now.
- [x] Update the dev guide to explain how incremental collection works.

<!-- gh-linked-prs -->
### Linked PRs

* gh-123266
* gh-123395
* gh-126695
<!-- /gh-linked-prs -->
f49a91648aac2ad55b2e005ba28fac1c7edca020
460ee5b994335994d4b5186c08f44e775b3e55fa
python/cpython
python__cpython-117801
# Python 3.13.0a6 freethreading on s390x: `test.test_io.CBufferedReaderTest.test_constructor` crash with `Floating point exception`

# Bug report

### Bug description:

Since https://github.com/python/cpython/issues/114331 was solved, we once again attempted to build Python with freethreading on s390x Fedora Linux. `test.test_io.CBufferedReaderTest.test_constructor` fails. The traceback:

```
test_constructor (test.test_io.CBufferedReaderTest.test_constructor) ... Fatal Python error: Floating point exception

Current thread 0x000003ff8aa77b60 (most recent call first):
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/unittest/case.py", line 238 in handle
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/unittest/case.py", line 795 in assertRaises
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/test/test_io.py", line 1710 in test_constructor
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/unittest/case.py", line 606 in _callTestMethod
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/unittest/case.py", line 651 in run
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/unittest/case.py", line 707 in __call__
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/unittest/suite.py", line 122 in run
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/unittest/suite.py", line 84 in __call__
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/unittest/suite.py", line 122 in run
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/unittest/suite.py", line 84 in __call__
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/unittest/runner.py", line 240 in run
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/test/libregrtest/single.py", line 57 in _run_suite
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/test/libregrtest/single.py", line 37 in run_unittest
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/test/libregrtest/single.py", line 132 in test_func
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/test/libregrtest/single.py", line 88 in regrtest_runner
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/test/libregrtest/single.py", line 135 in _load_run_test
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/test/libregrtest/single.py", line 178 in _runtest_env_changed_exc
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/test/libregrtest/single.py", line 278 in _runtest
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/test/libregrtest/single.py", line 306 in run_single_test
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/test/libregrtest/worker.py", line 77 in worker_process
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/test/libregrtest/worker.py", line 112 in main
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/test/libregrtest/worker.py", line 116 in <module>
  File "<frozen runpy>", line 88 in _run_code
  File "<frozen runpy>", line 198 in _run_module_as_main

1 test failed again:
    test_io
```

Is this freethreading related? I don't know. Hoping to raise visibility and get pointers as to where the issue comes from.

cc @vstinner

### CPython versions tested on: 3.13

### Operating systems tested on: Linux

<!-- gh-linked-prs -->
### Linked PRs

* gh-117801
* gh-117809
* gh-117938
<!-- /gh-linked-prs -->
a9107fe5c0869c01b4c8db7a34c768ee8511505a
7bcc257e97ee080d1128788a12a1731afde26b0a
python/cpython
python__cpython-117789
# Weird PGO build issue on macOS 14.3

- Xcode: 15.3 (22618)
- Instruments: 15.3 (64565.111)

With ./configure --enable-optimizations, the PGO build itself is a success on my machine, but the status is weird.

The following build warnings are emitted at the first build:

```
clang: warning: argument unused during compilation: '-fno-semantic-interposition' [-Wunused-command-line-argument]
...
```

During the profiling stage, several tests do not pass as before.

```
# Next, run the profile task to generate the profile information.
LLVM_PROFILE_FILE="code-%p.profclangr" ./python.exe -m test --pgo --timeout=
Using random seed: 3569111910
Raised RLIMIT_NOFILE: 256 -> 1024
0:00:00 load avg: 4.25 Run 44 tests sequentially
0:00:00 load avg: 4.25 [ 1/44] test_array
0:00:00 load avg: 4.25 [ 2/44] test_base64
0:00:00 load avg: 4.25 [ 3/44] test_binascii -- test_base64 failed (env changed)
0:00:00 load avg: 4.25 [ 4/44] test_binop
0:00:00 load avg: 4.25 [ 5/44] test_bisect
0:00:00 load avg: 4.25 [ 6/44] test_bytes
0:00:02 load avg: 4.25 [ 7/44] test_bz2 -- test_bytes failed (env changed)
0:00:02 load avg: 4.25 [ 8/44] test_cmath
0:00:02 load avg: 4.25 [ 9/44] test_codecs
0:00:03 load avg: 4.25 [10/44] test_collections
0:00:03 load avg: 4.25 [11/44] test_complex
0:00:03 load avg: 4.25 [12/44] test_dataclasses
0:00:04 load avg: 4.25 [13/44] test_datetime
0:00:06 load avg: 4.63 [14/44] test_decimal
------------------------------------ NOTICE ------------------------------------
test_decimal may generate "malloc can't allocate region" warnings on macOS
systems. This behavior is known. Do not report a bug unless tests are also
failing.
See https://github.com/python/cpython/issues/85100
--------------------------------------------------------------------------------
0:00:08 load avg: 4.63 [15/44] test_difflib
0:00:09 load avg: 4.63 [16/44] test_embed
0:00:12 load avg: 4.50 [17/44] test_float -- test_embed failed (env changed)
0:00:12 load avg: 4.50 [18/44] test_fstring
0:00:13 load avg: 4.50 [19/44] test_functools
0:00:14 load avg: 4.50 [20/44] test_generators
0:00:14 load avg: 4.50 [21/44] test_hashlib
0:00:14 load avg: 4.62 [22/44] test_heapq
0:00:15 load avg: 4.62 [23/44] test_int
0:00:15 load avg: 4.62 [24/44] test_itertools
0:00:16 load avg: 4.62 [25/44] test_json -- test_itertools failed (env changed)
0:00:18 load avg: 4.62 [26/44] test_long -- test_json failed (env changed)
0:00:19 load avg: 4.41 [27/44] test_lzma
0:00:20 load avg: 4.41 [28/44] test_math
0:00:21 load avg: 4.41 [29/44] test_memoryview
0:00:21 load avg: 4.41 [30/44] test_operator
0:00:21 load avg: 4.41 [31/44] test_ordered_dict
0:00:21 load avg: 4.41 [32/44] test_patma
0:00:22 load avg: 4.41 [33/44] test_pickle
0:00:24 load avg: 4.41 [34/44] test_pprint
0:00:24 load avg: 4.41 [35/44] test_re
0:00:24 load avg: 4.38 [36/44] test_set -- test_re failed (env changed)
0:00:26 load avg: 4.38 [37/44] test_sqlite3
0:00:26 load avg: 4.38 [38/44] test_statistics -- test_sqlite3 failed (env changed)
0:00:29 load avg: 4.11 [39/44] test_str
0:00:30 load avg: 4.11 [40/44] test_struct -- test_str failed (env changed)
0:00:30 load avg: 4.11 [41/44] test_tabnanny -- test_struct failed (env changed)
0:00:30 load avg: 4.11 [42/44] test_time -- test_tabnanny failed (env changed)
0:00:32 load avg: 4.11 [43/44] test_xml_etree
0:00:33 load avg: 4.11 [44/44] test_xml_etree_c
Total duration: 33.8 sec
Total tests: run=9,203 skipped=182
Total test files: run=44/44 env_changed=10
Result: SUCCESS
```

Also, writing profile files is in a weird status.

```
/usr/bin/xcrun llvm-profdata merge -output=code.profclangd *.profclangr
LLVM Profile Error: Failed to write file "code-75991.profclangr": No such file or directory
LLVM Profile Error: Failed to write file "code-75991.profclangr": No such file or directory
LLVM Profile Error: Failed to write file "code-75991.profclangr": No such file or directory
LLVM Profile Error: Failed to write file "code-75991.profclangr": No such file or directory
LLVM Profile Error: Failed to write file "code-75991.profclangr": No such file or directory
LLVM Profile Error: Failed to write file "code-75991.profclangr": No such file or directory
LLVM Profile Error: Failed to write file "code-75991.profclangr": No such file or directory
LLVM Profile Error: Failed to write file "code-75991.profclangr": No such file or directory
LLVM Profile Error: Failed to write file "code-75991.profclangr": No such file or directory
LLVM Profile Error: Failed to write file "code-75991.profclangr": No such file or directory
LLVM Profile Error: Failed to write file "code-75991.profclangr": No such file or directory
```

cc @ned-deily @ronaldoussoren @erlend-aasland

<!-- gh-linked-prs -->
### Linked PRs

* gh-117789
* gh-117790
* gh-117795
* gh-117800
* gh-117803
* gh-117805
<!-- /gh-linked-prs -->

### Retargeted PRs:

* gh-117796
49fc1414b52b31f6ad0408775d160ec0559c33bb
396b831850f0f364d584db4407a5d633f33e571c
python/cpython
python__cpython-117808
# _PyObject_StoreInstanceAttribute assertion fails in setattr after __dict__.clear()

# Bug report

### Bug description:

On current main (edit: debug build),

```python
class C:
    def __init__(self):
        self.__dict__.clear()

obj = C()
obj.foo = None
```

fails with:

```
python: Objects/dictobject.c:6704: _PyObject_StoreInstanceAttribute: Assertion 'dict->ma_values == values' failed.
```

### CPython versions tested on: CPython main branch

### Operating systems tested on: Linux

### Notes

Originally reported here: https://discuss.python.org/t/python-3-12-3-and-3-13-0a6-released/50601/2

<!-- gh-linked-prs -->
### Linked PRs

* gh-117808
<!-- /gh-linked-prs -->
784e076a10e828f383282df8a4b993a1b821f547
7d0be7aea569b3bc9a3936501d7d32af87c70e73
python/cpython
python__cpython-117740
# Stale GIL glossary definition

# Documentation

The definition of the GIL in the glossary is stale as a result of https://github.com/python/cpython/pull/116338 in Python 3.13. Specifically:

> Past efforts to create a “free-threaded” interpreter (one which locks shared data at a much finer granularity) have not been successful because performance suffered in the common single-processor case. It is believed that overcoming this performance issue would make the implementation much more complicated and therefore costlier to maintain.

<!-- gh-linked-prs -->
### Linked PRs

* gh-117740
<!-- /gh-linked-prs -->
a97650912e0d17b15fea70dd114577630635d326
4ad8f090cce03c24fd4279ec8198a099b2d0cf97
python/cpython
python__cpython-117728
# Speed up `pathlib.Path.iterdir()` by using `os.scandir()`

We should be able to call `os.scandir()` from `pathlib.Path.iterdir()` and construct results based on the `os.DirEntry.path` string.

Currently we call `os.listdir()` and `_make_child_relpath()`, which returns a fully parsed/normalized string; particularly, it sets `_str`, `_drv`, `_root` and `_tail_cached`. It's probably not worth the expense of setting `_drv`, `_root` and `_tail_cached` - they're only useful when paths are subsequently deconstructed with `PurePath` methods, which isn't particularly common. It _is_ worth setting `_str`, and happily `os.DirEntry.path` provides a string that's very nearly normalized to pathlib's standards.

Also discussed here: https://discuss.python.org/t/is-there-a-pathlib-equivalent-of-os-scandir/46626/21

<!-- gh-linked-prs -->
### Linked PRs

* gh-117728
<!-- /gh-linked-prs -->
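The idea can be sketched in pure Python (hypothetical helper name; not the actual patch): build each child directly from `os.DirEntry.path` instead of joining `os.listdir()` names and re-parsing.

```python
import os
import pathlib
import tempfile

def iterdir_via_scandir(path):
    # Yield children built straight from scandir entry paths,
    # skipping the full parse/normalization that listdir + join needs.
    with os.scandir(path) as it:
        for entry in it:
            yield pathlib.Path(entry.path)

with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "a.txt"), "w").close()
    os.mkdir(os.path.join(d, "sub"))
    names = sorted(p.name for p in iterdir_via_scandir(d))
assert names == ["a.txt", "sub"]
```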
30f0643e36d2c9a5849c76ca0b27b748448d0567
0eb52f5f266d9e0a662f28a4d2dfef8c746cf96e
python/cpython
python__cpython-117723
# 3.13.0a6 breaks asyncio.Stream.readuntil with bytearray separator

# Bug report

### Bug description:

As discussed in #81322, the change I made in #16429 breaks asyncio.Stream.readuntil when used with iterable buffer-object types other than `bytes` (such as `bytearray`) because they're incorrectly interpreted as an iterable of separators.

I've got a patch ready; I'm just filing this bug to be able to reference it.

### CPython versions tested on: 3.13, CPython main branch

### Operating systems tested on: Linux

<!-- gh-linked-prs -->
### Linked PRs

* gh-117723
<!-- /gh-linked-prs -->
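One way to see the ambiguity: a `bytearray` is itself iterable, so a separator-normalization step has to special-case bytes-like objects explicitly. A hedged sketch of such a check (illustrative; not the actual asyncio fix):

```python
def normalize_separators(separator):
    # Bytes-like objects are single separators, even though they are
    # iterable; only genuine containers mean "multiple separators".
    if isinstance(separator, (bytes, bytearray, memoryview)):
        return (bytes(separator),)
    return tuple(bytes(s) for s in separator)

assert normalize_separators(bytearray(b"\n")) == (b"\n",)
assert normalize_separators(b"\r\n") == (b"\r\n",)
assert normalize_separators((b"\r\n", bytearray(b"\n"))) == (b"\r\n", b"\n")
```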
01a51f949475f1590eb5899f3002304060501ab2
898f6de63fd5285006ee0f4993aeb8ed3e8f97f9
python/cpython
python__cpython-125110
# `_thread.lock.release()` is not thread-safe in free-threaded builds

# Bug report

### Bug description:

The implementation of `_thread.lock.release()` manipulates the `locked` field in a thread-unsafe way (this may be called by a thread that does not hold the lock) in free-threaded builds:

https://github.com/python/cpython/blob/630df37116b1c5b381984c547ef9d23792ceb464/Modules/_threadmodule.c#L813-L825

We're choosing to punt on this for 3.13 since this should only be problematic for contended unlocks. We can revisit this for 3.13 if it turns out to be an issue in practice.

Post 3.13 we would like to change the underlying lock to be a `PyMutex` and replace the implementation with something like:

```
static PyObject *
lock_PyThread_release_lock(lockobject *self, PyObject *Py_UNUSED(ignored))
{
    /* Sanity check: the lock must be locked */
    if (_PyMutex_TryUnlock(&self->mutex) < 0) {
        PyErr_SetString(ThreadError, "release unlocked lock");
        return NULL;
    }
    Py_RETURN_NONE;
}
```

### CPython versions tested on: 3.13

### Operating systems tested on: Linux

<!-- gh-linked-prs -->
### Linked PRs

* gh-125110
* gh-125116
<!-- /gh-linked-prs -->
fca552993da32044165223eec2297b6aaaac60ad
c203955f3b433e06118d00a2fe7215546a0b7fe6
python/cpython
python__cpython-117906
# closing async_generator_athrow on an async generator that suppresses GeneratorExit does not raise RuntimeError

# Bug report

### Bug description:

when running `agen.aclose().close()` it should throw GeneratorExit into the coroutine and raise RuntimeError from `.close()`:

```python
import types

@types.coroutine
def _async_yield(v):
    return (yield v)

async def agenfn():
    try:
        yield 1
    finally:
        try:
            await _async_yield(2)
        except GeneratorExit:
            print("generator exit")
        await _async_yield(3)

agen = agenfn()
try:
    anext(agen).send(None)
except StopIteration as e:
    print(e.value)

try:
    agen.aclose().close()
except RuntimeError:
    print("good")
else:
    print("bad")
```

prints:

```
1
bad
Exception ignored in: <async_generator object agenfn at 0x7e0c43b09620>
RuntimeError: async generator ignored GeneratorExit
```

### CPython versions tested on: 3.11, 3.12, 3.13, CPython main branch

### Operating systems tested on: Linux

<!-- gh-linked-prs -->
### Linked PRs

* gh-117906
* gh-118663
<!-- /gh-linked-prs -->
e5c699280deac076cddfef37c8af917a550f6ac3
1ff626ebda465931ff3e4922e8e87d586eb6244c
python/cpython
python__cpython-117712
# test_makefile_test_folders fails if `test/wheeldata` is empty

# Bug report

### Bug description:

In Fedora, when packaging Python, we remove the `.whl` files from `Lib/test/wheeldata/`. The directory is left empty.

`test_makefile` has started to skip empty directories as of alpha 6 (https://github.com/python/cpython/pull/117190). It doesn't include them in the list of `used` which are then checked for equality with `unique_test_dirs`. This causes the test to fail in our environment.

The test could be more robust and account for `sysconfig.get_config_var('WHEEL_PKG_DIR')`. If this is present, then it shouldn't expect that `test/wheeldata` is among the tested directories. (https://github.com/python/cpython/pull/105056)

Traceback for completeness:

```pytb
test_makefile_test_folders (test.test_tools.test_makefile.TestMakefile.test_makefile_test_folders) ... FAIL

======================================================================
FAIL: test_makefile_test_folders (test.test_tools.test_makefile.TestMakefile.test_makefile_test_folders)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/builddir/build/BUILD/Python-3.13.0a6/Lib/test/test_tools/test_makefile.py", line 72, in test_makefile_test_folders
    self.assertSetEqual(unique_test_dirs, set(used))
    ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Items in the first set but not the second:
'test/wheeldata'

----------------------------------------------------------------------
Ran 1 test in 0.033s

FAILED (failures=1)
```

### CPython versions tested on: 3.13

### Operating systems tested on: Linux

<!-- gh-linked-prs -->
### Linked PRs

* gh-117712
* gh-117748
* gh-117749
<!-- /gh-linked-prs -->
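The suggested guard could look roughly like this (hypothetical helper; the real change would live in `test_makefile`): drop the `test/wheeldata` expectation when wheels come from an external `WHEEL_PKG_DIR` rather than the source tree.

```python
import sysconfig

def expected_test_dirs(unique_test_dirs):
    # When WHEEL_PKG_DIR is configured, wheels are not shipped in the
    # source tree, so test/wheeldata may legitimately be empty/absent.
    dirs = set(unique_test_dirs)
    if sysconfig.get_config_var('WHEEL_PKG_DIR'):
        dirs.discard('test/wheeldata')
    return dirs

dirs = expected_test_dirs({'test/test_tools', 'test/wheeldata'})
assert 'test/test_tools' in dirs
```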
d4963871b03cc76fe7d9648d022d12007585beae
02f1385f8ad6bf45376377c41f106b386d3a7eb0
python/cpython
python__cpython-117695
# Improve tests for PyEval_EvalCodeEx

In the discussion for #116637 I noticed that the wrapper for `PyEval_EvalCodeEx` unnecessarily restricts the types of some arguments, preventing testing handling of wrong arguments in `PyEval_EvalCodeEx`.

The proposed PR removes these restrictions and adds many tests for `PyEval_EvalCodeEx` with different types and values. It also adds tests for custom locals, which were not tested before.

<!-- gh-linked-prs -->
### Linked PRs

* gh-117695
* gh-117884
<!-- /gh-linked-prs -->
57bdb75975ff90f95248c59fda34345f3bfff3c4
a9107fe5c0869c01b4c8db7a34c768ee8511505a
python/cpython
python__cpython-117699
# doctest fails to collect tests from a C function that has been wrapped # Bug report ### Bug description: `doctest.DocTestFinder` is now failing to collect examples from functions that are defined in C and then wrapped. It still works just fine with functions that are defined in C but that are _not_ wrapped. This bug was introduced by https://github.com/python/cpython/pull/115440. It breaks doctests for Numpy ufuncs in pytest-doctestplus (see https://github.com/scientific-python/pytest-doctestplus/pull/248). I have placed reproducer code in this Gist: https://gist.github.com/lpsinger/65e59728555dc2096af88d394e2d4a6b. To reproduce, retrieve the code and run the following commands: ``` pip install -e . python test.py ``` The script test.py fails with this error message: ``` $ python test.py Traceback (most recent call last): File "/Users/lpsinger/src/doctest-func-without-code/test.py", line 14, in <module> assert len(finder.find(bar.hello)[0].examples) == 1 ^^^^^^^^^^^^^^^^^^^^^^ File "/opt/local/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/doctest.py", line 942, in find self._find(tests, obj, name, module, source_lines, globs, {}) File "/opt/local/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/doctest.py", line 1004, in _find test = self._get_test(obj, name, module, globs, source_lines) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/local/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/doctest.py", line 1072, in _get_test lineno = self._find_lineno(obj, source_lines) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/local/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/doctest.py", line 1121, in _find_lineno obj = inspect.unwrap(obj).__code__ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'builtin_function_or_method' object has no attribute '__code__'. Did you mean: '__call__'? 
``` ### CPython versions tested on: 3.9 ### Operating systems tested on: macOS <!-- gh-linked-prs --> ### Linked PRs * gh-117699 * gh-117708 <!-- /gh-linked-prs -->
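The failure mode can be reproduced without building the C extension from the Gist: any `functools.wraps`-style wrapper around a builtin gives `inspect.unwrap` a `builtin_function_or_method` with no `__code__`, which is exactly what `doctest._find_lineno` trips over:

```python
import functools
import inspect

# Wrap a C builtin the way decorators commonly do:
@functools.wraps(len)
def wrapper(*args, **kwargs):
    return len(*args, **kwargs)

# doctest._find_lineno does essentially `inspect.unwrap(obj).__code__`;
# unwrap follows __wrapped__ back to the C builtin, which then fails:
obj = inspect.unwrap(wrapper)
print(type(obj).__name__)        # builtin_function_or_method
print(hasattr(obj, '__code__'))  # False
```

Before #115440, `_find_lineno` only followed wrappers for objects that already looked like Python functions, which is why unwrapped builtins still work.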
4bb7d121bc0a3fd00a3c72cd915b5dd8fac5616e
f90ff0367271ea474b4ce3c8e2643cb51d188c18
python/cpython
python__cpython-117872
# `tarfile` deprecation warning for PEP-706 should set a stacklevel # Bug report ### Bug description: I just saw the following deprecation warning in the GitHub Actions log for a project I contribute to: ``` /opt/hostedtoolcache/Python/3.12.2/x64/lib/python3.12/tarfile.py:2221: DeprecationWarning: Python 3.14 will, by default, filter extracted tar archives and reject files or modify their metadata. Use the filter argument to control this behavior. warnings.warn( ``` Unfortunately, the warning doesn't tell me anything about which line in the project is triggering the warning. The warning points to the line in `tarfile.py` inside the function that has the deprecation warning, rather than the line in my project that's calling the `tarfile` function in the deprecated way. We should set a stacklevel here to rectify this (I think stacklevel should be set to `3`?): https://github.com/python/cpython/blob/d5f1139c79525b4e7e4e8ad8c3e5fb831bbc3f28/Lib/tarfile.py#L2245-L2250 Cc. @encukou as the author and implementer of PEP-706 ### CPython versions tested on: 3.12 ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-117872 * gh-117930 <!-- /gh-linked-prs -->
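To illustrate the effect of `stacklevel` with stand-in functions (not the real tarfile internals): with the default `stacklevel=1` the warning is attributed to the `warnings.warn()` line inside the library, while `stacklevel=3` walks two frames up and lands on the user's call site:

```python
import warnings

def _warn(stacklevel):
    # Frame 1: a library-internal helper that calls warnings.warn().
    warnings.warn("use the filter argument", DeprecationWarning,
                  stacklevel=stacklevel)

def extractall(stacklevel):
    # Frame 2: the public API, standing in for TarFile.extractall().
    _warn(stacklevel)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    extractall(1)  # attributed to the warnings.warn() line itself
    extractall(3)  # attributed to this line -- the user's code
print(caught[0].lineno != caught[1].lineno)  # True
```

The right `stacklevel` value depends on how many internal frames sit between the public API and the `warnings.warn()` call, which is why the issue hedges on whether `3` is correct for tarfile.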
cff0a2db00b6379f60fe273a9782f71773d0a4cb
c520bf9bdf77d43c3d5d95bd08e856759a2abc86
python/cpython
python__cpython-117720
# `ThreadPoolExecutorTest.test_no_stale_references` may hang with GIL disabled In https://github.com/python/cpython/pull/114824, I modified `test_no_stale_references` so that it passes in the `--disable-gil` build. Unfortunately, that change was not sufficient and the test may still hang once the GIL is actually disabled. ### Relevant code: https://github.com/python/cpython/blob/a25c02eaf01abc7ca79efdbcda986b9cc2787b6c/Lib/test/test_concurrent_futures/executor.py#L84-L87 https://github.com/python/cpython/blob/a25c02eaf01abc7ca79efdbcda986b9cc2787b6c/Lib/test/test_concurrent_futures/executor.py#L103 The problem is due to the combination of two issues: * Due to biased reference counting, the destructor for `my_object` is usually called on the main thread asynchronously (by the eval breaker logic) * The destructor may be called somewhere in the implementation of `my_object_collected.wait()`. The `my_object_collected.wait()` implementation holds some of the same locks that `my_object_collected.set()` also needs. This can lead to deadlock if the timing is unlucky: the `my_object_collected.set()` call from the weakref callback tries to acquire locks already held by the current thread and deadlocks. <!-- gh-linked-prs --> ### Linked PRs * gh-117720 <!-- /gh-linked-prs -->
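The pattern under test, reduced to its essentials (no executor): a `weakref` callback sets an `Event` that other code waits on. With the GIL, the callback runs synchronously at `del`; under biased reference counting the destructor — and therefore `set()` — may instead run asynchronously on the creating thread, possibly while that thread is already inside `wait()` holding the event's locks:

```python
import threading
import weakref

class MyObject:
    pass

obj = MyObject()
collected = threading.Event()
# Keep the weakref itself alive; its callback fires when obj is collected
# (mirrors the setup in test_no_stale_references).
ref = weakref.ref(obj, lambda r: collected.set())

del obj                            # last strong reference goes away
print(collected.wait(timeout=5))   # True once the callback has run
```

In the free-threaded build the `del` and the callback can land on different threads, which is what turns this innocuous-looking pattern into a potential deadlock.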
520cf2170ea08730e142d591e311b7ab8a6afe63
106e9ddc435372f3977432d76d0b1cb46ac72c5f
python/cpython
python__cpython-117690
# Improve the performance of ntpath.expanduser() # Feature or enhancement ### Proposal: In `ntpath.expanduser()`, `_get_bothseps()` is called in every loop iteration. The separators should be computed once before the loop instead: ```diff if isinstance(path, bytes): + seps = b'\\/' tilde = b'~' else: + seps = '\\/' tilde = '~' if not path.startswith(tilde): return path i, n = 1, len(path) -while i < n and path[i] not in _get_bothseps(path): +while i < n and path[i] not in seps: i += 1 ``` ### Has this already been discussed elsewhere? This is a minor feature, which does not need previous discussion elsewhere ### Links to previous discussion of this feature: - #117634 <!-- gh-linked-prs --> ### Linked PRs * gh-117690 <!-- /gh-linked-prs -->
f90ff0367271ea474b4ce3c8e2643cb51d188c18
0d42ac9474f857633d00b414c0715f4efa73f1ca
python/cpython
python__cpython-117685
# `test_code.test_free_different_thread` is flaky when the GIL is disabled This test case checks that freeing a code object on a different thread than the one where the co_extra was set is safe. However, the test makes some assumptions about when destructors are called that aren't always true in the free-threaded build: https://github.com/python/cpython/blob/fa58e75a8605146a89ef72b58b4529669ac48366/Lib/test/test_code.py#L855-L875 In particular, the assertion `self.test.assertEqual(LAST_FREED, 500)` in the `ThreadTest` occasionally fails because `LAST_FREED` is still `None`. The underlying issue is that biased reference counting can delay the calling of the code object's destructor. Normally, the `gc_collect()` calls are sufficient to ensure that the code object is collected. They sort of are -- the code object is being freed -- but it happens concurrently in the main thread and may not be finished by the time `ThreadTest` calls the `self.test.assertEqual(LAST_FREED, 500)`. The timeline I've seen when debugging this is: 1) The main thread starts `ThreadTest` 2) `ThreadTest` deletes the final reference to `f`. The total reference count is now zero, but it's represented as `ob_ref_local=1`, `ob_ref_shared=-1`, so `TestThread` enqueues it to be merged by the main thread. 3) The main thread merges the reference count fields and starts to call the code object's destructor 4) `ThreadTest` calls `gc_collect()` and then `self.test.assertEqual(LAST_FREED, 500)`, which fails ... 5) The main thread finishes calling the code object's destructor, which sets `LAST_FREED` to 500. <!-- gh-linked-prs --> ### Linked PRs * gh-117685 <!-- /gh-linked-prs -->
df0f3a738f8bd414e0a3164ad65f71acfa83c085
acf69e09c66f8473399fabab36b81f56496528a6
python/cpython
python__cpython-117629
# Give _PyInstructionSequence a python interface and use it in compiler tests Expose ``_PyInstructionSequence`` as a PyObject, and then the compiler tests would be able to work with it directly. By giving this object functions to add a label/instruction, the logic of translating a convenient python representation to a C sequence does not need to be duplicated in the test harness, the process of constructing an instruction sequence from tests becomes the same as the one used in codegen, and re-uses the implementation of the instruction sequence data structure. <!-- gh-linked-prs --> ### Linked PRs * gh-117629 * gh-118326 <!-- /gh-linked-prs -->
c179c0e6cbb4d1e981fffd43f207f5b1aa5388e5
ae8dfd2761e4a45afe0adada0f91f371dd121bb8
python/cpython
python__cpython-117664
# [Enum] _simple_enum does not handle complex aliases # Bug report ### Bug description: If an enum's `__new__` takes multiple arguments, and only one of those arguments is the member value, `_simple_enum` fails: it treats all the arguments as a single tuple value. ### CPython versions tested on: 3.13 ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-117664 <!-- /gh-linked-prs -->
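The shape that trips `_simple_enum` works fine with a regular `Enum`. Here is an illustrative class (names are made up, following the multi-argument `__new__` pattern from the enum HOWTO): `__new__` takes two arguments, and only the first one becomes the member value:

```python
from enum import Enum

class Coordinate(bytes, Enum):
    def __new__(cls, value, label):
        # Only `value` is the member value; `label` is extra per-member data.
        obj = bytes.__new__(cls, [value])
        obj._value_ = value
        obj.label = label
        return obj
    PY = (0xF8, 'Sector 392')
    VY = (0xF9, 'Sector 98')

print(Coordinate.PY.value)   # 248
print(Coordinate.PY.label)   # Sector 392
```

With `_simple_enum`, the whole tuple `(0xF8, 'Sector 392')` would be treated as the member value instead of being unpacked into `__new__`'s arguments.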
e5521bcca916c63866f0aa1c4dfb3a315d6ada73
d5f1139c79525b4e7e4e8ad8c3e5fb831bbc3f28
python/cpython
python__cpython-117659
# Fix `check_dump_traceback_threads` in free-threaded build The [`check_dump_traceback_threads`](https://github.com/python/cpython/blob/ac45766673b181ace8fbafe36c89bde910968f0e/Lib/test/test_faulthandler.py#L534-L591) function checks the result when faulthandler dumps the traceback involving more than one thread. With the GIL, the waiting thread is always (or almost always) in the `self.stop.wait()` call, but in the free-threaded build the waiting thread might still be in the `self.running.set()` call. https://github.com/python/cpython/blob/ac45766673b181ace8fbafe36c89bde910968f0e/Lib/test/test_faulthandler.py#L560-L562 <!-- gh-linked-prs --> ### Linked PRs * gh-117659 <!-- /gh-linked-prs -->
6edde8a91c753dba03c92315b7585209931c704b
fa58e75a8605146a89ef72b58b4529669ac48366
python/cpython
python__cpython-117668
# ``test_strptime`` raises a DeprecationWarning # Bug report ### Bug description: ```pytb ./python -m test -v test_strptime == CPython 3.13.0a5+ (heads/main:ac45766673, Apr 8 2024, 23:14:32) [GCC 9.4.0] == Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.31 little-endian == Python build: debug == cwd: /home/eclips4/CLionProjects/cpython/build/test_python_worker_28987æ == CPU count: 16 == encodings: locale=UTF-8 FS=utf-8 == resources: all test resources are disabled, use -u option to unskip tests Using random seed: 3103978630 0:00:00 load avg: 29.90 Run 1 test sequentially 0:00:00 load avg: 29.90 [1/1] test_strptime test_TimeRE_recreation_locale (test.test_strptime.CacheTests.test_TimeRE_recreation_locale) ... sys:1: DeprecationWarning: Parsing dates involving a day of month without a year specified is ambiguious and fails to parse leap day. The default behavior will change in Python 3.15 to either always raise an exception or to use a different default year (TBD). To avoid trouble, add a specific year to the input & format. See https://github.com/python/cpython/issues/70647. skipped 'test needs de_DE.UTF8 locale' ``` ### CPython versions tested on: CPython main branch ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-117668 * gh-118956 <!-- /gh-linked-prs -->
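The warning being triggered comes from parsing a day-of-month format with no year, which defaults to 1900 — and since 1900 is not a leap year, Feb 29 cannot be parsed at all (the ambiguity gh-70647 describes). A behavior sketch; the DeprecationWarning itself only exists on recent CPython, so it is suppressed here to keep the example version-independent:

```python
import time
import warnings

with warnings.catch_warnings():
    # Recent CPython warns about year-less formats; older versions don't.
    warnings.simplefilter("ignore", DeprecationWarning)
    t = time.strptime("04-08", "%m-%d")      # year defaults to 1900
    print(t.tm_year, t.tm_mon, t.tm_mday)    # 1900 4 8
    try:
        time.strptime("02-29", "%m-%d")      # 1900 has no Feb 29
    except ValueError:
        print("leap day fails to parse")
```

The test failure above is just this warning escaping through `sys:1` because the test exercises `strptime` with a year-less format while rebuilding locale caches.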
abead548af0172dabba13da8bacf2da3c02d4927
cd4cfa6ed2fd5f866c7be339f1d3cf56aa4d2bad
python/cpython
python__cpython-117651
# Disable importing legacy single-phase init extensions within subinterpreters in `--disable-gil` build # Feature or enhancement When importing a single-phase init extension in a subinterpreter, Python will make a [shallow copy](https://github.com/python/cpython/blob/775912a51d6847b0e4fe415fa91f2e0b06a3c43c/Python/import.c#L1180-L1201) of the module's dictionary, which can share (non-immortal) objects between interpreters. This does not work properly in the `--disable-gil` build, and we should disable it by raising an `ImportError`, at least for now. We can investigate how to support this in the future. There are currently some unit tests for this case. Those tests pass, but that's mostly because they are simple and small changes to things like the GC will cause them to crash. The underlying problems are not directly related to the GIL, but rather because the GC and our mimalloc integration in the `--disable-gil` build assume that non-immortal objects are isolated by interpreter: - The GC assumes that all tracked objects reachable via `tp_traverse` are also reachable from the per-interpreter mimalloc heaps. Violating this assumption can cause flags to be set to an inconsistent state. - The mimalloc pool of abandoned segments is per-interpreter. If a non-immortal object outlives its creating interpreter, this can cause use-after-free problems. <!-- gh-linked-prs --> ### Linked PRs * gh-117651 * gh-117780 <!-- /gh-linked-prs -->
25f6ff5d3e92305659db62e7f7545f823f0dbd05
39d381f91e93559011587d764c1895ee30efb741
python/cpython
python__cpython-117654
# Improve performance of os.path.join by replacing map with a direct method call # Feature or enhancement ### Proposal: We can improve performance of `os.path.join` by changing ``` for b in map(os.fspath, p): ``` into ``` for w in p: b = os.fspath(w) ``` The `map` generator takes time to create and the application of the method to each element also takes some time. A quick benchmark: ``` main: 385 ns ± 11.5 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) map replaced: 328 ns ± 9.8 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) ``` The idea still needs to be tested on other platforms and with longer sequences. ### Has this already been discussed elsewhere? This is a minor feature, which does not need previous discussion elsewhere ### Links to previous discussion of this feature: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-117654 * gh-117693 * gh-117697 <!-- /gh-linked-prs -->
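The proposed rewrite, as a standalone sketch (the helper is hypothetical and only collects the converted segments — the real `ntpath`/`posixpath` code goes on to concatenate them):

```python
import os

def join_segments(path, *paths):
    """Hypothetical helper showing the direct-call loop from the
    proposal: call os.fspath() per element instead of via map()."""
    parts = [os.fspath(path)]
    for p in paths:
        parts.append(os.fspath(p))  # direct call avoids map() setup cost
    return parts

print(join_segments('a', 'b', 'c'))  # ['a', 'b', 'c']
```

The trade-off is purely constant overhead: `map()` pays for creating the iterator and for dispatching through a callable on every element, which dominates for the short argument lists typical of `join()` calls.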
99852d9e65aef11fed4bb7bd064e2218220f1ac9
19a22020676a599e1c92a24f841196645ddd9895
python/cpython
python__cpython-117646
# test_dynamic fails on wasm32 WASI 8Core 3.x buildbot Example of recent failing build: https://buildbot.python.org/all/#/builders/1344/builds/1372 build: ``` 0:00:42 load avg: 4.04 [ 50/473/1] test_dynamic worker non-zero exit code (Exit code 134) test_cannot_change_globals_or_builtins_with_eval (test.test_dynamic.RebindBuiltinsTests.test_cannot_change_globals_or_builtins_with_eval) ... ok test_cannot_change_globals_or_builtins_with_exec (test.test_dynamic.RebindBuiltinsTests.test_cannot_change_globals_or_builtins_with_exec) ... ok test_cannot_replace_builtins_dict_between_calls (test.test_dynamic.RebindBuiltinsTests.test_cannot_replace_builtins_dict_between_calls) ... ok test_cannot_replace_builtins_dict_while_active (test.test_dynamic.RebindBuiltinsTests.test_cannot_replace_builtins_dict_while_active) ... ok test_eval_gives_lambda_custom_globals (test.test_dynamic.RebindBuiltinsTests.test_eval_gives_lambda_custom_globals) ... ok test_globals_shadow_builtins (test.test_dynamic.RebindBuiltinsTests.test_globals_shadow_builtins) ... ok test_load_global_specialization_failure_keeps_oparg (test.test_dynamic.RebindBuiltinsTests.test_load_global_specialization_failure_keeps_oparg) ... Error: failed to run main module `python.wasm` Caused by: 0: failed to invoke command default 1: error while executing at wasm backtrace: 0: 0x1af790 - <unknown>!compiler_visit_expr 1: 0x1afaf9 - <unknown>!compiler_visit_expr 2: 0x1afaf9 - <unknown>!compiler_visit_expr 3: 0x1afaf9 - <unknown>!compiler_visit_expr (...) 
339: 0x1afaf9 - <unknown>!compiler_visit_expr 340: 0x1afe9e - <unknown>!compiler_visit_expr 341: 0x1a8f50 - <unknown>!compiler_codegen 342: 0x1a8873 - <unknown>!_PyAST_Compile 343: 0x20f198 - <unknown>!run_mod 344: 0x20f8b2 - <unknown>!PyRun_StringFlags 345: 0x1876d4 - <unknown>!builtin_eval 346: 0xb92f8 - <unknown>!cfunction_vectorcall_FASTCALL 347: 0x60876 - <unknown>!PyObject_Vectorcall 348: 0x1a125a - <unknown>!_PyEval_EvalFrameDefault 349: 0x1a1d21 - <unknown>!_PyEval_Vector (...) 407: 0x233859 - <unknown>!Py_BytesMain 408: 0x5967 - <unknown>!main 409: 0x3ca4e3 - <unknown>!__main_void 410: 0x5940 - <unknown>!_start note: using the `WASMTIME_BACKTRACE_DETAILS=1` environment variable may show more debugging information 2: memory fault at wasm address 0x1000005b0 in linear memory of size 0xa00000 3: wasm trap: out of bounds memory access ``` test.pythoninfo: ``` _testcapi.LONG_MAX: 2147483647 _testcapi.Py_C_RECURSION_LIMIT: 500 build.NDEBUG: ignore assertions (macro defined) build.Py_DEBUG: No (sys.gettotalrefcount() missing) sysconfig[HOSTRUNNER]: wasmtime run --wasm max-wasm-stack=8388608 --wasi preview2 --env PYTHONPATH=/$(shell realpath --relative-to /opt/buildbot/kushaldas-wasm/3.x.kushaldas-wasi.wasi.nondebug/build/build_oot/host/../.. 
/opt/buildbot/kushaldas-wasm/3.x.kushaldas-wasi.wasi.nondebug/build/build_oot/host)/$(shell cat pybuilddir.txt):/Lib --dir ../..::/ sysconfig[OPT]: -DNDEBUG -g -O3 -Wall sysconfig[PY_CFLAGS]: -fno-strict-overflow -Wsign-compare -Wunreachable-code -DNDEBUG -g -O3 -Wall sysconfig[PY_CFLAGS_NODIST]: -std=c11 -Wextra -Wno-unused-parameter -Wno-int-conversion -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I../../Include/internal -I../../Include/internal/mimalloc sysconfig[PY_CORE_LDFLAGS]: -z stack-size=524288 -Wl,--stack-first -Wl,--initial-memory=10485760 sysconfig[PY_LDFLAGS_NODIST]: -z stack-size=524288 -Wl,--stack-first -Wl,--initial-memory=10485760 sysconfig[PY_STDMODULE_CFLAGS]: -fno-strict-overflow -Wsign-compare -Wunreachable-code -DNDEBUG -g -O3 -Wall -std=c11 -Wextra -Wno-unused-parameter -Wno-int-conversion -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I../../Include/internal -I../../Include/internal/mimalloc -IObjects -IInclude -IPython -I. -I../../Include ``` Python is **built** with 512 KiB stack and **run** with 8 MiB stack and Py_C_RECURSION_LIMIT=500. <!-- gh-linked-prs --> ### Linked PRs * gh-117646 * gh-117674 <!-- /gh-linked-prs -->
ac45766673b181ace8fbafe36c89bde910968f0e
ed785c08993467461711c56eb5e6f88331062cca
python/cpython
python__cpython-117643
# PEP 737 implementation errors # Bug report For some reason gh-111696 implemented `%T#` and `%N#` instead of `%#T` and `%#N`. This not only contradicts PEP 737, but also does not match the general principles of printf-like formats, and can cause problems in the future. The tests added in gh-111696 were removed in gh-116417, so this feature is now untested. <!-- gh-linked-prs --> ### Linked PRs * gh-117643 <!-- /gh-linked-prs -->
24a2bd048115efae799b0a9c5dd9fbb7a0806978
1a6594f66166206b08f24c3ba633c85f86f99a56
python/cpython
python__cpython-117652
# Use set comprehension for `posixpath.commonpath()` # Feature or enhancement ### Proposal: We can use a set comprehension to check that absolute and relative paths are not mixed: ```diff -try: - isabs, = set(p[:1] == sep for p in paths) -except ValueError: - raise ValueError("Can't mix absolute and relative paths") from None +if len({p.startswith(sep) for p in paths}) != 1: + raise ValueError("Can't mix absolute and relative paths") ``` ```diff -prefix = sep if isabs else sep[:0] +prefix = sep if paths[0].startswith(sep) else sep[:0] ``` This is faster and more readable. ### Has this already been discussed elsewhere? This is a minor feature, which does not need previous discussion elsewhere ### Links to previous discussion of this feature: - #117610 <!-- gh-linked-prs --> ### Linked PRs * gh-117652 <!-- /gh-linked-prs -->
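A runnable sketch of the proposed check (the helper is made up; in `commonpath()` the set comprehension runs inline). A one-element set means all paths agree on absoluteness; anything else means a mix:

```python
sep = '/'

def check_mix(paths):
    """Hypothetical helper: raise on mixed absolute/relative paths,
    otherwise return whether the paths are absolute."""
    if len({p.startswith(sep) for p in paths}) != 1:
        raise ValueError("Can't mix absolute and relative paths")
    return paths[0].startswith(sep)

print(check_mix(['/a/b', '/a/c']))   # True  (all absolute)
print(check_mix(['a/b', 'a/c']))     # False (all relative)
try:
    check_mix(['/a/b', 'a/c'])
except ValueError:
    print("mixed")                   # mixed
```

Compared with the old `isabs, = set(...)` trick, this avoids abusing tuple unpacking for validation and raises the intended error directly instead of translating a `ValueError` from unpacking.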
b848b944bb4730ab4dcaeb15b0b1713c3f68ec7d
6078f2033ea15a16cf52fe8d644a95a3be72d2e3
python/cpython
python__cpython-117638
# Remove redundant type check in `os.path.join()` # Feature or enhancement ### Proposal: These type checks were introduced a long time ago in https://github.com/python/cpython/commit/5bfc03f430ab13ed84c2c30f2c87e9800b5670a4 and became redundant in https://github.com/python/cpython/commit/3f9183b5aca568867f37c38501fca63911580c66 with the addition of `os.fspath()`. They can be safely removed, slightly speeding up `os.path.join()` in the process: ```diff def join(path, *paths): path = os.fspath(path) if isinstance(path, bytes): sep = b'\\' seps = b'\\/' colon_seps = b':\\/' else: sep = '\\' seps = '\\/' colon_seps = ':\\/' try: - if not paths: - path[:0] + sep #23780: Ensure compatible data type even if p is null. ``` ```diff def join(a, *p): """Join two or more pathname components, inserting '/' as needed. If any component is an absolute path, all previous path components will be discarded. An empty last part will result in a path that ends with a separator.""" a = os.fspath(a) sep = _get_sep(a) path = a try: - if not p: - path[:0] + sep #23780: Ensure compatible data type even if p is null. ``` I also noticed we're concatenating `b` to an empty string in `posixpath.join()` in case `path` has a length of 0. Which we can fix quite easily: ```diff -if b.startswith(sep): +if b.startswith(sep) or not path: path = b -elif not path or path.endswith(sep): +elif path.endswith(sep): path += b ``` ### Has this already been discussed elsewhere? This is a minor feature, which does not need previous discussion elsewhere ### Links to previous discussion of this feature: - #117610 <!-- gh-linked-prs --> ### Linked PRs * gh-117638 <!-- /gh-linked-prs -->
9ee94d139197c0df8f4e096957576d124ad31c8e
e01831760e3c7cb9cdba78b048c8052808a3a663
python/cpython
python__cpython-117619
# Improve the `filename` argument of `pdb`'s `break` command # Feature or enhancement ### Proposal: Currently in the docs we say the `b(reak)` command can take a `filename`, but we are very vague about it. It is only mentioned that the file will be searched in `sys.path`. The actual implementation currently allows: * absolute path * relative path (including a path with `/`, so not only a "filename") * module name (only if `module.py` is in `sys.path`) As you can tell, we allow a module name like `pprint:100`, but not package.module like `multiprocessing.queue:100`; however, `multiprocessing/queue.py:100` works. This is inconsistent by itself, let alone the lack of description. I'm making a PR to clean this up a bit. This PR will improve the `lookupmodule` function to support `package.module`, and clean up both code and docs. ### Has this already been discussed elsewhere? This is a minor feature, which does not need previous discussion elsewhere ### Links to previous discussion of this feature: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-117619 <!-- /gh-linked-prs -->
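One way a dotted `package.module` target could be resolved to a breakpoint file is via the import machinery. This is a hedged sketch (the function name is made up, and I haven't checked how the actual PR implements `lookupmodule`):

```python
import importlib.util

def resolve_breakpoint_target(name):
    """Hypothetical helper: map a dotted module name such as
    'json.decoder' to its source file, or None if it has no file."""
    try:
        spec = importlib.util.find_spec(name)
    except (ImportError, ValueError):
        return None
    if spec is not None and spec.origin and spec.origin != 'built-in':
        return spec.origin
    return None

path = resolve_breakpoint_target('json.decoder')
print(path is not None and path.endswith('decoder.py'))  # True
```

Using `find_spec` would make `b multiprocessing.queues:100` behave consistently with `b multiprocessing/queues.py:100`, since both resolve through the same search path.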
d7ac427a796a3f869b813dac37b030889b56cd3b
4a5ad8469af9a6fc0ec1355eb203cc22bb4321d5
python/cpython
python__cpython-117781
# Argument Clinic: Unsafe code generation with `defining_class` and no slash # Bug report ### Bug description: When using the Argument Clinic to implement a function with `METH_METHOD` calling convention, the generated code can cause a crash if `defining_class` is used without slashing. For example, `datetime.now()` in `Modules/_datetimemodule.c` produces: ```diff /*[clinic input] @classmethod datetime.datetime.now + cls: defining_class tz: object = None Timezone object. Returns new datetime object representing current time local to tz. If no tz is specified, uses local timezone. [clinic start generated code]*/ static PyObject * datetime_datetime_now_impl(PyTypeObject *type, ... ``` ``` >python -m test test_datetime -m test_tzinfo_now Running Debug|x64 interpreter... Using random seed: 389474023 0:00:00 Run 1 test sequentially 0:00:00 [1/1] test_datetime Windows fatal exception: access violation Thread 0x00001298 (most recent call first): File "C:\cp\Lib\test\libregrtest\win_utils.py", line 47 in _update_load ``` No crash if `NUM_KEYWORDS` below is 1 in `_datetimemodule.c.h`: ```c static PyObject * datetime_datetime_now(PyTypeObject *type, PyTypeObject *cls, ... { ... #define NUM_KEYWORDS 2 ``` ### CPython versions tested on: 3.13, CPython main branch ### Operating systems tested on: Windows <!-- gh-linked-prs --> ### Linked PRs * gh-117781 * gh-117896 * gh-117939 * gh-117950 <!-- /gh-linked-prs -->
c520bf9bdf77d43c3d5d95bd08e856759a2abc86
1316692e8c7c1e1f3b6639e51804f9db5ed892ea
python/cpython
python__cpython-117608
# Speedup `os.path.relpath()` # Feature or enhancement ### Proposal: Currently both implementations of `relpath()` are a bit inefficient, so they could use some optimisations: 1. We don't need to apply `normpath()` before `abspath()`, since `abspath()` already normalises the path: ```diff -start_abs = abspath(normpath(start)) -path_abs = abspath(normpath(path)) +start_abs = abspath(start) +path_abs = abspath(path) ``` 2. We don't need to filter the segments, we just need to check if `*_rest` is empty: ```diff -start_list = [x for x in start_rest.split(sep) if x] -path_list = [x for x in path_rest.split(sep) if x] +start_list = start_rest.split(sep) if start_rest else [] +path_list = path_rest.split(sep) if path_rest else [] ``` 3. We can use `str.join()` instead of `os.path.join()`: ```diff -return join(*rel_list) +return sep.join(rel_list) ``` ### Has this already been discussed elsewhere? This is a minor feature, which does not need previous discussion elsewhere ### Links to previous discussion of this feature: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-117608 <!-- /gh-linked-prs -->
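Steps 2 and 3 can be sketched together as a small lexical helper. This is a hypothetical function operating on already-normalised remainders (it skips the `abspath()` work of step 1 entirely), not the real `posixpath.relpath()`:

```python
sep = '/'

def rel_segments(path_rest, start_rest):
    """Hypothetical sketch: split without filtering (step 2), walk past
    the common prefix, then join with str.join (step 3)."""
    start_list = start_rest.split(sep) if start_rest else []
    path_list = path_rest.split(sep) if path_rest else []
    # Walk past the common leading segments:
    i = 0
    while (i < len(start_list) and i < len(path_list)
           and start_list[i] == path_list[i]):
        i += 1
    rel_list = ['..'] * (len(start_list) - i) + path_list[i:]
    return sep.join(rel_list) if rel_list else '.'

print(rel_segments('a/b/c', 'a/d'))  # ../b/c
print(rel_segments('', ''))          # .
```

Because `abspath()` output never contains empty segments, the `if x` filter in the old list comprehensions only ever mattered for the fully-empty remainder, which the `if *_rest else []` guard now handles directly.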
a7711a2a4e5cf16b34fc284085da724a8c2c06dd
424438b11ec90110054f720bfa6ea67d644cc2ec
python/cpython
python__cpython-117670
# Terrible user experience if `test_exceptions.ExceptionTests.test_recursion_normalizing_infinite_exception` fails # Bug report ### Bug description: If `test_exceptions.ExceptionTests.test_recursion_normalizing_infinite_exception` fails, a _huge_ amount of output is printed to the terminal, which is a pretty terrible user experience. The full output is here: [exception_tb.txt](https://github.com/python/cpython/files/14898010/exception_tb.txt). To reproduce, run `FORCE_COLOR ./python.exe -m unittest -v test.test_exceptions.ExceptionTests.test_recursion_normalizing_infinite_exception`. (Note that this issue _isn't_ about the test failing when `FORCE_COLOR` is set. #117605 is about that. This issue is about the poor user experience _if_ the test fails.) ### CPython versions tested on: CPython main branch ### Operating systems tested on: macOS <!-- gh-linked-prs --> ### Linked PRs * gh-117670 * gh-117745 <!-- /gh-linked-prs -->
02f1385f8ad6bf45376377c41f106b386d3a7eb0
993c3cca16ed00a0bfe467f7f26ac4f5f6dfb24c