| column | dtype | lengths / values |
|-------------------|---------------|------------------|
| repo | stringclasses | 1 value |
| instance_id | stringlengths | 20–22 |
| problem_statement | stringlengths | 126–60.8k |
| merge_commit | stringlengths | 40 |
| base_commit | stringlengths | 40 |
python/cpython
python__cpython-134987
# `test_perf_profiler` fails under Windows Subsystem for Linux

# Bug report

The culprit is that, for some reason, calling `perf` from WSL raises a `PermissionError` rather than `FileNotFoundError` or `CalledProcessError` when `perf` is not available. Example output:

```
$ which perf # not present
$ ./python -m test test_perf_profiler -v
== CPython 3.15.0a0 (heads/main:3704171415c, May 31 2025, 12:20:03) [Clang 20.1.6 (++20250528122018+47addd4540b4-1~exp1~20250528002033.124)]
== Linux-6.6.87.1-microsoft-standard-WSL2-x86_64-with-glibc2.39 little-endian
== Python build: debug
== cwd: /home/emma/cpython/build/test_python_worker_7822æ
== CPU count: 32
== encodings: locale=UTF-8 FS=utf-8
== resources: all test resources are disabled, use -u option to unskip tests
Using random seed: 1838771651
0:00:00 load avg: 0.65 Run 1 test sequentially in a single process
0:00:00 load avg: 0.65 [1/1] test_perf_profiler
test test_perf_profiler crashed -- Traceback (most recent call last):
  File "/home/emma/cpython/Lib/test/libregrtest/single.py", line 210, in _runtest_env_changed_exc
    _load_run_test(result, runtests)
    ~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
  File "/home/emma/cpython/Lib/test/libregrtest/single.py", line 155, in _load_run_test
    test_mod = importlib.import_module(module_name)
  File "/home/emma/cpython/Lib/importlib/__init__.py", line 88, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1398, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1371, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1342, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 938, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 762, in exec_module
  File "<frozen importlib._bootstrap>", line 491, in _call_with_frames_removed
  File "/home/emma/cpython/Lib/test/test_perf_profiler.py", line 522, in <module>
    _is_perf_version_at_least(6, 6), "perf command may not work due to a perf bug"
    ~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
  File "/home/emma/cpython/Lib/test/test_perf_profiler.py", line 510, in _is_perf_version_at_least
    output = subprocess.check_output(["perf", "--version"], text=True)
  File "/home/emma/cpython/Lib/subprocess.py", line 472, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
           ~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
               **kwargs).stdout
               ^^^^^^^^^
  File "/home/emma/cpython/Lib/subprocess.py", line 554, in run
    with Popen(*popenargs, **kwargs) as process:
         ~~~~~^^^^^^^^^^^^^^^^^^^^^^
  File "/home/emma/cpython/Lib/subprocess.py", line 1038, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
    ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                        pass_fds, cwd, env,
                        ^^^^^^^^^^^^^^^^^^^
    ...<5 lines>...
                        gid, gids, uid, umask,
                        ^^^^^^^^^^^^^^^^^^^^^^
                        start_new_session, process_group)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/emma/cpython/Lib/subprocess.py", line 1970, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
PermissionError: [Errno 13] Permission denied: 'perf'
0:00:00 load avg: 0.65 [1/1/1] test_perf_profiler failed (uncaught exception)

== Tests result: FAILURE ==

1 test failed:
    test_perf_profiler

Total duration: 270 ms
Total tests: run=0
Total test files: run=1/1 failed=1
Result: FAILURE
```

<!-- gh-linked-prs -->
### Linked PRs
* gh-134987
* gh-135841
* gh-135842
<!-- /gh-linked-prs -->
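A hardened version probe can treat `PermissionError` the same as the other "perf is unavailable" cases. A minimal sketch, assuming the fix takes this shape (the helper name below is illustrative, not the actual `_is_perf_version_at_least` from the test):

```python
import subprocess

def tool_version_or_none(executable):
    """Return `<executable> --version` output, or None if the tool is unusable.

    On WSL, spawning a missing binary can raise PermissionError instead of the
    usual FileNotFoundError, so both (plus a nonzero exit status) are treated
    as "not available" rather than crashing the test module at import time.
    """
    try:
        return subprocess.check_output([executable, "--version"], text=True)
    except (FileNotFoundError, PermissionError, subprocess.CalledProcessError):
        return None
```

The key point is simply widening the `except` clause so the probe degrades to a skip instead of an uncaught exception.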
6ab842fce50a6125797bcddfc4a4b2622aa6c6a9
0d9d48959e050b66cb37a333940ebf4dc2a74e15
python/cpython
python__cpython-134979
# Deprecate support for `string` named-parameter in hash functions constructors

# Feature or enhancement

This is a follow-up to https://github.com/python/cpython/issues/134696. After c6e63d9d351f6d952000ec3bf84b3a7607989f92, it's now possible to use the following forms:

- `hashlib.new(name, data)`
- `hashlib.new(name, data=...)`
- `hashlib.new(name, string=...)`

and, taking MD5 as an example, `hashlib.md5(data)`, `hashlib.md5(data=...)` and `hashlib.md5(string=...)`.

In the [docs](https://docs.python.org/3/library/hashlib.html#hashlib.new) we only expose the following signatures:

```
hashlib.new(name, [data, ]*, usedforsecurity=True)
hashlib.md5([data, ]*, usedforsecurity=True)
```

So it would make sense to remove support for `string=...`. Ideally, I would like to make the parameter positional-only, but I don't think it's worth it: that would be too much of a breaking change.

Note that we could also deprecate *data* itself because originally PEP 247 and PEP 452 used `string`, not `data`, but this would probably be confusing, as the PEP says:

> Although the parameter is called ‘string’, hashing objects operate on 8-bit data only. Both ‘key’ and ‘string’ must be a bytes-like object (bytes, bytearray…).

I assume the docs chose 'data' to remove this ambiguity, as 'data' is more common for bytes-like objects.

cc @gpshead

<!-- gh-linked-prs -->
### Linked PRs
* gh-134979
<!-- /gh-linked-prs -->
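For context, only the positional form is both documented and stable across versions; the `data=` and `string=` keyword spellings are version-dependent (their acceptance changed in the commits above), so a sketch of the documented usage sticks to positional arguments:

```python
import hashlib

# Documented, version-stable usage: pass the initial bytes positionally.
digest = hashlib.sha256(b"abc").hexdigest()

# hashlib.new() with an algorithm name accepts the same positional form.
assert hashlib.new("sha256", b"abc").hexdigest() == digest
```

Code written this way is unaffected whether `string=` is deprecated, removed, or kept.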
ee65ebdb50005655d75aca1618d3994a7b7ed869
ac7511062bf8e16ad489b17990d99abd3b4351f5
python/cpython
python__cpython-134977
# Add notes about `s[i]` in `Common Sequence Operations`

# Documentation

| Operation | Result | Notes |
|-----------|-----------------------------|-------|
| `s[i]` | *i*th item of *s*, origin 0 | (3) |

3. If *i* or *j* is negative, the index is relative to the end of sequence *s*: `len(s) + i` or `len(s) + j` is substituted. But note that `-0` is still `0`.

We can add a note: if the sequence is empty or *i* is outside the sequence range, `IndexError` will be raised.

<!-- gh-linked-prs -->
### Linked PRs
* gh-134977
* gh-135258
* gh-135259
<!-- /gh-linked-prs -->
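The behavior the proposed note would document can be checked directly:

```python
s = ["a", "b", "c"]

assert s[0] == "a"       # origin 0
assert s[-1] == "c"      # negative index: len(s) + (-1) == 2
assert s[-0] == s[0]     # -0 is still 0

# Empty sequence, or i outside the sequence range: IndexError.
for seq, i in ([], 0), (s, 3), (s, -4):
    try:
        seq[i]
    except IndexError:
        pass
    else:
        raise AssertionError("expected IndexError")
```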
158e5162bfaa8a49178ce2c3f2455c3e03b60157
1cb716387255a7bdab5b580bcf8ac1b6fa32cc41
python/cpython
python__cpython-134971
# argparse: Unexpanded replacements in "unknown action" exception

# Bug report

### Bug description:

https://github.com/python/cpython/blob/ad39f017881e0bd8ffd809755ebf76380b928ad3/Lib/argparse.py#L1536-L1537

There is a missing `f` there, so the replacements don't happen. Noticed accidentally because it broke snakeoil's test suite.

### CPython versions tested on:
3.14, CPython main branch

### Operating systems tested on:
Linux

<!-- gh-linked-prs -->
### Linked PRs
* gh-134971
* gh-134991
<!-- /gh-linked-prs -->
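The bug class is easy to demonstrate: without the `f` prefix the braces are literal text, so the exception message never mentions the offending action. (The string below mirrors the shape of the problem, not the exact argparse wording.)

```python
action = "bogus"

without_f = "unknown action {action!r}"  # missing f prefix: braces stay literal
with_f = f"unknown action {action!r}"    # with the prefix, the name is expanded

assert without_f == "unknown action {action!r}"
assert with_f == "unknown action 'bogus'"
```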
965c48056633d3f4b41520c8cd07f0275f00fb4c
cebae977a63f32c3c03d14c040df3cea55b8f585
python/cpython
python__cpython-134955
# test_subprocess can timeout on systems with excessive max file descriptors

# Bug report

### Bug description:

On some systems, `os.sysconf("SC_OPEN_MAX")` may return a very large value (e.g. 2**30), leading to the subprocess tests timing out (or running forever).

### CPython versions tested on:
CPython main branch, 3.14, 3.13, 3.12

### Operating systems tested on:
Linux

<!-- gh-linked-prs -->
### Linked PRs
* gh-134955
* gh-134980
* gh-134981
<!-- /gh-linked-prs -->
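One way to make such tests robust (the actual fix is in the linked PRs; this is just a sketch of the clamping idea, with an illustrative function name) is to bound whatever `SC_OPEN_MAX` reports before iterating up to it:

```python
import os

def usable_fd_limit(cap=4096):
    """Bounded estimate of the fd limit for close-all-fds style loops.

    os.sysconf("SC_OPEN_MAX") can report absurdly large values (e.g. 2**30),
    so code that iterates up to the limit should clamp it to something sane.
    """
    try:
        n = os.sysconf("SC_OPEN_MAX")
    except (AttributeError, ValueError, OSError):
        n = 256  # platform without sysconf: pick a conservative default
    if n <= 0:
        n = 256  # sysconf may return -1 for "indeterminate"
    return min(n, cap)
```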
f58873e4b2b7aad8e3a08a6188c6eb08d0a3001b
5507eff19c757a908a2ff29dfe423e35595fda00
python/cpython
python__cpython-135010
# Add set_pledged_input_size to ZstdCompressor

# Feature or enhancement

### Proposal:

pyzstd's ZstdCompressor class had a method `_set_pledged_input_size`, which allowed users to set the amount of data they were going to write into a frame so it would be written into the frame header. We should support this use case in `compression.zstd`.

I don't want to add a private API that is unsafe or only for advanced users, so I want to sketch out an implementation that could be used in general and catch incorrect usage:

1. Update ZstdCompressor's struct to include two `unsigned long long` members, `current_frame_size` and `pledged_size`, both initialized to `ZSTD_CONTENTSIZE_UNKNOWN`.
2. Add `set_pledged_size`; the main difference from the pyzstd implementation is that it will update `pledged_size`.
3. Modify ZstdCompressor's `compress()` and `flush()` to track how much data is being written to the compressor, recorded in `current_frame_size`. If the mode is `FLUSH_FRAME`, then after writing, check that `current_frame_size == pledged_size`, otherwise raise a `ZstdError` to indicate the failure. Reset `pledged_size` and `current_frame_size`.

I think the one drawback of the above is that it will notify the user if something goes wrong, but if they are streaming compressed data elsewhere they could still send garbage if they use the API wrong. That's inherently not something we can really fix.

An open question I have: should we check `current_frame_size <= pledged_size` at the end of writing when the mode isn't `FLUSH_FRAME`? I think probably yes?

cc @Rogdham, I'd be interested in your thoughts.

### Has this already been discussed elsewhere?

I have already discussed this feature proposal on Discourse

### Links to previous discussion of this feature:

https://discuss.python.org/t/pep-784-adding-zstandard-to-the-standard-library/87377/143

<!-- gh-linked-prs -->
### Linked PRs
* gh-135010
* gh-135173
<!-- /gh-linked-prs -->
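The real work happens in the C extension, but the bookkeeping in steps 1–3 can be sketched in pure Python. All names below are illustrative, `ValueError` stands in for the proposed `ZstdError`, and a plain sentinel stands in for `ZSTD_CONTENTSIZE_UNKNOWN`:

```python
UNKNOWN = object()  # stands in for ZSTD_CONTENTSIZE_UNKNOWN

class PledgedSizeTracker:
    """Bookkeeping only: bytes fed into the current frame vs. the pledged size."""

    def __init__(self):
        self.pledged_size = UNKNOWN
        self.current_frame_size = 0

    def set_pledged_input_size(self, size):
        self.pledged_size = size

    def compress(self, data):
        self.current_frame_size += len(data)

    def flush_frame(self):
        """Step 3: on FLUSH_FRAME, written bytes must match the pledge."""
        written, pledged = self.current_frame_size, self.pledged_size
        mismatch = pledged is not UNKNOWN and written != pledged
        # Reset per-frame state either way, as the proposal describes.
        self.pledged_size = UNKNOWN
        self.current_frame_size = 0
        if mismatch:
            raise ValueError(f"pledged {pledged} bytes but wrote {written}")
```

This makes the failure mode concrete: the mismatch is only detectable at frame boundaries, which is exactly why garbage already streamed elsewhere cannot be recalled.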
4b44b3409ac026e7f13054a3daa18ab7ee14d85c
3d396ab7591d544ac8bc1fb49615b4e867ca1c83
python/cpython
python__cpython-134924
# Update MSVC PGO options

The linker options we currently use for PGO on Windows are deprecated and should be updated to [`/GENPROFILE`](https://learn.microsoft.com/en-us/cpp/build/reference/genprofile-fastgenprofile-generate-profiling-instrumented-build?view=msvc-170) and [`/USEPROFILE`](https://learn.microsoft.com/en-us/cpp/build/reference/useprofile?view=msvc-170). No idea whether this will affect the optimization results, but it's general goodness anyway to make sure we don't suddenly stop working if/when the compiler drops the old options entirely.

<!-- gh-linked-prs -->
### Linked PRs
* gh-134924
* gh-134950
* gh-134951
<!-- /gh-linked-prs -->
8865b4f95b32097099d252111669b88ec7c1eb7f
310c8cd5e5dcb0fb9509e08c0d5cf32075416878
python/cpython
python__cpython-134919
# Fix and improve doctest's documentation

When working on #108885 I found several issues in the `doctest` documentation:

* `doctest.failureException` does not exist. The referred exception is `unittest.TestCase.failureException`.
* Several copy-paste and markup errors.
* The example for float formatting is outdated since Python 3.0: `repr()` and `str()` for floats now have the same precision and do not depend on the platform C library.

<!-- gh-linked-prs -->
### Linked PRs
* gh-134919
* gh-134966
* gh-134967
<!-- /gh-linked-prs -->
3c66e5976669a599adfb260514c03815b1a9e4e9
68784fed78aa297f0de0d038742495709185bef5
python/cpython
python__cpython-134910
# Crash when calling `textiowrapper_iternext` and writing to a text file simultaneously in ft build

Running this script under a free-threading build:

```python
import os
import tempfile
import threading

N = 2
COUNT = 100

def writer(file, barrier):
    barrier.wait()
    for _ in range(COUNT):
        file.write("x")

def reader(file, stopping):
    while not stopping.is_set():
        for line in file:
            assert line == ""

stopping = threading.Event()
with tempfile.NamedTemporaryFile("w+") as f:
    reader = threading.Thread(target=reader, args=(f, stopping))
    reader.start()

    barrier = threading.Barrier(N)
    writers = [threading.Thread(target=writer, args=(f, barrier)) for _ in range(N)]
    for t in writers:
        t.start()
    for t in writers:
        t.join()

    stopping.set()
    reader.join()

    f.flush()
    assert os.stat(f.name).st_size == COUNT * N
```

...results in a crash:

```
python: ./Modules/_io/textio.c:1751: _io_TextIOWrapper_write_impl: Assertion `self->pending_bytes_count == 0' failed.
Aborted (core dumped)
```

The `textiowrapper_iternext` method is not protected by a critical section and calls `_textiowrapper_readline`, which calls `_textiowrapper_writeflush`, which relies on the GIL or a critical section to synchronise access to its internal data. In a free-threading build this means iterating over lines in a text file is not thread-safe and can crash if it races with writes or other operations.

It looks like all other entry points that could call `_textiowrapper_writeflush` are protected by the critical section, so the easiest fix is probably to likewise protect `textiowrapper_iternext`.

### CPython versions tested on:
CPython main branch

### Operating systems tested on:
Linux

### Output from running 'python -VV' on the command line:
_No response_

<!-- gh-linked-prs -->
### Linked PRs
* gh-134910
* gh-135039
* gh-135040
<!-- /gh-linked-prs -->
44fb7c361cb24dcf9989a7a1cfee4f6aad5c81aa
055827528fa50c9a7707792f5fe264c4e20c07e9
python/cpython
python__cpython-134907
# Document zstd flag CompressionParameter.content_size_flag

# Documentation

The initial documentation for `compression.zstd` did not document `CompressionParameter.content_size_flag`, as I wanted to audit which scenarios it would take effect under. This issue is to track adding the docs for the parameter.

<!-- gh-linked-prs -->
### Linked PRs
* gh-134907
* gh-134915
<!-- /gh-linked-prs -->
5f60d0fcccbf6676f5bc924f05452bd5321446f0
381020d41fb1f8b33421f01c609ba0d0edb99764
python/cpython
python__cpython-134892
# Add `PyUnstable_Unicode_GET_CACHED_HASH`

# Feature or enhancement

### Proposal:

I'd like to add a quick way to maybe get a `str` hash, so that advanced users don't need to reach into the undocumented-but-public `PyASCIIObject->hash`.

```c
Py_hash_t PyUnstable_Unicode_GET_CACHED_HASH(PyObject *str)
```

If the hash of *str*, as returned by `PyObject_Hash`, has been cached and is immediately available, return it. Otherwise, return ``-1`` *without* setting an exception. If *str* is not a string (that is, if `PyUnicode_Check(obj)` is false), the behavior is undefined. This function never fails with an exception.

Note that there are no guarantees on when an object's hash is cached, and the (non-)existence of a cached hash does not imply that the string has any other properties.

### Has this already been discussed elsewhere?

I have already discussed this feature proposal on Discourse

### Links to previous discussion of this feature:

https://discuss.python.org/t/82543

<!-- gh-linked-prs -->
### Linked PRs
* gh-134892
<!-- /gh-linked-prs -->
e413e2671916ed8f4513af92830f4fb2bc59b1d2
343182853f19a42c0ba8980d3104076a8c7bcfe7
python/cpython
python__cpython-134958
# Python 3.14+: `python: Objects/unicodeobject.c:10387: _PyUnicode_JoinArray: Assertion `res_data == PyUnicode_1BYTE_DATA(res) + kind * PyUnicode_GET_LENGTH(res)' failed.` in sqlglot

# Crash report

### What happened?

The pure Python code in the `sqlglot` package manages to trigger an assertion in CPython:

```
python: Objects/unicodeobject.c:10387: _PyUnicode_JoinArray: Assertion `res_data == PyUnicode_1BYTE_DATA(res) + kind * PyUnicode_GET_LENGTH(res)' failed.
Aborted (core dumped)
```

Unfortunately, this is as far as I've been able to reduce it:

```python
from sqlglot import parse_one
parse_one("SELECT * FROM taxi ORDER BY 1 OFFSET 0 ROWS FETCH NEXT 3 ROWS ONLY").sql()
```

I can reproduce with 3.14.0b2 and 4109a9c6b33faa0032ffc95d96cd0db482af3ce2, built with `--with-assertions` (but for some reason it doesn't happen if I build `--with-pydebug`), against sqlglot 26.23.0, i.e.:

```
CFLAGS='-O0 -g' ./configure -C --with-assertions
make -j$(nproc)
./python -m venv .venv
.venv/bin/pip install sqlglot
.venv/bin/python -c 'from sqlglot import parse_one; parse_one("SELECT * FROM taxi ORDER BY 1 OFFSET 0 ROWS FETCH NEXT 3 ROWS ONLY").sql()'
```

```
(gdb) bt
#0  0x00007feb24e84dbc in ?? () from /usr/lib64/libc.so.6
#1  0x00007feb24e2c8e6 in raise () from /usr/lib64/libc.so.6
#2  0x00007feb24e1434b in abort () from /usr/lib64/libc.so.6
#3  0x00007feb24e142b5 in ?? () from /usr/lib64/libc.so.6
#4  0x000055e03a36ecf3 in _PyUnicode_JoinArray (separator=0x55e03a899bc8 <_PyRuntime+35496>, items=0x7ffeef0e9278, seqlen=4) at Objects/unicodeobject.c:10387
#5  0x000055e03a423f4a in _PyEval_EvalFrameDefault (tstate=0x55e03a8de070 <_PyRuntime+315216>, frame=0x7feb250e9690, throwflag=0) at Python/generated_cases.c.h:1414
#6  0x000055e03a41a1f2 in _PyEval_EvalFrame (tstate=0x55e03a8de070 <_PyRuntime+315216>, frame=0x7feb250e9120, throwflag=0) at ./Include/internal/pycore_ceval.h:119
#7  0x000055e03a45c7d0 in _PyEval_Vector (tstate=0x55e03a8de070 <_PyRuntime+315216>, func=0x7feb237f54e0, locals=0x0, args=0x7ffeef0e95f0, argcount=2, kwnames=0x0) at Python/ceval.c:1975
#8  0x000055e03a21d64d in _PyFunction_Vectorcall (func=0x7feb237f54e0, stack=0x7ffeef0e95f0, nargsf=2, kwnames=0x0) at Objects/call.c:413
#9  0x000055e03a2213da in _PyObject_VectorcallTstate (tstate=0x55e03a8de070 <_PyRuntime+315216>, callable=0x7feb237f54e0, args=0x7ffeef0e95f0, nargsf=2, kwnames=0x0) at ./Include/internal/pycore_call.h:169
#10 0x000055e03a221f4e in method_vectorcall (method=0x7feb237a7c80, args=0x7feb240d5220, nargsf=1, kwnames=0x0) at Objects/classobject.c:94
#11 0x000055e03a21cfc4 in _PyVectorcall_Call (tstate=0x55e03a8de070 <_PyRuntime+315216>, func=0x55e03a221c5a <method_vectorcall>, callable=0x7feb237a7c80, tuple=0x7feb240d5200, kwargs=0x7feb23810ac0) at Objects/call.c:273
#12 0x000055e03a21d36b in _PyObject_Call (tstate=0x55e03a8de070 <_PyRuntime+315216>, callable=0x7feb237a7c80, args=0x7feb240d5200, kwargs=0x7feb23810ac0) at Objects/call.c:348
#13 0x000055e03a21d446 in PyObject_Call (callable=0x7feb237a7c80, args=0x7feb240d5200, kwargs=0x7feb23810ac0) at Objects/call.c:373
#14 0x000055e03a42a262 in _PyEval_EvalFrameDefault (tstate=0x55e03a8de070 <_PyRuntime+315216>, frame=0x7feb250e9088, throwflag=0) at Python/generated_cases.c.h:2654
#15 0x000055e03a41a1f2 in _PyEval_EvalFrame (tstate=0x55e03a8de070 <_PyRuntime+315216>, frame=0x7feb250e9020, throwflag=0) at ./Include/internal/pycore_ceval.h:119
#16 0x000055e03a45c7d0 in _PyEval_Vector (tstate=0x55e03a8de070 <_PyRuntime+315216>, func=0x7feb240e73d0, locals=0x7feb240f0300, args=0x0, argcount=0, kwnames=0x0) at Python/ceval.c:1975
#17 0x000055e03a41d431 in PyEval_EvalCode (co=0x7feb24112780, globals=0x7feb240f0300, locals=0x7feb240f0300) at Python/ceval.c:866
#18 0x000055e03a51b066 in run_eval_code_obj (tstate=0x55e03a8de070 <_PyRuntime+315216>, co=0x7feb24112780, globals=0x7feb240f0300, locals=0x7feb240f0300) at Python/pythonrun.c:1365
#19 0x000055e03a51b5dc in run_mod (mod=0x55e05c7c3728, filename=0x7feb240f0370, globals=0x7feb240f0300, locals=0x7feb240f0300, flags=0x7ffeef0ed160, arena=0x7feb24d07cb0, interactive_src=0x7feb241259d0, generate_new_source=0) at Python/pythonrun.c:1436
#20 0x000055e03a51ac55 in _PyRun_StringFlagsWithName (str=0x7feb24125a90 "from sqlglot import parse_one; parse_one(\"SELECT * FROM taxi ORDER BY 1 OFFSET 0 ROWS FETCH NEXT 3 ROWS ONLY\").sql()\n", name=0x7feb240f0370, start=257, globals=0x7feb240f0300, locals=0x7feb240f0300, flags=0x7ffeef0ed160, generate_new_source=0) at Python/pythonrun.c:1259
#21 0x000055e03a518cf4 in _PyRun_SimpleStringFlagsWithName (command=0x7feb24125a90 "from sqlglot import parse_one; parse_one(\"SELECT * FROM taxi ORDER BY 1 OFFSET 0 ROWS FETCH NEXT 3 ROWS ONLY\").sql()\n", name=0x55e03a6ce96e "<string>", flags=0x7ffeef0ed160) at Python/pythonrun.c:578
#22 0x000055e03a55dec7 in pymain_run_command (command=0x55e05c6afa60 L"from sqlglot import parse_one; parse_one(\"SELECT * FROM taxi ORDER BY 1 OFFSET 0 ROWS FETCH NEXT 3 ROWS ONLY\").sql()\n") at Modules/main.c:261
#23 0x000055e03a55f2ad in pymain_run_python (exitcode=0x7ffeef0ed254) at Modules/main.c:682
#24 0x000055e03a55f4ae in Py_RunMain () at Modules/main.c:772
#25 0x000055e03a55f569 in pymain_main (args=0x7ffeef0ed2d0) at Modules/main.c:802
#26 0x000055e03a55f631 in Py_BytesMain (argc=3, argv=0x7ffeef0ed438) at Modules/main.c:826
#27 0x000055e03a1849bd in main (argc=3, argv=0x7ffeef0ed438) at ./Programs/python.c:15
(gdb) up 4
#4  0x000055e03a36ecf3 in _PyUnicode_JoinArray (separator=0x55e03a899bc8 <_PyRuntime+35496>, items=0x7ffeef0e9278, seqlen=4) at Objects/unicodeobject.c:10387
10387           assert(res_data == PyUnicode_1BYTE_DATA(res)
(gdb) p res_data
$1 = (unsigned char *) 0x7feb2381b97c "\340U"
(gdb) p * (PyASCIIObject*) res
$5 = {ob_base = {{ob_refcnt_full = 1, {ob_refcnt = 1, ob_overflow = 0, ob_flags = 0}}, ob_type = 0x55e03a872040 <PyUnicode_Type>}, length = 23, hash = -1, state = {interned = 0, kind = 1, compact = 1, ascii = 1, statically_allocated = 0}}
(gdb) p * (PyUnicodeObject*) res
$6 = {_base = {_base = {ob_base = {{ob_refcnt_full = 1, {ob_refcnt = 1, ob_overflow = 0, ob_flags = 0}}, ob_type = 0x55e03a872040 <PyUnicode_Type>}, length = 23, hash = -1, state = {interned = 0, kind = 1, compact = 1, ascii = 1, statically_allocated = 0}}, utf8_length = 5629578988226954784, utf8 = 0x4546203320545845 <error: Cannot access memory at address 0x4546203320545845>}, data = {any = 0x5458454e20484354, latin1 = 0x5458454e20484354 <error: Cannot access memory at address 0x5458454e20484354>, ucs2 = 0x5458454e20484354, ucs4 = 0x5458454e20484354}}
(gdb) p * (PyCompactUnicodeObject*) res
$7 = {_base = {ob_base = {{ob_refcnt_full = 1, {ob_refcnt = 1, ob_overflow = 0, ob_flags = 0}}, ob_type = 0x55e03a872040 <PyUnicode_Type>}, length = 23, hash = -1, state = {interned = 0, kind = 1, compact = 1, ascii = 1, statically_allocated = 0}}, utf8_length = 5629578988226954784, utf8 = 0x4546203320545845 <error: Cannot access memory at address 0x4546203320545845>}
```

(that's just my guesswork of what to print)

### CPython versions tested on:
3.14, CPython main branch

### Operating systems tested on:
Linux

### Output from running 'python -VV' on the command line:
Python 3.15.0a0 (heads/main:51910dc5620, May 29 2025, 16:12:29) [GCC 14.3.0]

<!-- gh-linked-prs -->
### Linked PRs
* gh-134958
* gh-135187
<!-- /gh-linked-prs -->
6b77af257c25d31f1f137e477cb23e63692ddf29
e598eecf4c97509acef517e94053e45db51636fb
python/cpython
python__cpython-134886
# zstd should use Py_XSETREF

# Bug report

### Bug description:

There is a theoretical crash path in the new zstd module that can be avoided with `Py_XSETREF`. PR incoming.

### CPython versions tested on:
CPython main branch

### Operating systems tested on:
_No response_

<!-- gh-linked-prs -->
### Linked PRs
* gh-134886
* gh-134922
<!-- /gh-linked-prs -->
45c6c48afc13f9897010e32171a3e02d0624258c
b367e27af9b52528e395f95b277ec7b69e98e287
python/cpython
python__cpython-134897
# "Virtual" iterators cause odd behavior with sys.monitoring

# Bug report

### Bug description:

Starting with CPython commit f6f4e8a6622d556641799b02aed7ac018d878cdc, the coverage.py test suite fails when using sys.monitoring. A loop seems to execute one time too many:

```
______________________ MatchCaseTest.test_match_case_with_wildcard ______________________

self = <tests.test_arcs.MatchCaseTest object at 0x11014c3e0>

    def test_match_case_with_wildcard(self) -> None:
        self.check_coverage("""\
            for command in ["huh", "go home", "go n"]:
                match command.split():
                    case ["go", direction] if direction in "nesw":
                        match = f"go: {direction}"
                    case ["go", _]:
                        match = "no go"
                    case x:
                        match = f"default: {x}"
                print(match)
            """,
            branchz="12 1-1 34 35 56 57",
            branchz_missing="",
        )
>       assert self.stdout() == "default: ['huh']\nno go\ngo: n\n"
E       assert "default: ['huh']\ndefault: ['huh']\nno go\ngo: n\n" == "default: ['huh']\nno go\ngo: n\n"
E
E       + default: ['huh']
E         default: ['huh']
E         no go
E         go: n

/Users/ned/coverage/trunk/tests/test_arcs.py:1444: AssertionError
```

The `default: ['huh']` line is printed twice, but only when using sys.monitoring. Under sys.settrace, the test passes. It will take a little work to make a small reproducer if you need it.

Other tests that don't involve match/case fail as well, but all failures involve for loops.

### CPython versions tested on:
CPython main branch

### Operating systems tested on:
macOS

<!-- gh-linked-prs -->
### Linked PRs
* gh-134897
<!-- /gh-linked-prs -->
ce6a6371a23dc57ed4257eb102ebfb2827477abf
c600310663277e24607890298e6d9bf7e1d4f584
python/cpython
python__cpython-134878
# python -m pdb -p fails when CONFIG_CROSS_MEMORY_ATTACH not set in kernel config

# Bug report

### Bug description:

It's possible that a Linux kernel is built with the kernel config variable `CONFIG_CROSS_MEMORY_ATTACH` not set. (This is the case for the kernel that comes with Flatcar Container Linux and for CoreOS, the former of which is used for kubernetes nodes by the cloud provider we use.) In this case, using `python -m pdb <some python PID>` fails, and it looks like this:

```shell
root@bc6b29d0894d:/src/cpython# ./python -c $'import time;\nwhile True: time.sleep(1)' &
[1] 17106
root@bc6b29d0894d:/src/cpython# ./python -m pdb -p $!
OSError: [Errno 38] Function not implemented

The above exception was the direct cause of the following exception:

OSError: process_vm_readv failed for PID 17106 at address 0xaaaab1f8d038 (size 760, partial read 0 bytes): Function not implemented

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/src/cpython/Lib/runpy.py", line 198, in _run_module_as_main
    return _run_code(code, main_globals, None, "__main__", mod_spec)
  File "/src/cpython/Lib/runpy.py", line 88, in _run_code
    exec(code, run_globals)
    ~~~~^^^^^^^^^^^^^^^^^^^
  File "/src/cpython/Lib/pdb.py", line 3609, in <module>
    pdb.main()
    ~~~~~~~~^^
  File "/src/cpython/Lib/pdb.py", line 3540, in main
    attach(opts.pid, opts.commands)
    ~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/src/cpython/Lib/pdb.py", line 3424, in attach
    sys.remote_exec(pid, connect_script.name)
    ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Failed to read debug offsets structure from remote process
```

This is a single command to repro on such a system:

```shell
docker run -i --rm python:3.13 bash << 'EOF' # the python version here is irrelevant, the image just has the right deps
newresolv=$(sed $'/nameserver/i\\\nnameserver 1.1.1.1' /etc/resolv.conf); echo "$newresolv" > /etc/resolv.conf # resolv is wrong in my repro VM
mkdir src && cd src
git clone https://github.com/python/cpython.git --depth=10 # add --branch=3.14 to test 3.14
cd cpython
./configure --with-pydebug && make -j $(nproc)
./python -c $'import time;\nwhile True: time.sleep(1)' &
./python -m pdb -p $!
EOF
```

Further relevant version info for the above repro output:

```
root@bc6b29d0894d:/src/cpython# uname -a
Linux bc6b29d0894d 6.6.88-flatcar #1 SMP PREEMPT Tue Apr 29 23:08:45 -00 2025 aarch64 GNU/Linux
root@bc6b29d0894d:/src/cpython# ./python -VV
Python 3.15.0a0 (heads/main:e64395e, May 29 2025, 09:16:22) [GCC 12.2.0]
root@bc6b29d0894d:/src/cpython# git show | head -1
commit e64395e8eb8d3a9e35e3e534e87d427ff27ab0a5
root@bc6b29d0894d:/src/cpython# zcat /proc/config.gz | grep CONFIG_CROSS_MEMORY_ATTACH
# CONFIG_CROSS_MEMORY_ATTACH is not set
```

Diagnosis: when `CONFIG_CROSS_MEMORY_ATTACH` is not set, the syscall handler for `process_vm_readv` (and also `process_vm_writev`) is not compiled into the kernel, so it returns `ENOSYS` as `errno`.

If it's ok, I'd also be happy to offer a PR to fix this. Specifically, I'd go with the approach that I discovered someone contributed to py-spy: falling back to the `/proc/[pid]/mem` file in the proc filesystem. I could open such a PR pretty soon.

### CPython versions tested on:
3.14, 3.15, CPython main branch

### Operating systems tested on:
Linux

<!-- gh-linked-prs -->
### Linked PRs
* gh-134878
* gh-135240
<!-- /gh-linked-prs -->
ac9c3431cc5916a795c42b3e2b965233ceffe6f0
24069fbca861a5904ee7718469919e84828f22e7
python/cpython
python__cpython-134994
# 3.14.0b2 fails to build with strictly C11 compliant compiler

# Bug report

### Bug description:

With a compiler that implements `atomic_load_explicit` as specified in the C11 standard (such as many older versions of clang), Python 3.14.0b2 fails to build:

```
In file included from Parser/pegen.c:3:
In file included from ./Include/internal/pycore_pystate.h:12:
In file included from ./Include/internal/pycore_tstate.h:13:
In file included from ./Include/internal/pycore_mimalloc.h:45:
./Include/internal/mimalloc/mimalloc/internal.h:640:23: error: address argument to atomic operation must be a pointer to non-const _Atomic type ('const _Atomic(mi_encoded_t) *' invalid)
  next = (mi_block_t*)mi_atomic_load_relaxed(&block->next);
                      ^                      ~~~~~~~~~~~~
./Include/internal/mimalloc/mimalloc/atomic.h:61:50: note: expanded from macro 'mi_atomic_load_relaxed'
#define mi_atomic_load_relaxed(p)       mi_atomic(load_explicit)(p,mi_memory_order(relaxed))
                                                                 ^ ~
./Include/internal/mimalloc/mimalloc/atomic.h:42:33: note: expanded from macro 'mi_atomic'
#define mi_atomic(name)                 atomic_##name
                                        ^
<scratch space>:40:1: note: expanded from here
atomic_load_explicit
^
```

Using const here wasn't allowed until C17: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2244.htm#dr_459

This is a regression since b1.

### CPython versions tested on:
3.14

### Operating systems tested on:
macOS

<!-- gh-linked-prs -->
### Linked PRs
* gh-134994
* gh-135053
* gh-135054
<!-- /gh-linked-prs -->
b525e31b7fc50e7a498f8b9b16437cb7b9656f6f
0ac9e17fb47075c9446b99da4dffe4cad993b97a
python/cpython
python__cpython-134858
# Improved error reporting for doctests run with unittest

There are several issues with error reports when doctests are wrapped in `DocTestCase`:

* When an error happens during compilation or execution of an example, the traceback contains several lines from `doctest.py`:

  ```
    File ...
      exec(compile(example.source, filename, "single",
           ~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                   compileflags, True), test.globs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  ```

* When the output differs, the traceback of the AssertionError contains another line from `doctest.py`:

  ```
  Traceback (most recent call last):
    File ...
      raise self.failureException(self.format_failure(new.getvalue()))
  ```

* Also, the AssertionError message contains a redundant newline character.

<!-- gh-linked-prs -->
### Linked PRs
* gh-134858
* gh-134903
<!-- /gh-linked-prs -->
cb8a72b301f47e76d93a7fe5b259e9a5758792e1
dafd14146f7ca18932894ea445a2f9f98f2a8b01
python/cpython
python__cpython-134849
# audit_events Sphinx extension appends to 'sources' list infinitely

# Bug report

My Sphinx builds are taking a long time after a bunch of runs. With @AA-Turner we tracked it down to the AuditEvents extension appending new values in each run to a list that's persisted in the environment pickle.

Adam, over to you :)

<!-- gh-linked-prs -->
### Linked PRs
* gh-134849
* gh-134853
* gh-134854
<!-- /gh-linked-prs -->
b265a7ddeb12b2040d80b471d447ce4c3ff4bb95
4635115c3f1495fa20e553937df37861fffa7054
python/cpython
python__cpython-134844
# Update the outdated list of HTTP status codes in the docs

# Documentation

In recent years, more missing HTTP status codes have been gradually added to https://github.com/python/cpython/blob/main/Lib/http/__init__.py#L54, such as https://github.com/python/cpython/commit/61ac612e78e4f2625977406fb6f366e0a644673a, https://github.com/python/cpython/issues/127089 and https://github.com/python/cpython/issues/102247. However, there is a list of status codes in the documentation that has not been updated; if we check the GitHub blame, the last update was 18 years ago!

![IMG_20250528_204649.jpg](https://github.com/user-attachments/assets/f4d42cdc-8f59-40ed-837a-1c665dc5a7c4)

https://github.com/python/cpython/blob/main/Doc/howto/urllib2.rst#L248-L317

I hope a volunteer can complete it. It's a simple issue.

<!-- gh-linked-prs -->
### Linked PRs
* gh-134844
* gh-134984
* gh-134985
<!-- /gh-linked-prs -->
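One way to keep such a list from going stale (a suggestion, not what the linked PRs necessarily do) is to derive it from `http.HTTPStatus`, which already carries every code the stdlib knows about together with its reason phrase:

```python
from http import HTTPStatus

# Map each known status code to its phrase; a docs table could be
# regenerated from this instead of being edited by hand.
table = {int(status): status.phrase for status in HTTPStatus}

assert table[200] == "OK"
assert table[404] == "Not Found"
assert 418 in table  # IM_A_TEAPOT, added in Python 3.9
```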
3704171415c1ea6ebbeb2f992758b6565f42e378
f58873e4b2b7aad8e3a08a6188c6eb08d0a3001b
python/cpython
python__cpython-134834
# Detail `del s[i:j]` in `Mutable Sequence Types`

# Documentation

In [Mutable Sequence Types](https://docs.python.org/3.15/library/stdtypes.html#mutable-sequence-types), the description of `del s[i:j]` is "same as `s[i:j] = []`". This is not clear enough. We should make it consistent with `del s[i:j:k]` ("removes the elements of `s[i:j:k]` from the list").

<!-- gh-linked-prs -->
### Linked PRs
* gh-134834
* gh-136608
* gh-136609
<!-- /gh-linked-prs -->
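The two phrasings describe the same operation, which is easy to verify:

```python
s = list(range(6))
t = list(s)

del s[1:4]    # proposed wording: removes the elements of s[1:4] from the list
t[1:4] = []   # current wording: same as s[1:4] = []

assert s == t == [0, 4, 5]
```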
609d5adc7cc241da8fe314a64ddd2c8a883ee8b7
46707d241a9eef4713450fa3ccbe490e48c8b722
python/cpython
python__cpython-134818
# Document `RotatingFileHandler.shouldRollover()` and `TimedRotatingFileHandler.shouldRollover()` # Documentation The [RotatingFileHandler](https://docs.python.org/3/library/logging.handlers.html#timedrotatingfilehandler) and [TimedRotatingFileHandler](https://docs.python.org/3/library/logging.handlers.html#timedrotatingfilehandler) do not document the `shouldRollover` method that they both implement. <!-- gh-linked-prs --> ### Linked PRs * gh-134818 * gh-134823 * gh-134824 <!-- /gh-linked-prs -->
7be5916f6dc3db95744b5fec945327d82cce0183
d7256ae4d781932b3b43b162e8425abdb134afa6
python/cpython
python__cpython-134772
# `time_clockid_converter()` selects the wrong type for clockid_t on Cygwin. # Bug report ### Bug description: Building Python from the main branch on Cygwin fails with the following error: ```sh gcc -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O3 -Wall -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I./Include/internal/mimalloc -I. -I./Include -DPy_BUILD_CORE_BUILTIN -c ./Modules/timemodule.c -o Modules/timemodule.o In file included from ./Include/Python.h:19, from ./Modules/timemodule.c:3: ./Modules/timemodule.c: In function ‘time_clockid_converter’: ./Include/pymacro.h:118:13: error: static assertion failed: "sizeof(clk_id) == sizeof(*p)" 118 | static_assert((cond), #cond); \ | ^~~~~~~~~~~~~ ./Modules/timemodule.c:203:5: note: in expansion of macro ‘Py_BUILD_ASSERT’ 203 | Py_BUILD_ASSERT(sizeof(clk_id) == sizeof(*p)); | ^~~~~~~~~~~~~~~ make: *** [Makefile:3789: Modules/timemodule.o] Error 1 ``` This is because we use `int clk_id = PyLong_AsInt(obj);` when it should be a `long` on this platform. ### CPython versions tested on: CPython main branch ### Operating systems tested on: Other <!-- gh-linked-prs --> ### Linked PRs * gh-134772 <!-- /gh-linked-prs -->
d96343679fd6137c9d87d1bb120228b162ea0f8c
f49a07b531543dd8a42d90f5b1c89c0312fbf806
python/cpython
python__cpython-134769
# Build failure on main and latest Python 3.14 beta (missing mt_continue_should_break) # Bug report ### Bug description: The `3.14` branch and `main` both fail to build with `--with-assertions`: ```console $ ./configure --with-assertions $ make [...] gcc -fno-strict-overflow -Wsign-compare -g -O3 -Wall -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I./Include/internal/mimalloc -I. -I./Include -fPIC -c ./Modules/_zstd/compressor.c -o Modules/_zstd/compressor.o In file included from ./Include/Python.h:19, from ./Modules/_zstd/compressor.c:15: ./Modules/_zstd/compressor.c: In function ‘compress_mt_continue_lock_held’: ./Modules/_zstd/compressor.c:547:20: error: implicit declaration of function ‘mt_continue_should_break’ [-Wimplicit-function-declaration] 547 | assert(mt_continue_should_break(&in, &out)); | ^~~~~~~~~~~~~~~~~~~~~~~~ make: *** [Makefile:3579: Modules/_zstd/compressor.o] Error 1 ``` I'll send a patch now. ### CPython versions tested on: CPython main branch ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-134769 * gh-134916 <!-- /gh-linked-prs -->
2f2bee21118adce653ee5bc4eb31d30327465966
5f60d0fcccbf6676f5bc924f05452bd5321446f0
python/cpython
python__cpython-136071
# `UnboundLocalError` in `email.message.Message.get_payload` # Bug report ### Bug description: ```python html_body = part.get_payload(decode=True).decode() File "/usr/local/lib/python3.8/email/message.py", line 282, in get_payload return quopri.decodestring(bpayload) ``` ### CPython versions tested on: CPython main branch ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-136071 * gh-136579 * gh-136580 * gh-136751 <!-- /gh-linked-prs -->
25335d297b5248922a4c82183bcdf0c0ada8352b
5a20e7972505c6953e7c7f434f42d62cab3795f3
python/cpython
python__cpython-134747
# Change PyThread_allocate_lock() implementation to PyMutex # Feature or enhancement ### Proposal: `PyMutex` is a fast and portable lock implementation. I propose to change the `PyThread_allocate_lock()` implementation to reuse `PyMutex`. ### Has this already been discussed elsewhere? No response given ### Links to previous discussion of this feature: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-134747 <!-- /gh-linked-prs -->
ebf6d13567287d04683dab36f52cde7a3c9915e7
45c6c48afc13f9897010e32171a3e02d0624258c
python/cpython
python__cpython-134748
# Abort in `_PyEval_EvalFrameDefault` originating on calling `fcntl.ioctl` # Crash report ### What happened? There are two ways to get aborts starting from `fcntl.ioctl`: one happens in `_PyEval_EvalFrameDefault`, the other results in a `Fatal Python error`. Not sure this is a single issue, so reporting both ways. Here's a MRE for the first abort: ```python from pathlib._os import _ficlone try: _ficlone(None, None) except: _ficlone("A" * 2048, True) ``` It results in this backtrace: ``` /home/danzin/projects/main_gilful_debug/Lib/pathlib/_os.py:50: RuntimeWarning: bool is used as a file descriptor fcntl.ioctl(target_fd, fcntl.FICLONE, source_fd) python: Python/generated_cases.c.h:2269: _PyEval_EvalFrameDefault: Assertion `(res_o != NULL) ^ (_PyErr_Occurred(tstate) != NULL)' failed. Program received signal SIGABRT, Aborted. #0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140737340064384) at ./nptl/pthread_kill.c:44 #1 __pthread_kill_internal (signo=6, threadid=140737340064384) at ./nptl/pthread_kill.c:78 #2 __GI___pthread_kill (threadid=140737340064384, signo=signo@entry=6) at ./nptl/pthread_kill.c:89 #3 0x00007ffff72f7476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26 #4 0x00007ffff72dd7f3 in __GI_abort () at ./stdlib/abort.c:79 #5 0x00007ffff72dd71b in __assert_fail_base (fmt=0x7ffff7492130 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x5555561ea6e0 "(res_o != NULL) ^ (_PyErr_Occurred(tstate) != NULL)", file=0x5555561e9c80 "Python/generated_cases.c.h", line=2269, function=<optimized out>) at ./assert/assert.c:94 #6 0x00007ffff72eee96 in __GI___assert_fail (assertion=assertion@entry=0x5555561ea6e0 "(res_o != NULL) ^ (_PyErr_Occurred(tstate) != NULL)", file=file@entry=0x5555561e9c80 "Python/generated_cases.c.h", line=line@entry=2269, function=function@entry=0x5555561eeee0 <__PRETTY_FUNCTION__.84> "_PyEval_EvalFrameDefault") at ./assert/assert.c:103 #7 0x0000555555d3e31d in _PyEval_EvalFrameDefault 
(tstate=tstate@entry=0x555556681960 <_PyRuntime+331232>, frame=0x629000005298, frame@entry=0x629000005220, throwflag=throwflag@entry=0) at Python/generated_cases.c.h:2269 #8 0x0000555555d926ac in _PyEval_EvalFrame (throwflag=0, frame=0x629000005220, tstate=0x555556681960 <_PyRuntime+331232>) at ./Include/internal/pycore_ceval.h:119 #9 _PyEval_Vector (tstate=tstate@entry=0x555556681960 <_PyRuntime+331232>, func=func@entry=0x610000040560, locals=locals@entry=0x60800009e5c0, args=args@entry=0x0, argcount=argcount@entry=0, kwnames=kwnames@entry=0x0) at Python/ceval.c:1961 #10 0x0000555555d929b0 in PyEval_EvalCode (co=co@entry=0x614000036a50, globals=globals@entry=0x60800009e5c0, locals=locals@entry=0x60800009e5c0) at Python/ceval.c:853 ``` Here's a MRE for the second abort: ```python import fcntl try: fcntl.ioctl(None, fcntl.FICLONE, None) except: pass fcntl.ioctl(True, fcntl.FICLONE,"A" * 2048) ``` It results in this backtrace: ``` /mnt/c/Users/ddini/crashers/main/pathlib__os-cpu_load-assertion/source2.py:13: RuntimeWarning: bool is used as a file descriptor fcntl.ioctl(True, fcntl.FICLONE,"A" * 2048) Fatal Python error: _Py_CheckFunctionResult: a function returned NULL without setting an exception Python runtime state: initialized SystemError: <built-in function ioctl> returned NULL without setting an exception Current thread 0x00007ffff7294280 [python] (most recent call first): File "/mnt/c/Users/ddini/crashers/main/pathlib__os-cpu_load-assertion/source2.py", line 13 in <module> Program received signal SIGABRT, Aborted. 
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140737340064384) at ./nptl/pthread_kill.c:44 #1 __pthread_kill_internal (signo=6, threadid=140737340064384) at ./nptl/pthread_kill.c:78 #2 __GI___pthread_kill (threadid=140737340064384, signo=signo@entry=6) at ./nptl/pthread_kill.c:89 #3 0x00007ffff72f7476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26 #4 0x00007ffff72dd7f3 in __GI_abort () at ./stdlib/abort.c:79 #5 0x0000555555ea92d7 in fatal_error_exit (status=<optimized out>) at Python/pylifecycle.c:3150 #6 0x0000555555ebfc0e in fatal_error (fd=fd@entry=2, header=header@entry=1, prefix=prefix@entry=0x5555560fdc60 <__func__.23> "_Py_CheckFunctionResult", msg=msg@entry=0x5555560fc9c0 "a function returned NULL without setting an exception", status=status@entry=-1) at Python/pylifecycle.c:3366 #7 0x0000555555ebfcc5 in _Py_FatalErrorFunc (func=func@entry=0x5555560fdc60 <__func__.23> "_Py_CheckFunctionResult", msg=msg@entry=0x5555560fc9c0 "a function returned NULL without setting an exception") at Python/pylifecycle.c:3382 #8 0x000055555598ceaf in _Py_CheckFunctionResult (tstate=tstate@entry=0x555556681960 <_PyRuntime+331232>, callable=callable@entry=0x6080000cbac0, result=<optimized out>, where=where@entry=0x0) at Objects/call.c:43 #9 0x000055555598ddf5 in _PyObject_VectorcallTstate (tstate=0x555556681960 <_PyRuntime+331232>, callable=0x6080000cbac0, args=0x7fffffffcdc8, nargsf=9223372036854775811, kwnames=0x0) at ./Include/internal/pycore_call.h:170 #10 0x000055555598df3c in PyObject_Vectorcall (callable=callable@entry=0x6080000cbac0, args=args@entry=0x7fffffffcdc8, nargsf=<optimized out>, kwnames=kwnames@entry=0x0) at Objects/call.c:327 #11 0x0000555555d38dbe in _PyEval_EvalFrameDefault (tstate=tstate@entry=0x555556681960 <_PyRuntime+331232>, frame=frame@entry=0x629000005220, throwflag=throwflag@entry=0) at Python/generated_cases.c.h:1619 #12 0x0000555555d926ac in _PyEval_EvalFrame (throwflag=0, frame=0x629000005220, tstate=0x555556681960 
<_PyRuntime+331232>) at ./Include/internal/pycore_ceval.h:119 #13 _PyEval_Vector (tstate=tstate@entry=0x555556681960 <_PyRuntime+331232>, func=func@entry=0x610000040560, locals=locals@entry=0x60800009e5c0, args=args@entry=0x0, argcount=argcount@entry=0, kwnames=kwnames@entry=0x0) at Python/ceval.c:1961 #14 0x0000555555d929b0 in PyEval_EvalCode (co=co@entry=0x613000026150, globals=globals@entry=0x60800009e5c0, locals=locals@entry=0x60800009e5c0) at Python/ceval.c:853 #15 0x0000555555ecdfc1 in run_eval_code_obj (tstate=tstate@entry=0x555556681960 <_PyRuntime+331232>, co=co@entry=0x613000026150, globals=globals@entry=0x60800009e5c0, locals=locals@entry=0x60800009e5c0) at Python/pythonrun.c:1365 ``` Found using [fusil](https://github.com/devdanzin/fusil) by @vstinner. ### CPython versions tested on: CPython main branch ### Operating systems tested on: Linux ### Output from running 'python -VV' on the command line: Python 3.15.0a0 (heads/main:56743afe879, May 26 2025, 15:23:40) [GCC 11.4.0] <!-- gh-linked-prs --> ### Linked PRs * gh-134748 * gh-134795 * gh-134798 <!-- /gh-linked-prs -->
9300a596d37d058e6e58d00a2ad70617c863a3de
176b059fa03b3d6abd58c6711bb24111f2245706
python/cpython
python__cpython-134926
# `ast.dump()` elision of empty values should use field types # Bug report ### Bug description: In 3.13 we changed `ast.dump()` to not show empty values (`None` or the empty list) by default. However, this is based purely on the value of individual attributes: ``` >>> ast.dump(ast.Name(id="x")) "Name(id='x', ctx=Load())" >>> ast.dump(ast.Name(id=None)) 'Name(ctx=Load())' >>> ast.dump(ast.Name(id=[])) 'Name(ctx=Load())' ``` Instead, this logic should look at the node's `_field_types` and use the same logic we use to determine whether to allow omitting the argument in calls to the constructor: elide None if the type is a union including None, elide `[]` if the type is a list, and elide `Load()` if the type is an expr_context. ### CPython versions tested on: CPython main branch ### Operating systems tested on: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-134926 * gh-134931 * gh-134934 * gh-134936 <!-- /gh-linked-prs -->
cc344e8dd0a6fdc83a032c229f9b3cf53f76a887
ce6a6371a23dc57ed4257eb102ebfb2827477abf
python/cpython
python__cpython-134713
# Clinic signatures of HACL* hash functions are inconsistent with OpenSSL implementation ### Bug description: According to its [docstring](https://github.com/python/cpython/blob/cf8941c60356acdd00055e5583a2d64761c34af4/Modules/clinic/_hashopenssl.c.h#L165), the function `hashlib.shake_128(...).hexdigest()` should accept an argument `length`, either as positional or as keyword argument. However, since Python `3.14`, the function only accepts positional arguments and fails with `TypeError: shake_128.hexdigest() takes no keyword arguments` otherwise. I narrowed the root cause for this behaviour down to the following two commits (I'm not 100% sure though as I don't have any C-background): - https://github.com/python/cpython/commit/1f777396f52a4cf7417f56097f10add8042295f4 - https://github.com/python/cpython/commit/061e50f196373d920c3eaa3718b9d0553914e006 Reproducer: ```python import hashlib hashlib.shake_128(b'').hexdigest(1) # working hashlib.shake_128(b'').hexdigest(length=1) # not working anymore, but was working before ``` For example, on Python `Python 3.13.3 (main, Apr 8 2025, 13:54:08) [Clang 17.0.0 (clang-1700.0.13.3)]` on my mac I can still use the keyword argument, but on `alpine:edge` `Python 3.12.10 (main, May 21 2025, 16:23:36) [GCC 14.2.0] on linux` it is expecting positional arguments only. ### CPython versions tested on: CPython main branch, 3.14, 3.13, 3.12 ### Operating systems tested on: Linux, macOS <!-- gh-linked-prs --> ### Linked PRs * gh-134713 * gh-134968 * gh-134961 * gh-134962 <!-- /gh-linked-prs -->
c6e63d9d351f6d952000ec3bf84b3a7607989f92
4d31d19a1df0a6e658e6a320cde8355f5f6ea27b
python/cpython
python__cpython-134694
# `‘address_of_code_object’ may be used uninitialized in this function [-Wmaybe-uninitialized]` warning in `_remote_debugging_module.c` <img width="994" alt="Image" src="https://github.com/user-attachments/assets/24d8a1f4-2b8d-40eb-abe5-a274fdfc0c3c" /> See https://github.com/python/cpython/blob/cf8941c60356acdd00055e5583a2d64761c34af4/Modules/_remote_debugging_module.c#L1876-L1894 Here it is clear that `*code_object =` might not be executed in several cases. I have a PR ready. <!-- gh-linked-prs --> ### Linked PRs * gh-134694 * gh-134726 <!-- /gh-linked-prs -->
806107d7a2fa9baa76d4025f46fab2c8725963f4
f2ce4bbdfdfa2b658fbeef66f414be2ecf7981dd
python/cpython
python__cpython-134811
# Assertion failed: Python/qsbr.c in _Py_qsbr_poll # Crash report Seen in https://github.com/python/cpython/actions/runs/15239974615/job/42858835898 ``` 0:07:45 load avg: 8.75 [57/60/1] test_free_threading worker non-zero exit code (Exit code -6 (SIGABRT)) -- running (2): test_socket (1 min 55 sec), test_threading (37.6 sec) python: Python/qsbr.c:164: _Bool _Py_qsbr_poll(struct _qsbr_thread_state *, uint64_t): Assertion `((_PyThreadStateImpl *)_PyThreadState_GET())->qsbr == qsbr' failed. Fatal Python error: Aborted <Cannot show all threads while the GIL is disabled> Stack (most recent call first): File "/home/runner/work/cpython/cpython/Lib/test/test_free_threading/test_dict.py", line 185 in writer_func File "/home/runner/work/cpython/cpython/Lib/threading.py", line 1016 in run File "/home/runner/work/cpython/cpython/Lib/threading.py", line 1074 in _bootstrap_inner File "/home/runner/work/cpython/cpython/Lib/threading.py", line 1036 in _bootstrap Current thread's C stack trace (most recent call first): python: Python/qsbr.c:164: _Bool _Py_qsbr_poll(struct _qsbr_thread_state *, uint64_t): Assertion `((_PyThreadStateImpl *)_PyThreadState_GET())->qsbr == qsbr' failed. ``` https://github.com/python/cpython/blob/a32ea456992fedfc9ce61561c88056de3c18cffd/Lib/test/test_free_threading/test_dict.py#L185 https://github.com/python/cpython/blob/a32ea456992fedfc9ce61561c88056de3c18cffd/Python/qsbr.c#L160-L164 <!-- gh-linked-prs --> ### Linked PRs * gh-134811 * gh-134814 <!-- /gh-linked-prs -->
a4d37f88b66bc9a66b2ab277aa66a2a6b20821fa
967f361993c9c97eb3ff3076a409b78ea32938df
python/cpython
python__cpython-134734
# tokenize._all_string_prefixes does not list t-string prefixes # Bug report ### Bug description: `tokenize._all_string_prefixes()` does not include t-string prefixes. ```python $ ./python.bat Running Release|x64 interpreter... Python 3.15.0a0 (heads/main-dirty:7b1a7002312, May 25 2025, 22:44:32) [MSC v.1929 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import tokenize >>> tokenize._all_string_prefixes() {'', 'rf', 'fR', 'rF', 'Br', 'BR', 'f', 'F', 'FR', 'Fr', 'Rf', 'rb', 'rB', 'u', 'U', 'R', 'fr', 'r', 'B', 'RB', 'bR', 'br', 'Rb', 'RF', 'b'} >>> len(_) 25 ``` This also affects `tokenize.endpats` and `tokenize.StringPrefix`. ### CPython versions tested on: 3.15 ### Operating systems tested on: Windows <!-- gh-linked-prs --> ### Linked PRs * gh-134734 * gh-134739 <!-- /gh-linked-prs -->
08c78e02fab4a1c9c075637422d621f9c740959a
c60f39ada625562bff26400f304690c19fe9f504
python/cpython
python__cpython-134750
# asyncio.start_unix_server() cleanup_socket defaulting to true in 3.13 is not documented # Documentation In 3.13 a feature where unix sockets are automatically cleaned up was added in asyncio. It was seen as a compatibility break but was recent enough that we could not do anything specifically about it at the time. We were able to come to the agreement however, that the change, being breaking, needed to be documented but wasn't. Relevant issue: https://github.com/python/cpython/issues/133354 <!-- gh-linked-prs --> ### Linked PRs * gh-134750 * gh-134779 * gh-134780 <!-- /gh-linked-prs -->
92ea1eb38ff97ac046a0031d505c30a51f58a43f
9bf03c6c30c9ee2979061410651c3f6132683640
python/cpython
python__cpython-134665
# asyncio has underscored names in its `__all__` # Bug report ### Bug description: asyncio has several names in its `__all__` that start with an underscore. ``` Python 3.15.0a0 (heads/assignannos:9081715ff82, May 25 2025, 08:33:06) [Clang 15.0.0 (clang-1500.3.9.4)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import asyncio >>> [x for x in asyncio.__all__ if x.startswith("_")] ['_AbstractEventLoopPolicy', '_get_event_loop_policy', '_set_event_loop_policy', '_set_running_loop', '_get_running_loop', '_register_task', '_unregister_task', '_enter_task', '_leave_task', '_DefaultEventLoopPolicy'] ``` That feels wrong: `__all__` indicates the public names in the module, and names that start with an underscore are not meant to be public. In Python 3.13 there were fewer: ``` ['_set_running_loop', '_get_running_loop', '_register_task', '_unregister_task', '_enter_task', '_leave_task'] ``` I propose we remove the new additions in 3.14 in both the 3.14 and main branches, and remove the remaining underscored names in 3.15. ### CPython versions tested on: CPython main branch ### Operating systems tested on: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-134665 * gh-136455 <!-- /gh-linked-prs -->
797abd1f7fdeb744bf9f683ef844e7279aad3d72
f1dcf3c7bf90961b8d5475154d3f28cfef0a054f
python/cpython
python__cpython-134702
# 3.14t vs 3.13t cuts IOCP performance in half # Bug report ### Bug description: This is about 3.14.0bl vs 3.13.1, free threaded in both cases. Microsoft Windows [Version 10.0.19045.4529] I run and maintain an IOCP server in python 3.13.1t. There are no 3rd party libraries being used. The Problem: Using the exact same code, running 3.14t vs 3.13t cuts the throughput in half. I've made a badly written, working benchmark, extracted/simplified from my IOCP server. Server: ```python from ctypes import windll,create_string_buffer,c_void_p,c_ulong,c_ulonglong,Structure,byref,cast,addressof,POINTER,c_char from ctypes.wintypes import DWORD,HANDLE kernel32 = windll.kernel32 CreateNamedPipeW = kernel32.CreateNamedPipeW CreateIOCompletionPort = kernel32.CreateIoCompletionPort ConnectNamedPipe = kernel32.ConnectNamedPipe GetQueuedCompletionStatusEx = kernel32.GetQueuedCompletionStatusEx; ReadFile = kernel32.ReadFile GLE = kernel32.GetLastError class OVERLAPPED(Structure): _fields_ = (("0", c_void_p),("1", c_void_p),("2", DWORD),("3", DWORD),("4", c_void_p), ("5", c_void_p),("6",c_void_p),("7",c_void_p),("8",c_void_p)) Overlapped = (OVERLAPPED*10)() __Overlapped = byref(Overlapped) IOCP = CreateIOCompletionPort(HANDLE(-1),None,0,4) flag1 = 1 | 1073741824; flag2 = 4 | 2 | 0 | 8 Pipe = CreateNamedPipeW("\\\\.\\pipe\\IOCPBenchMark",flag1,flag2,255,32,0,0, None) if not CreateIOCompletionPort(Pipe,IOCP,1,0): print("ERROR!") ReadBuffer = create_string_buffer(1024) __ReadBuffer = byref(ReadBuffer) OverlapEntries = create_string_buffer(32*128) ove = byref(OverlapEntries); Completed = c_ulong(0) __Completed = byref(Completed) def __IOCPThread(): while True: while not GetQueuedCompletionStatusEx(IOCP, ove, 255, __Completed, 0, False): continue ReadFile(Pipe, __ReadBuffer,32,None,__Overlapped) from threading import Thread Threads = [] for t in range(4): Threads.append(Thread(target=__IOCPThread)) success = ConnectNamedPipe(Pipe, __Overlapped) if not success: if GLE() != 997: print("ERROR 
2") while not GetQueuedCompletionStatusEx(IOCP, ove, 255, __Completed, 1, False): continue print("Connected.") ReadFile(Pipe, __ReadBuffer,32,None,__Overlapped) for t in Threads: t.start() from time import sleep while True: sleep(1) ``` Client: ```python from ctypes import windll,c_char_p,byref from ctypes.wintypes import DWORD from time import perf_counter as pfc kernel32 = windll.kernel32 CreateFileW = kernel32.CreateFileW WriteFile = kernel32.WriteFile GLE = kernel32.GetLastError written = DWORD() __written = byref(written) print(GLE()) GENERIC_WRITE = 1073741824 Pipe = kernel32.CreateFileW("\\\\.\\pipe\\IOCPBenchMark",GENERIC_WRITE,0,None,3,0,None) if GLE() == 0: print("Connected.") test = b"test" t = pfc()+1 while True: for Count in range(1000000): if not WriteFile(Pipe, test, 4,__written, None): print("ERROR ",GLE()) if not WriteFile(Pipe, test, 4,__written, None): print("ERROR ",GLE()) if not WriteFile(Pipe, test, 4,__written, None): print("ERROR ",GLE()) if not WriteFile(Pipe, test, 4,__written, None): print("ERROR ",GLE()) if pfc() >= t: t = pfc()+1 print(Count*4) break ``` The server uses 4 threads. if you don't see any output, try reducing the amount. When I use 8 threads (on my 8 core machine), I don't get any output. d'uh. No, SMT-Threads don't count for anything here. Each script runs in its own cmd.exe window. Please be aware that you'll have to kill the server-process manually. I wanted to add a call to "taskkill/F /IM:pytho*", but then realized I might cause someone big trouble with that. `>python3.13t server.py`: Client output: 205536 207128 206764 206504 204768 `>python3.14t server.py`: Client output: 107468 105516 106032 107492 108472 Perplexity suggested I should post this here, because this is a use-case you people might be interested in. Thank you. ### CPython versions tested on: 3.14 ### Operating systems tested on: Windows <!-- gh-linked-prs --> ### Linked PRs * gh-134702 * gh-134742 <!-- /gh-linked-prs -->
3c0525126ef95efe2f578e93db09f3282e3ca08f
56743afe879ca8fc061a852c5c1a311ff590bba1
python/cpython
python__cpython-134650
# Export `zlib.{adler,crc}32_combine` # Feature or enhancement ### Proposal: This is already provided by zlib, and is useful for computing a crc32 in multiple chunks (not linearly). What I currently do is use ctypes, but that doesn't work on all environments (like Windows). I don't think we'll need a pure python fallback, just a wrapper of what's in zlib, equivalent of https://github.com/fastzip/fastzip/blob/9019107a6732fab9004e625b075faab96f29265a/fastzip/_crc32_combine.py#L63-L67 ### Has this already been discussed elsewhere? This is a minor feature, which does not need previous discussion elsewhere ### Links to previous discussion of this feature: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-134650 <!-- /gh-linked-prs -->
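For context, the stdlib already handles *linear* chunked CRCs by threading the running value through `zlib.crc32`; what `crc32_combine` would add is merging two CRCs that were computed independently (e.g. in parallel, or out of order), which has no stdlib equivalent today. A sketch of the part that works now:

```python
import zlib

data = b"hello world"
head, tail = data[:5], data[5:]

# Linear chunking works today: pass the running CRC as the second argument.
running = zlib.crc32(head)
running = zlib.crc32(tail, running)
assert running == zlib.crc32(data)

# What is NOT possible without crc32_combine: merging zlib.crc32(head) and
# zlib.crc32(tail) after the fact, knowing only the two CRCs and len(tail).
```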
737b4ba020ecaf4b30d5a4c8f99882ce0001ddd6
8704d6b39139d2b1c3dd871590188fb7deb8aaad
python/cpython
python__cpython-134633
# PEP 739 / `build-details.json`: `c_api.headers` does not include the `pythonX.Y` directory # Bug report ### Bug description: Sorry I didn't notice this while fixing #134455 but `c_api.headers` key contains the path to top-level system include directory, i.e.: ```json { "c_api": { "headers": "/usr/local/include", "pkgconfig_path": "/usr/local/lib/pkgconfig" } } ``` while according to the examples in [PEP 739](https://peps.python.org/pep-0739/#c-api-headers), it should contain the path to the `pythonX.Y` directory: > Examples > > `/usr/include/python3.14` > `include/python3.14` > etc. ### CPython versions tested on: 3.14, CPython main branch ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-134633 * gh-135605 * gh-135656 * gh-135657 <!-- /gh-linked-prs -->
0d582def34babca7417ece8a9e4e16cc2a752d44
c51f241c97c5bcc8ae6830842db5b00f76d6a592
python/cpython
python__cpython-134598
# Update HOWTO to reflect change in CIBW option # Documentation Python documentation in [Doc/howto/free-threading-extensions.rst](https://github.com/python/cpython/blob/main/Doc/howto/free-threading-extensions.rst?plain=1) states at lines 397-399 > [pypa/cibuildwheel](https://github.com/pypa/cibuildwheel) supports the free-threaded build if you set [CIBW_FREE_THREADED_SUPPORT](https://cibuildwheel.pypa.io/en/stable/options/#free-threaded-support). I noticed when running Sphinx that the anchor link to "CIBW_FREE_THREADED_SUPPORT" is broken, and realised the reason is that the preferred way of supporting free-threaded builds has changed in cibuildwheel since [June 2024](https://web.archive.org/web/20240711102108/https://cibuildwheel.pypa.io/en/stable/options/#free-threaded-support) when this text was [written](https://github.com/python/cpython/commit/02b272b7026b68e70b4a4d9a0ca080904aed374c). The current documentation explains > ### `CIBW_ENABLE` {: #enable} > > Enable building with extra categories of selectors present. > > This option lets you opt-in to non-default builds, like pre-releases and free-threaded Python. These are not included by default to give a nice default for new users, but can be added to the selectors available here. The allowed values are: > > - `cpython-prerelease`: Enables beta versions of Pythons if any are available (May-July, approximately). For backward compatibility, CIBW_PRERELEASE_PYTHONS is also supported until cibuildwheel 3. > - `cpython-freethreading`: [PEP 703](https://www.python.org/dev/peps/pep-0703) introduced variants of CPython that can be built without the Global Interpreter Lock (GIL). Those variants are also known as free-threaded / no-gil. This will enable building these wheels while they are experimental. The build identifiers for those variants have a t suffix in their python_tag (e.g. cp313t-manylinux_x86_64). For backward compatibility, CIBW_FREE_THREADED_SUPPORT is also supported until cibuildwheel 3.
In light of this change, I suggest a tiny rewrite to: ``` * `pypa/cibuildwheel <https://github.com/pypa/cibuildwheel>`_ supports the free-threaded build if you set `CIBW_ENABLE to cpython-freethreading <https://cibuildwheel.pypa.io/en/stable/options/#enable>`_ ``` (First issue) ## Related PRs * gh-119877 <!-- gh-linked-prs --> ### Linked PRs * gh-134598 * gh-134622 * gh-134623 <!-- /gh-linked-prs -->
7b1010a57db9405f277139abef4016ba987b75fc
80284b5c5eebd0e603c38322f94a97a2853ceeba
python/cpython
python__cpython-134624
# Add `__attribute__((noreturn,cold))` for `_mi_assert_fail` # Feature or enhancement > [!NOTE] > I already have a PR ready in case this is accepted ### Proposal: Because `_mi_assert_fail` is not marked as a `noreturn` function, the assertion is not understood by CLion and I have some false positives telling me "this ptr is not NULL" while just above there is `mi_assert(ptr != NULL);`. To improve IDE's analysis, I suggest to mark `_mi_assert_fail` with `__attribute__((noreturn))` and `__THROW` for GCC, the same attributes that are put on `__assert_fail`: ```c extern void __assert_fail (const char *__assertion, const char *__file, unsigned int __line, const char *__function) __THROW __attribute__ ((__noreturn__)); ``` cc @colesbury ### Has this already been discussed elsewhere? This is a minor feature, which does not need previous discussion elsewhere. <!-- gh-linked-prs --> ### Linked PRs * gh-134624 <!-- /gh-linked-prs -->
c600310663277e24607890298e6d9bf7e1d4f584
ebf6d13567287d04683dab36f52cde7a3c9915e7
python/cpython
python__cpython-134765
# The devcontainer doesn't include the bits to build the _zstd module # Bug report ### Bug description: A direct build in the devcontainer: ```shell ... Written build/lib.linux-aarch64-3.15/_sysconfigdata_d_linux_aarch64-linux-gnu.py Written build/lib.linux-aarch64-3.15/_sysconfig_vars_d_linux_aarch64-linux-gnu.json The necessary bits to build these optional modules were not found: _zstd To find the necessary bits, look in configure.ac and config.log. Checked 114 modules (35 built-in, 77 shared, 1 n/a on linux-aarch64, 0 disabled, 1 missing, 0 failed on import) ``` ### CPython versions tested on: CPython main branch ### Operating systems tested on: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-134765 <!-- /gh-linked-prs -->
9bf03c6c30c9ee2979061410651c3f6132683640
96905bdd273d2e5724d2c1b6b0f95ecb0daeaabe
python/cpython
python__cpython-134603
# untokenize() round-trip fails for t-strings (with only type + string) # Bug report ### Bug description: Found when investigating [test.test_tokenize.TestRoundtrip.test_random_files](https://github.com/python/cpython/actions/runs/15209215387/job/42779254060?pr=134577#step:22:490) failing on #134577 ```python def test(code): tokens = list(tokenize.tokenize(iter([code]).__next__)) from5 = tokenize.untokenize(tokens) print("from5 ", from5, eval(from5)) tokens2 = [tok[:2] for tok in tokens] from2 = tokenize.untokenize(tokens2) print("from2 ", from2, eval(from2)) ``` ```pycon >>> test(b't"{ {} }"') from5 b't"{ {} }"' Template(strings=('', ''), interpolations=(Interpolation({}, ' {}', None, ''),)) from2 b't"{{}}"' Template(strings=('{}',), interpolations=()) >>> test(b'f"{ {} }"') from5 b'f"{ {} }"' {} from2 b'f"{ {} }"' {} ``` From what I understand, [untokenize](https://docs.python.org/3/library/tokenize.html#tokenize.untokenize) should round-trip correctly even with only the type and string of tokens. ### CPython versions tested on: CPython main branch ### Operating systems tested on: macOS <!-- gh-linked-prs --> ### Linked PRs * gh-134603 * gh-134659 <!-- /gh-linked-prs -->
52509cc94b1a18cb325dbfa7e5f830b32759a903
3e562b394252ff75d9809b7940020a775e4df68b
python/cpython
python__cpython-134581
# Modernizing `difflib.HtmlDiff` for HTML Output # Feature or enhancement ### Proposal: Previously, I added dark mode support to the HTML export feature of `difflib`, and during that process, I uncovered several issues. By opening the browser's developer tools (F12) and inspecting the generated HTML, you can observe a number of warnings, such as: ![Image](https://github.com/user-attachments/assets/f139055d-acf0-4d4f-9e83-2d46577fb5ac) I initially addressed these warnings in the same pull request that added dark mode support. However, we later decided it would be better to split these changes into a separate PR—each PR should serve a single purpose. As a result, the warning fixes were rolled back from that submission. (see https://github.com/python/cpython/pull/129940#discussion_r1948964135) Recently, I resumed working on this issue. I discovered that a large portion of these browser warnings stem from the legacy nature of the generated HTML. For example, the first line of the html still uses: ```html <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> ``` And elements like: ```html <meta http-equiv="Content-Type" content="text/html; charset=%(charset)s" /> ``` These are artifacts from the HTML4 era—a specification that was finalized more than two decades ago. While HTML5 remains [backward-compatible with this syntax](https://www.w3.org/TR/html5-diff#syntax), I believe updating to modern HTML5 conventions is the right move. In 2025, there's no need to worry about browsers lacking HTML5 support. For instance, HTML5 removes the need for lengthy DTD declarations ([see here](https://html.spec.whatwg.org/#a-quick-introduction-to-html)), simplifying the doctype to just `<!DOCTYPE html>`. This also makes the resulting HTML cleaner and easier to maintain. 
With that in mind, I’m proposing to modernize `difflib.HtmlDiff`, not just updating the HTML structure to HTML5, but also refining the CSS for improved styling. Take the current layout, for example: it’s not easy to distinguish line numbers from content, which isn't very user-friendly. ![Image](https://github.com/user-attachments/assets/6b876558-a346-4a84-809d-195290051d44) In the revised version, the diff content has been slightly enlarged for better readability. Line numbers are bolded and given more horizontal space, preventing them from blending into the content and causing visual confusion. ![Image](https://github.com/user-attachments/assets/b5858cbc-4231-4ffb-9158-ffae9c57461e) Additionally, the legend section has been visually enhanced—now more intuitive and aesthetically pleasing. And, of course, all browser warnings have been eliminated! Before: ![Image](https://github.com/user-attachments/assets/137b56ad-0349-4696-a050-2ee18fcaf0b4) After: ![Image](https://github.com/user-attachments/assets/8c8753b2-6146-4d8c-892c-672bb51308f1) ### Has this already been discussed elsewhere? This is a minor feature, which does not need previous discussion elsewhere ### Links to previous discussion of this feature: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-134581 <!-- /gh-linked-prs -->
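For anyone wanting to inspect the generated markup themselves, here is a minimal sketch; it asserts only on the stable table markup, since the doctype line is exactly what this proposal changes:

```python
import difflib

before = "one\ntwo\nthree\n".splitlines()
after = "one\n2\nthree\n".splitlines()
html = difflib.HtmlDiff(wrapcolumn=60).make_file(
    before, after, fromdesc="before", todesc="after")
# The first line of `html` is the doctype this proposal modernizes;
# the diff table itself is the stable part of the output.
assert "<table" in html
assert "before" in html and "after" in html
```

Saving `html` to a file and opening it in a browser with developer tools open is how the warnings shown above were observed.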
7ca6d79fa32d7203ee8d64f47b6b3539a027fdea
a4d37f88b66bc9a66b2ab277aa66a2a6b20821fa
python/cpython
python__cpython-134579
# Mark more slow tests I noticed that `test_capi` "freezes" even when run with `-u-cpu`. So I measured the durations of all tests again and found a few more slow tests. They should be marked as requiring the `cpu` resource. <!-- gh-linked-prs --> ### Linked PRs * gh-134579 * gh-134590 * gh-134592 <!-- /gh-linked-prs -->
77eade39f972a4f3d8e9fec00288779f35ceee21
fc0c9c24121c1a62b25b79062286f976699b59e9
python/cpython
python__cpython-134570
# Expose log format to users in assertLogs # Feature or enhancement ### Proposal: ## Use-case Someone works in an organisation with a custom log formatter. They want to make assertions about logs in a particular test and would like the assertions to match the format used in the org. Currently the format is static in assertLogs. ## Example usage ```python format = "[No.1: the larch] %(levelname)s:%(name)s:%(message)s" formatter = logging.Formatter(format) with self.assertLogs(formatter=formatter) as cm: log_foo.info("1") log_foobar.debug("2") self.assertEqual(cm.output, ["[No.1: the larch] INFO:foo:1"]) ``` ## Proposed change [My branch](https://github.com/garry-cairns/cpython/tree/fix-issue-134567) already has my proposed change and tests are passing. Since this would be my first contribution if accepted, I'm raising this issue because I don't yet have a feel for what constitutes a "minor" enhancement that doesn't need an issue ref per the developer guide. Having raised this, I'll create a PR as per the developer guide. ### Has this already been discussed elsewhere? This is a minor feature, which does not need previous discussion elsewhere ### Links to previous discussion of this feature: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-134570 * gh-136630 <!-- /gh-linked-prs -->
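For contrast, here is what works today: the format is hard-coded, so assertions can only ever match `"%(levelname)s:%(name)s:%(message)s"` output (the `formatter=` keyword above is the proposed addition, not existing API):

```python
import logging
import unittest

class Demo(unittest.TestCase):
    def test_default_format(self):
        with self.assertLogs("foo", level="INFO") as cm:
            logging.getLogger("foo").info("1")
        # Today the format is fixed; a custom org-wide formatter cannot
        # be applied to cm.output.
        self.assertEqual(cm.output, ["INFO:foo:1"])

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(Demo))
assert result.wasSuccessful()
```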
51ab66b3d519d3802430b3a31a135d2670e37408
b19c9da401b9e81078103861f55e0762b93453f0
python/cpython
python__cpython-134566
# unittest.doModuleCleanups() swallows all but the first exception # Feature or enhancement Currently, `unittest.doModuleCleanups()` swallows all but the first exception raised in the cleanup code. With ExceptionGroup we can handle multiple exceptions. <!-- gh-linked-prs --> ### Linked PRs * gh-134566 <!-- /gh-linked-prs -->
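A small repro of the situation (version-agnostic: before the change only one of the two errors surfaces; with ExceptionGroup both can be reported):

```python
import unittest

def boom(msg):
    raise ValueError(msg)

unittest.addModuleCleanup(boom, "first registered")
unittest.addModuleCleanup(boom, "last registered")  # cleanups run LIFO

caught = None
try:
    unittest.doModuleCleanups()
except Exception as exc:
    caught = exc
# `caught` is a single ValueError on older releases, or an ExceptionGroup
# carrying both failures once multiple exceptions are preserved.
assert caught is not None
```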
393773ae8763202ecf7afff81c4d57bd37c62ff6
77eade39f972a4f3d8e9fec00288779f35ceee21
python/cpython
python__cpython-134599
# Refleaks on free-threaded builds # Bug report ### Bug description: The nogil refleak buildbots have started failing for main and 3.14 with leaks in `test_sys`, `test_capi` and a few others. See e.g. https://buildbot.python.org/#/builders/1714/builds/90 I bisected this to 09e72cf091d03479eddcb3c4526f5c6af56d31a0, cc @ericsnowcurrently ### CPython versions tested on: CPython main branch, 3.14 ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-134599 * gh-134600 * gh-134686 * gh-134738 <!-- /gh-linked-prs -->
56743afe879ca8fc061a852c5c1a311ff590bba1
08c78e02fab4a1c9c075637422d621f9c740959a
python/cpython
python__cpython-134552
# `-m pdb -p` -- other side cannot read debug script due to too-strict permissions # Bug report ### Bug description: 1. run `python3.14` 2. find its pid 3. `sudo python3.14 -m pdb -p $pid` the `pdb` tab will hang, the other side will display similar to: ``` Python 3.14.0b1 (main, May 8 2025, 08:57:13) [GCC 13.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> Can't open debugger script /tmp/tmpmbdwo7d_: Traceback (most recent call last): File "/usr/lib/python3.14/_pyrepl/unix_console.py", line 422, in wait or bool(self.pollob.poll(timeout)) PermissionError: [Errno 13] Permission denied: '/tmp/tmpmbdwo7d_' ``` the debugger script needs to at least be readable by the unprivileged user to be opened: ``` $ ls -al /tmp/tmpmbdwo7d_ -rw------- 1 root root 190 May 22 16:51 /tmp/tmpmbdwo7d_ ``` https://github.com/python/cpython/blob/742d5b5c5d75eae44c66a43ebfa24a4f286ea8a1/Lib/pdb.py#L3398 I believe a patch similar to this fixes it: ```diff diff --git a/Lib/pdb.py b/Lib/pdb.py index 78ee35f61bb..bb12d1baae8 100644 --- a/Lib/pdb.py +++ b/Lib/pdb.py @@ -75,6 +75,7 @@ import code import glob import json +import stat import token import types import atexit @@ -3418,6 +3419,7 @@ def attach(pid, commands=()): ) ) connect_script.close() + os.chmod(connect_script.name, os.stat(connect_script.name).st_mode | stat.S_IRGRP | stat.S_IROTH) sys.remote_exec(pid, connect_script.name) # TODO Add a timeout? Or don't bother since the user can ^C? ``` ### CPython versions tested on: 3.14 ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-134552 * gh-134616 <!-- /gh-linked-prs -->
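A standalone sketch of what the patch does, using a throwaway scratch file rather than pdb's real connect script: `NamedTemporaryFile` creates files with mode 0600, so the bits must be widened before the unprivileged side can read it.

```python
import os
import stat
import tempfile

# Loosen the default 0600 mode so a (possibly unprivileged) debuggee
# can open a script written by a root-owned pdb, per the patch above.
with tempfile.NamedTemporaryFile("w", delete=False) as script:
    script.write("# placeholder for pdb's generated connect script\n")
os.chmod(script.name,
         os.stat(script.name).st_mode | stat.S_IRGRP | stat.S_IROTH)
world_readable = bool(os.stat(script.name).st_mode & stat.S_IROTH)
os.unlink(script.name)
assert world_readable
```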
74a9c60f3ee2195e506144c3811090f1334c996b
47f1161d3a2bec52b5b5e952150141709c247da2
python/cpython
python__cpython-135540
# shutil.rmtree - link to example for windows readonly file handling in function docs # Documentation The python docs for the `shutil` module include an example of how to use `rmtree` on a windows folder that contains read-only files: https://docs.python.org/3/library/shutil.html#rmtree-example However, there is no mention of this solution, or even that this is a potential issue on windows, within the documentation for the `shutil.rmtree` function itself: https://docs.python.org/3/library/shutil.html#shutil.rmtree As a result, it is likely that many developers miss this as a potential issue or solution. A simple link to the example, within the `shutil.rmtree` itself, would increase awareness. <!-- gh-linked-prs --> ### Linked PRs * gh-135540 * gh-135691 * gh-135692 <!-- /gh-linked-prs -->
e9b647dd30d22cef465972d898a34c4b1bb6615d
17ac3933c3c860e08f7963cf270116a39a063be7
python/cpython
python__cpython-134487
# _ctypes fails to import on NetBSD due to a missing `alloca` symbol # Bug report ### Bug description: After running `./configure && gmake` on a NetBSD 10 machine, the build finishes with the following warning: ```sh Written build/lib.netbsd-10.0-evbarm-3.15/_sysconfigdata__netbsd10_.py Written build/lib.netbsd-10.0-evbarm-3.15/_sysconfig_vars__netbsd10_.json ./python -E -c 'import sys ; from sysconfig import get_platform ; print("%s-%d.%d" % (get_platform(), *sys.version_info[:2]))' >platform ./python -E ./Tools/build/generate-build-details.py `cat pybuilddir.txt`/build-details.json [ERROR] _ctypes failed to import: /home/collinfunk/cpython/build/lib.netbsd-10.0-evbarm-3.15/_ctypes.cpython-315.so: Undefined PLT symbol "alloca" (symnum = 48) ``` This is due to an incomplete check for alloca in the `_ctypes.h` module. ### CPython versions tested on: CPython main branch ### Operating systems tested on: Other <!-- gh-linked-prs --> ### Linked PRs * gh-134487 <!-- /gh-linked-prs -->
b8f55266bf873bc101c3aab7aa868b909ecbcb99
99a9ab1c64dce26c0f2dce7621b4d7f75da69856
python/cpython
python__cpython-134456
# PEP 739 / `build-details.json`: `c_api.include` is used instead of `c_api.headers` # Bug report ### Bug description: [PEP 739](https://peps.python.org/pep-0739/) specifies that the CPython header directory is included in the [`c_api.headers` key](https://peps.python.org/pep-0739/#c-api-headers). However, it seems that the implementation that landed in #130069 uses `c_api.include` instead: ```json { ... "c_api": { "include": "/usr/include", "pkgconfig_path": "/usr/lib64/pkgconfig" } } ``` I don't see any evidence that the name was changed deliberately. In fact, the code seems to be mixing both `include` and `headers` names: https://github.com/python/cpython/blob/1a07a01014bde23acd2684916ef38dc0cd73c2de/Tools/build/generate-build-details.py#L126 https://github.com/python/cpython/blob/1a07a01014bde23acd2684916ef38dc0cd73c2de/Tools/build/generate-build-details.py#L138-L145 ### CPython versions tested on: 3.14, CPython main branch ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-134456 * gh-134504 <!-- /gh-linked-prs -->
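For clarity, this is what a spec-conforming fragment should look like (paths here are illustrative, not taken from any real build):

```python
import json

# Per PEP 739, the header directory belongs under "c_api.headers";
# "include" is the key the generator currently emits by mistake.
spec_conforming = json.loads("""
{
  "c_api": {
    "headers": "/usr/include/python3.14",
    "pkgconfig_path": "/usr/lib64/pkgconfig"
  }
}
""")
assert "headers" in spec_conforming["c_api"]
assert "include" not in spec_conforming["c_api"]
```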
d706eb9e0f99924b628da4a8afe8e23cff8b801b
458e33018a2f4f4b3d9a2c8f6e70dcce31f34005
python/cpython
python__cpython-134513
# Using frozen attributes in an Exception class # Bug report ### Bug description: `CycleFoundException` in asyncio has frozen attributes, which is affected by the problem described in https://github.com/python/cpython/issues/99856 https://github.com/python/cpython/blob/e1f891414b2329414a6160ed246f5f869a218bfd/Lib/asyncio/tools.py#L16-L20 Do I need to fix this without using @dataclass? ### CPython versions tested on: 3.11 ### Operating systems tested on: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-134513 * gh-134564 <!-- /gh-linked-prs -->
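One dataclass-free shape for the exception (attribute names mirror `asyncio.tools.CycleFoundException`; this is a sketch of the idea, not the actual fix). Passing the data through `super().__init__` keeps `args` populated, so the pickle round trip that trips up frozen-dataclass exceptions works:

```python
import pickle

class CycleFoundException(Exception):
    def __init__(self, cycles, id2name):
        super().__init__(cycles, id2name)   # args stay picklable
        self.cycles = cycles
        self.id2name = id2name

exc = CycleFoundException([[1, 2, 1]], {1: "Task-A", 2: "Task-B"})
clone = pickle.loads(pickle.dumps(exc))
assert clone.cycles == [[1, 2, 1]]
assert clone.id2name[1] == "Task-A"
```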
f9324cb3cb4d9bb9f0aef2e48b8afa895bde4b0d
5804ee7b467d86131be3ff7d569443efb0d0f9fd
python/cpython
python__cpython-134608
# Error in Format Specification Mini-Language concerning `precision_with_grouping` in 3.14 # Documentation On this page: https://docs.python.org/3.14/library/string.html#grammar-token-format-spec-format_spec ```py format_spec: [options][width_and_precision][type] options: [[fill]align][sign]["z"]["#"]["0"] fill: <any character> align: "<" | ">" | "=" | "^" sign: "+" | "-" | " " width_and_precision: [width_with_grouping][precision_with_grouping] width_with_grouping: [width][grouping] precision_with_grouping: "." [precision][grouping] width: digit+ precision: digit+ grouping: "," | "_" type: "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%" ``` The `precision_with_grouping` syntax is wrong, because it accepts `.` without `precision` or `grouping`. But ```py >>> f'{1234.1234:.f}' Traceback (most recent call last): File "<python-input-17>", line 1, in <module> f'{1234.1234:.f}' ^^^^^^^^^^^^^^ ValueError: Format specifier missing precision ``` (The error does not mention grouping either) The right syntax should be ```py precision_with_grouping: "." precision [grouping] | "." grouping ``` <!-- gh-linked-prs --> ### Linked PRs * gh-134608 * gh-135015 <!-- /gh-linked-prs -->
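The mismatch is easy to verify with `format()` directly (same check as the f-string above, without needing a REPL traceback):

```python
# The implementation rejects a bare "." even though the published
# grammar admits it:
err = ""
try:
    format(1234.1234, ".f")
except ValueError as e:
    err = str(e)
assert "precision" in err

# With an actual precision after the dot, formatting succeeds:
assert format(1234.1234, ".2f") == "1234.12"
```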
7828d52680907d1661ff6993e540f7026461c390
ee65ebdb50005655d75aca1618d3994a7b7ed869
python/cpython
python__cpython-134415
# `Python/instrumentation.c`: ensure non-NULL `PyLong_FromLong` results when possible https://github.com/python/cpython/blob/c740fe3bd092911d9e474bcc0eed2a009482be9f/Python/instrumentation.c#L2566 Since PyLong_FromLong can return NULL, it is necessary to use Py_XDECREF, as stated in refcount.h. https://github.com/python/cpython/blob/c740fe3bd092911d9e474bcc0eed2a009482be9f/Include/refcount.h#L207 <!-- gh-linked-prs --> ### Linked PRs * gh-134415 * gh-136910 * gh-136911 <!-- /gh-linked-prs -->
cf19b6435d02dd7be11b84a44f4a8a9f1a935b15
1e672935b44e084439527507a865b94a4c1315c3
python/cpython
python__cpython-134514
# Not able to start an instantiated Thread in a child process through multiprocessing.Process in python 3.13 # Bug report ### Bug description: I'm experiencing an issue specifically in python 3.13 regarding Threads in the context of multiprocessing. The following code is working in python < 3.13 ```python from multiprocessing import Process from threading import Thread import time class MyThread(Thread): def __init__(self): Thread.__init__(self) self.__data = '' def run(self): print("Hi from thread") print(self.__data) class Aclass(): def __init__(self): self._t = MyThread() self._t.daemon = True def start(self): self._t.start() print("thread started") if __name__ == '__main__': t = Aclass() p = Process(target = t.start ) p.start() time.sleep(2) ``` After executing the above in python 3.13 I get this output: ``` File "/usr/lib/python3.13/multiprocessing/process.py", line 313, in _bootstrap self.run() ~~~~~~~~^^ File "/usr/lib/python3.13/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/kali/impacket/master/impacket/tests/SMB_RPC/test_bug_process_thread.py", line 18, in start self._t.start() ~~~~~~~~~~~~~^^ File "/usr/lib/python3.13/threading.py", line 973, in start _start_joinable_thread(self._bootstrap, handle=self._handle, ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ daemon=self.daemon) ^^^^^^^^^^^^^^^^^^^ RuntimeError: thread already started ``` I managed to workaround this by delaying the Thread instance initialization, moving the Thread.__init__() call to start() method in the derived class, so everything gets executed in the context of the child process. ### CPython versions tested on: 3.13 ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-134514 * gh-134596 * gh-134597 <!-- /gh-linked-prs -->
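The "already started" family of errors stems from `Thread` instances being single-use; under 3.13 the handle the child receives apparently already looks started. The single-use guard itself can be seen without any multiprocessing (this sketch shows the guard, not the 3.13 pickling path):

```python
from threading import Thread

t = Thread(target=lambda: None)
t.start()
t.join()

# A second start() on the same instance trips the single-use guard:
msg = ""
try:
    t.start()
except RuntimeError as e:
    msg = str(e)
assert "once" in msg or "started" in msg
```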
9a2346df861f26d5f8d054ad2f9c37134dee3822
05a19b5e56894fd1e63aff6b38fb23ad7c7b3047
python/cpython
python__cpython-134371
# `logging.Formatter` docstring missing processName attribute. # Documentation The docstring for `logging.Formatter` class is missing the `processName` attribute that is available in `LogRecord` and documented in the official Python documentation. ## Issues The `processName` attribute is documented in the [LogRecord attributes section](https://docs.python.org/3/library/logging.html#logrecord-attributes) of the Python documentation, but it's not listed in the `Formatter` class docstring that enumerates the attributes users can include in format strings. https://github.com/python/cpython/blob/ec39fd2c20323ee9814a1137b1a0819e92efae4e/Doc/library/logging.rst?plain=1#L995-L1075 ### Missing attribute: - `processName` - Process name (if available), added in Python 3.1 ## Suggested Fix Add the `processName` attribute to the docstring with its description: ``` %(processName)s Process name (if available) ``` <!-- gh-linked-prs --> ### Linked PRs * gh-134371 * gh-134404 * gh-134405 <!-- /gh-linked-prs -->
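Despite the docstring omission, the attribute works fine in a format string today, which is why it should be listed:

```python
import logging

fmt = logging.Formatter("%(processName)s %(levelname)s %(message)s")
record = logging.LogRecord(
    "demo", logging.WARNING, "demo.py", 1, "hello", None, None)
formatted = fmt.format(record)
# In the main process this renders e.g. "MainProcess WARNING hello".
assert record.processName            # populated by LogRecord.__init__
assert "WARNING hello" in formatted
```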
c740fe3bd092911d9e474bcc0eed2a009482be9f
1298511b41ec0f9be925c12f3830e94fe8f7e7dc
python/cpython
python__cpython-134368
# The new `threading.RLock.locked()` method fails # Bug report ### Bug description: ```python import threading def main(): r = threading.RLock() print(f"{r = }") t = threading.Thread(target=r.acquire) t.start() t.join() print(f"{r = }") print(f"{r.locked() = } at {hex(id(r))}") if __name__ == '__main__': main() ``` Output is: ``` r = <unlocked _thread.RLock object owner=0 count=1 at 0x105a98720> r = <locked _thread.RLock object owner=6106329088 count=1 at 0x105a98720> r.locked() = False at 0x105a98720 ``` Error is located at: https://github.com/python/cpython/blob/28625d4f956f8d30671aba1daaac9735932983db/Lib/threading.py#L238-L240 The return instruction must be: `return self._block.locked()`. I can submit a PR quickly. ### CPython versions tested on: CPython main branch, 3.14 ### Operating systems tested on: macOS <!-- gh-linked-prs --> ### Linked PRs * gh-134368 * gh-134510 <!-- /gh-linked-prs -->
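For reference, the intended semantics can be shown with `threading.Lock`, whose `locked()` long predates the new `RLock.locked()` (so this runs on older versions too): the method should report the lock state regardless of which thread acquired it.

```python
import threading

lock = threading.Lock()
worker = threading.Thread(target=lock.acquire)
worker.start()
worker.join()
assert lock.locked()       # held by the (finished) worker -> still True
lock.release()             # plain Locks may be released by any thread
assert not lock.locked()
```

With the one-line fix (`return self._block.locked()`), the pure-Python RLock reports the same way.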
3effede97cc13fc0c5ab5dcde26cc319f388e84c
bd4046f4f869039a1a2ebe2d1d18bfbc2a2951b6
python/cpython
python__cpython-134389
# `repr` of `threading.RLock` is erroneous with the `_thread` module # Bug report ### Bug description: ```python import threading r = threading.RLock() print(f"{r = }") ``` output is: ``` r = <unlocked _thread.RLock object owner=0 count=1 at 0x105a98720> ``` The `threading.RLock.__repr__` seems erroneous to me because when a `threading.RLock` is just created, its `count` attribute should be 0. This error occurs only with the `_thread` module. I can submit a PR. ### CPython versions tested on: CPython main branch ### Operating systems tested on: macOS <!-- gh-linked-prs --> ### Linked PRs * gh-134389 * gh-134528 <!-- /gh-linked-prs -->
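A small repro that avoids asserting on the buggy count value itself (so it passes before and after the fix); on affected builds, the first repr shows `count=1` where `count=0` is expected:

```python
import _thread

r = _thread.RLock()
assert "unlocked" in repr(r)        # fresh lock: count field should be 0
r.acquire()
r.acquire()                         # reentrant: count climbs per acquire
assert repr(r).startswith("<locked")
r.release()
r.release()
```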
fade146cfb1616ad7b3b918bedb86756dedf79e6
4a4ac3ab4d2a34af99af9e948be9cd1257ed4186
python/cpython
python__cpython-134310
# CI testing for pull requests can fail if multiple workloads exist with the same name # Bug report ### Bug description: If more than one PR is opened from branches with the same name, CI testing fails due to mandatory tests being canceled. This is due to the way concurrency groups are named; there is a PR coming with a fix. ### CPython versions tested on: CPython main branch ### Operating systems tested on: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-134310 * gh-134484 * gh-134485 <!-- /gh-linked-prs -->
979d81a17905e922d32fb1671f9ed394e0ffbda6
296a66051ede5cc112ca38d17304e518ffb02e23
python/cpython
python__cpython-134284
# Use borrowed references for `LOAD_CONST` # Feature or enhancement ### Proposal: We now support borrowed (tagged) references on the stack. Like `LOAD_FAST_BORROW` we can use borrowed references in `LOAD_CONST` as the value in the constants array is guaranteed to outlive any variables in the frame. ### Has this already been discussed elsewhere? This is a minor feature, which does not need previous discussion elsewhere ### Links to previous discussion of this feature: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-134284 <!-- /gh-linked-prs -->
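To make the lifetime argument concrete: the constant lives in `co_consts` for as long as the code object exists, which is at least as long as any frame executing it, so a borrowed reference pushed by `LOAD_CONST` cannot be freed out from under the frame. A small illustration:

```python
import dis

def f():
    x = 1000    # large enough not to be a cached small int
    return x

# The constant is owned by the code object, not the stack:
assert 1000 in f.__code__.co_consts
assert any(ins.opname.startswith("LOAD_CONST")
           for ins in dis.get_instructions(f))
```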
6dcb0fdfe0a2de083f0f1f9a568dd0a19541b863
f695eca60cfc53cf3322323082652037d6d0cfef
python/cpython
python__cpython-134982
# `~bool` deprecation not reported for literals # Bug report ### Bug description: Related to: https://github.com/python/cpython/pull/103487 ```py # literal.py print(~False) print(~True) # var.py a = True print(~a) b = False print(~b) ``` A DeprecationWarning is reported for `var.py`, but not for `literal.py`. ``` $ python -c 'import sys; print(sys.version)' 3.15.0a0 (heads/main:42d03f3, May 19 2025, 23:32:01) [GCC 15.1.1 20250425 (Red Hat 15.1.1-1)] $ python literal.py -1 -2 $ python var.py /tmp/scratch/var.py:2: DeprecationWarning: Bitwise inversion '~' on bool is deprecated and will be removed in Python 3.16. This returns the bitwise inversion of the underlying int object and is usually not what you expect from negating a bool. Use the 'not' operator for boolean negation or ~int(x) if you really want the bitwise inversion of the underlying int. print(~a) -2 /tmp/scratch/var.py:5: DeprecationWarning: Bitwise inversion '~' on bool is deprecated and will be removed in Python 3.16. This returns the bitwise inversion of the underlying int object and is usually not what you expect from negating a bool. Use the 'not' operator for boolean negation or ~int(x) if you really want the bitwise inversion of the underlying int. print(~b) -1 ``` What's interesting is that there's a test for this: https://github.com/python/cpython/blob/42d03f393313d8a228a45dad1d0897ea99f5ec89/Lib/test/test_bool.py#L68-L71 and the DeprecationWarning _is_ raised if `eval("~True")` is used instead of just `~True` ``` $ python -c 'print(~True)' -2 $ python -c 'print(eval("~True"))' <string>:1: DeprecationWarning: Bitwise inversion '~' on bool is deprecated and will be removed in Python 3.16. This returns the bitwise inversion of the underlying int object and is usually not what you expect from negating a bool. Use the 'not' operator for boolean negation or ~int(x) if you really want the bitwise inversion of the underlying int. 
-2 ``` ### CPython versions tested on: CPython main branch, 3.13 ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-134982 * gh-136185 <!-- /gh-linked-prs -->
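A plausible mechanism (my reading, not confirmed above): the AST optimizer constant-folds `~True` at compile time, so the runtime invert that emits the warning never executes; `eval("~True")` compiles at call time but still goes through the same folding-free runtime path for names. The folding is visible in `co_consts`:

```python
def lit():
    return ~True    # folded to -2 before the bytecode ever runs

def var(a):
    return ~a       # genuine runtime invert -> warning fires

assert -2 in lit.__code__.co_consts        # already an int constant
assert -2 not in var.__code__.co_consts    # nothing to fold here
```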
86c3316183a79867e3c666d0830f897e16f0f339
e0d6500b2d9a08beb7b607b846d1eeaa26706667
python/cpython
python__cpython-134263
# Add retries to downloads of Windows dependencies `generate_sbom.py` downloads sources to verify their hashes. Sometimes these downloads flake, like in https://github.com/python/cpython/actions/runs/15118338626/job/42494602073?pr=134253 I've made a PR to add retries with exponential backoff: <!-- gh-linked-prs --> ### Linked PRs * gh-134263 * gh-134460 * gh-134558 * gh-134820 * gh-134865 * gh-134866 * gh-134867 * gh-135365 * gh-135596 * gh-135611 * gh-137468 * gh-137496 * gh-137775 * gh-137779 <!-- /gh-linked-prs -->
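A generic sketch of the retry-with-exponential-backoff idea (not the PR's code; the helper name and delays are illustrative), shown with a deterministic fake in place of a real download:

```python
import time

def retry(func, attempts=5, base_delay=0.01, retry_on=(OSError,)):
    """Call func(), retrying transient failures with exponential backoff."""
    for n in range(attempts):
        try:
            return func()
        except retry_on:
            if n == attempts - 1:
                raise                       # out of attempts: re-raise
            time.sleep(base_delay * (2 ** n))   # 0.01s, 0.02s, 0.04s, ...

calls = []
def flaky_download():
    calls.append(1)
    if len(calls) < 3:
        raise OSError("transient network error")
    return b"payload"

assert retry(flaky_download) == b"payload"
assert len(calls) == 3                      # two failures, then success
```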
0c5a8b0b55238a45b9073d06a10c3a59568cdf3c
b8998fe2d8249565bf30ce6075ed678e1643f2a4
python/cpython
python__cpython-134871
# Python 3.15.0: test_sys.test_getallocatedblocks() fails if run after test_collections.test_odd_sizes() # Bug report ### Bug description: test_sys fails while running the test suite of python 3.15.0 when --with-pydebug and --enable-optimizations are both enabled. I get consistent failure of the test suite, but when I run the test individually, it passes. I do not know if this is expected or not, as I understand that enabling the debugger and optimizations is kind of contradictory. I included the test header, as well as the tests summary for additional info. ``` == CPython 3.15.0a0 (heads/main-dirty:9983c7d4416, May 19 2025, 09:41:00) [GCC 13.3.0] == Linux-6.11.0-25-generic-x86_64-with-glibc2.39 little-endian == Python build: debug PGO == cwd: /home/badger/oss/cpython/build/test_python_worker_177894æ == CPU count: 8 == encodings: locale=UTF-8 FS=utf-8 == resources: all test resources are disabled, use -u option to unskip tests Using random seed: 3038718328 ... 0:32:13 load avg: 1.39 [396/491] test_sys test test_sys failed -- Traceback (most recent call last): File "/home/badger/oss/cpython/Lib/test/test_sys.py", line 1156, in test_getallocatedblocks self.assertLess(a, sys.gettotalrefcount()) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError: 698068 not less than 663517 0:32:18 load avg: 1.36 [396/491/1] test_sys failed (1 failure) ... 
== Tests result: FAILURE == 22 tests skipped: test.test_asyncio.test_windows_events test.test_asyncio.test_windows_utils test.test_gdb.test_backtrace test.test_gdb.test_cfunction test.test_gdb.test_cfunction_full test.test_gdb.test_misc test.test_gdb.test_pretty_print test_android test_apple test_dbm_gnu test_dbm_ndbm test_devpoll test_free_threading test_kqueue test_launcher test_msvcrt test_startfile test_winapi test_winconsoleio test_winreg test_wmi test_zstd 11 tests skipped (resource denied): test_curses test_peg_generator test_pyrepl test_smtpnet test_socketserver test_tkinter test_ttk test_urllib2net test_urllibnet test_winsound test_zipfile64 1 test failed: test_sys 457 tests OK. Total duration: 39 min 17 sec Total tests: run=46,135 failures=1 skipped=2,164 Total test files: run=480/491 failed=1 skipped=22 resource_denied=11 Result: FAILURE ``` I don't suspect my local environment to be the problem, since the test suite passes when I remove the "--enable-optimizations" flag, and just enable the debugger. The variable "a" seems to change each time I run the suite, so I included the random seed. ### Build setup: Tested on Python 3.15.0. ``` Distributor ID: Ubuntu Description: Ubuntu 24.04.2 LTS Release: 24.04 Codename: noble ``` CPU: product: Intel(R) Core(TM) i7-8665U CPU @ 1.90GHz ### Configuration I used ``` ./configure --with-pydebug --enable-optimizations ``` ### CPython versions tested on: 3.15, CPython main branch ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-134871 * gh-135095 <!-- /gh-linked-prs -->
54ca55978e305ebb099d1b49633211597625bd52
3612d8f51741b11f36f8fb0494d79086bac9390a
python/cpython
python__cpython-134237
# `Tests / Check if generated files are up to date` fails since recent commits # Bug report Link: https://github.com/python/cpython/actions/runs/15115778970/job/42485985691?pr=134230 Output: ```diff diff --git a/Include/internal/pycore_opcode_metadata.h b/Include/internal/pycore_opcode_metadata.h index 7185947..17811d3 100644 --- a/Include/internal/pycore_opcode_metadata.h +++ b/Include/internal/pycore_opcode_metadata.h @@ -1787,8 +1787,6 @@ const uint8_t _PyOpcode_Caches[256] = { extern const uint8_t _PyOpcode_Deopt[256]; #ifdef NEED_OPCODE_METADATA const uint8_t _PyOpcode_Deopt[256] = { - [119] = 119, - [120] = 120, [121] = 121, [122] = 122, [123] = 123, @@ -1796,7 +1794,6 @@ const uint8_t _PyOpcode_Deopt[256] = { [125] = 125, [126] = 126, [127] = 127, - [211] = 211, [212] = 212, [213] = 213, [214] = 214, ``` Refs https://github.com/python/cpython/commit/cc9add695da42defc72e62c5d5389621dac54b2b CC @DinoV I have a PR ready. <!-- gh-linked-prs --> ### Linked PRs * gh-134237 <!-- /gh-linked-prs -->
a36ce264a9b6b0a18b78d3b0d79d2b11a685b801
c45e661226558e997e265cf53ce1419213cc10b7
python/cpython
python__cpython-134277
# PyREPL: autocomplete built-in modules # Feature or enhancement ### Proposal: Currently, built-in modules (i.e. `sys`) are not autocompleted in the new REPL. For instance, typing `import sy<tab>` does not offer `sys` as a completion possibility. This is because we use `pkgutil.iter_modules` under the hood. Since built-in modules have no underlying `.py` file, they don't show up. It would be great if the autocomplete could still work with these modules. One possibility would be looking at `sys.builtin_module_names` and combining it with the results of `pkgutil.iter_modules`. The relevant code is in this class: https://github.com/python/cpython/blob/cc9add695da42defc72e62c5d5389621dac54b2b/Lib/_pyrepl/_module_completer.py#L24-L26 Here's how I think we can implement it: - Find built-in modules by looking at `sys.builtin_module_names` - Merge the results with `ModuleCompleter.global_cache` (might need changing how we represent the modules) IIUC we only have built-in top-level modules, not submodules, so that should simplify the implementation. Please don't pick up this issue, I'd like to reserve it for someone at the PyConUS sprints :) ### Has this already been discussed elsewhere? This is a minor feature, which does not need previous discussion elsewhere ### Links to previous discussion of this feature: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-134277 * gh-134285 <!-- /gh-linked-prs -->
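A sketch of the proposed merge (not the eventual implementation; the helper name is illustrative):

```python
import pkgutil
import sys

def top_level_module_names():
    names = {info.name for info in pkgutil.iter_modules()}
    names.update(sys.builtin_module_names)   # e.g. "sys", "_thread"
    return sorted(names)

names = top_level_module_names()
assert "sys" in names    # built-in: invisible to iter_modules() alone
```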
8421b03b16a4852a527256cb7cdce2ab2d318548
470941782f74288823b445120f6383914b659f23
python/cpython
python__cpython-134267
# PyREPL: Do not show underscored modules by default during autocompletion # Feature or enhancement ### Proposal: Attribute autocomplete in the new REPL does not show underscored names unless specifically asked for: ```python >>> class Foo: ... _foo = 2 ... foo = 3 ... >>> Foo.<tab> Foo.foo Foo.mro() ``` Note that only `Foo.foo` is offered. To also get `Foo._foo`, we need to write `Foo._<tab>`. We should do the same for the import autocomplete. Currently it just shows all modules/submodules, including those starting with an underscore: ```python >>> from importlib import <tab> _abc _bootstrap_external machinery readers simple _bootstrap abc metadata resources util ``` Please don't pick up this issue, I'd like to reserve it for someone at the PyConUS sprints :) ### Has this already been discussed elsewhere? This is a minor feature, which does not need previous discussion elsewhere ### Links to previous discussion of this feature: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-134267 * gh-134388 <!-- /gh-linked-prs -->
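The filtering rule itself is tiny; a sketch of the behavior being asked for (names and function are illustrative, mirroring how attribute completion already works):

```python
def filter_completions(names, prefix):
    # Hide underscored names unless the prefix itself starts with "_".
    if not prefix.startswith("_"):
        names = [n for n in names if not n.startswith("_")]
    return sorted(n for n in names if n.startswith(prefix))

modules = ["_abc", "_bootstrap", "abc", "machinery", "metadata", "util"]
assert filter_completions(modules, "") == ["abc", "machinery", "metadata", "util"]
assert filter_completions(modules, "_b") == ["_bootstrap"]
```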
a3a3cf6d157948ed64ae837d6310b933a39a2493
c91ad5da9d92eac4718e4da8d53689c3cc24535e
python/cpython
python__cpython-134223
# Incorrect test in `test_pyrepl` # Bug report ### Bug description: In this test: https://github.com/python/cpython/blob/605022aeb69ae19cae1c020a6993ab5c433ce907/Lib/test/test_pyrepl/test_pyrepl.py#L1051-L1063 The second and third asserts do not re-run the `ImportParser`, so they aren't actually testing anything right now. We need to add this before each call to assert: ```python parser = ImportParser(code) actual = parser.parse() ``` Please don't pick up this issue, I'd like to reserve it for someone at the PyConUS sprints :) ### CPython versions tested on: CPython main branch ### Operating systems tested on: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-134223 * gh-134229 <!-- /gh-linked-prs -->
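The generic shape of the fix is to rebuild the parser inside the loop so every assertion exercises a fresh parse; `parse` below is a stand-in for `ImportParser(code).parse()`, and `subTest` keeps the cases independently reported:

```python
import unittest

class ParserTests(unittest.TestCase):
    def test_parse(self):
        def parse(code):
            # Stand-in for ImportParser(code).parse()
            return code.removeprefix("import ").strip() or None
        cases = [
            ("import os", "os"),
            ("import sys", "sys"),
            ("import ", None),
        ]
        for code, expected in cases:
            with self.subTest(code=code):
                self.assertEqual(parse(code), expected)  # fresh parse per case

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ParserTests))
assert result.wasSuccessful()
```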
faebf87b3716f7103ee5410456972db36f4b3ada
cc9add695da42defc72e62c5d5389621dac54b2b
python/cpython
python__cpython-134326
# `_curses.window.getch` does not check for interruption signals as `_curses.window.{getkey,get_wch}` do # Bug report ### Bug description: In `_curses.window.{getkey,get_wch}`, we are checking for possible interrupting signals and say: > In no-delay mode, an exception is raised if there is no input However, in `getch`, we say: > In no-delay mode, -1 is returned if there is no input. In particular, I think we should also check for signals and possibly raise an exception if `PyErr_CheckSignals` fails, and otherwise return -1 as documented. (note that `getch` returns ERR in no-delay mode, see https://linux.die.net/man/3/wgetch). ### CPython versions tested on: CPython main branch ### Operating systems tested on: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-134326 * gh-134783 * gh-134784 * gh-134646 <!-- /gh-linked-prs -->
51762b6cadb8f316dd783716bc5c168c2e2d07f0
579686d9fb1bccc74c694d569f0a8bf28d9ca85a
python/cpython
python__cpython-134283
# Possible loss of large text data in `_curses.window.{instr,getstr}` # Bug report ### Bug description: `_curses.window.instr` is meant to extract a string of characters between two positions and we have a maximum number of allowed characters which is 1023. However, this limit is *not* enforced, namely we do the following: ```c winnstr(self->win, rtn, Py_MIN(n, 1023)); ``` IOW, we cannot return more than 1023 characters in a single API call. This should be documented and enforced at runtime, so that users may know that they need multiple API calls, or we should allocate heap memory instead (currently the buffer holding the output is allocated on the stack). ### CPython versions tested on: CPython main branch ### Operating systems tested on: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-134283 * gh-134391 <!-- /gh-linked-prs -->
aadda87b3d3d99cb9e8c8791bb9715a3f0209195
a3a3cf6d157948ed64ae837d6310b933a39a2493
python/cpython
python__cpython-134325
# Argument Clinic dead code in `_cursesmodule.c` # Bug report ### Bug description: There are some occurrences of `/*[-clinic input]`, such as - `_curses.window.chgat` - `_curses.window.getstr` - `_curses.window.instr` In particular, those are misleading as one could assume that it's being handled by clinic but it's not. I believe it was historically impossible for clinic to do the necessary trick. I will modernize curses so that it either uses clinic fully or removes the clinic directives. Note: I cannot backport this to 3.13 as the file has been heavily modified in 3.14 to accommodate heap types instead of static types. ### CPython versions tested on: CPython main branch ### Operating systems tested on: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-134325 * gh-134701 <!-- /gh-linked-prs -->
29e81159644cf78d958e30aaef208e22a04a8b25
fb09db1b934a51e52e46dc64d7e5e84fe69e6160
python/cpython
python__cpython-134288
# Document base85 and Ascii85 in the base64 module In the `base64` standard library module, the functions relating to base64, base32, and base16 encoding and decoding are well documented: the documentation links to [RFC 4648](https://datatracker.ietf.org/doc/html/rfc4648.html), which lays out those formats in the usual exhaustive detail of a formal standard, and where our functions deviate from those standards or accept arguments that might cause them to deviate, these are clearly labelled and the deviations explained. The Z85 functions similarly link to [a formal specification](https://rfc.zeromq.org/spec/32/), which, while less exhaustive, is still perfectly clear. For the base85 and Ascii85 functions, however, there is no such clarity. The documentation simply describes them as "de-facto standards", but makes no effort to explain them nor link to any source which does so. Nor is this a case where the standard can be considered common knowledge; the first sentence of the [Wikipedia page](https://en.wikipedia.org/wiki/Ascii85) starts with "Ascii85, also called Base85, is a form of binary-to-text encoding...", implying the two are one-and-the-same, yet the module contains separate functions for them, so clearly they must do different things, but which one implements the encoding described there? Indeed, googling the two terms will provide various _slightly different_ descriptions of the encoding, and there is currently no way to definitively identify which variant the b85 and a85 functions actually implement short of reading the source code. Ideally, the documentation would link to standards which fully specify these encodings. 
Unfortunately, the Ascii85 specification is buried in the middle of [the Adobe PostScript language reference](https://web.archive.org/web/20161222092741/https://www.adobe.com/products/postscript/pdfs/PLRM.pdf), while the closest thing to a spec for base85 I can find is [RFC-1924](https://www.rfc-editor.org/rfc/rfc1924), which is a) an April Fool's day joke, and b) covers one very specific use-case (encoding IPv6 addresses) whereas the implemented version is much more general. Now, fully specifying these encodings, including the relevant math, would be beyond the scope of Python's documentation. However, the documentation can and should be expanded to at least inform the reader of certain basic core details: - `a85encode`: - Encodes each 4 arbitrary bytes as 5 printable ASCII characters. However, as a special case, a sequence of 4 null bytes encodes to a single 'z', and, optionally, a sequence of 4 0x20 bytes (ASCII space) encodes to a single 'y' - Encoded alphabet is ASCII 33 ('!') through ASCII 117 ('u'), plus 'z' and 'y' as mentioned above; the output may also contain '~' and/or '\n' depending on the `wrapcol` and `adobe` arguments - If `pad` is true, the input is padded with null bytes to make its length a multiple of 4; decoding the resulting value will return the input with padding included. If `pad` is false, the input is still padded - the encoding algorithm only operates on 4 byte words - but the resulting value will be modified in order to indicate the amount of padding required, and decoding it will return the input exactly, with padding omitted. In neither case are any padding characters added to the _output_ (as they would be in base32 or base64). - The minimum line length for `wrapcol` is 2 if `adobe` is true and 1 otherwise; any smaller value, other than 0, will be treated as that minimum. 
- The newlines added by `wrapcol` will never break up the "<\~" and "\~>" framing markers added by `adobe` - `a85decode`: - If `adobe` is true, then the input _may_ contain the leading "<\~" framing marker, which will be removed if present before decoding the framed value, but it *must* contain the trailing "\~>" framing marker, which will also be removed; if the trailing marker is absent, `ValueError` will be raised. - The check for the framing markers is done _before_ removing whitespace as specified by `ignorechars`. Thus there must not be any leading whitespace before "<\~" or trailing whitespace after "\~>", nor can either marker contain whitespace. - `b85encode`: - Encodes each 4 arbitrary bytes as 5 printable ASCII characters. - Encoded alphabet is "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz!#$%&()*+-;<=>?@^_`{|}\~" - If `pad` is true, the input is padded with null bytes to make its length a multiple of 4; decoding the resulting value will return the input with padding included. If `pad` is false, the input is still padded - the encoding algorithm only operates on 4 byte words - but the resulting value will be modified in order to indicate the amount of padding required, and decoding it will return the input exactly, with padding omitted. In neither case are any padding characters added to the _output_ (as they would be in base32 or base64). <!-- gh-linked-prs --> ### Linked PRs * gh-134288 * gh-134297 * gh-134298 <!-- /gh-linked-prs -->
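The special-case behaviors described above can be checked directly against the current implementation with documented `base64` functions; a quick sanity sketch:

```python
import base64

# 4 null bytes fold to a single 'z' (always on for a85encode)
assert base64.a85encode(b"\x00" * 4) == b"z"

# 4 spaces fold to 'y' only when foldspaces=True
assert base64.a85encode(b" " * 4, foldspaces=True) == b"y"

# adobe=True wraps the output in the <~ ... ~> framing markers
framed = base64.a85encode(b"abc", adobe=True)
assert framed.startswith(b"<~") and framed.endswith(b"~>")
assert base64.a85decode(framed, adobe=True) == b"abc"

# pad=True keeps the null padding after a round trip; pad=False drops it
assert base64.a85decode(base64.a85encode(b"x", pad=True)) == b"x\x00\x00\x00"
assert base64.a85decode(base64.a85encode(b"x")) == b"x"

# b85 uses a different alphabet but the same 4-bytes-to-5-chars scheme
assert base64.b85decode(base64.b85encode(b"any bytes")) == b"any bytes"
```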
66aaad61037785639aec393be7618cb54b1372dc
46d7c114d8cd25e5135bcdf2ea799eea43a41a67
python/cpython
python__cpython-134174
# concurrent.futures→asyncio state transfer is a bottleneck # Bug report ### Bug description: The current `_copy_future_state` implementation requires multiple method calls and lock acquisitions to retrieve the source future's state: 1. `done()` - acquires lock to check state 2. `cancelled()` - acquires lock again 3. `exception()` - acquires lock to get exception 4. `result()` - acquires lock to get result Each method call involves thread synchronization overhead, making this operation a bottleneck for high-frequency executor dispatches. Our use case involves dispatching a large number of small executor jobs from `asyncio` to a thread pool. These jobs typically involve `open` or `stat` on files that are already cached by the OS, so the actual I/O returns almost instantly. However, we still have to offload them to avoid blocking the event loop, since there's no reliable way to determine in advance whether a read will hit the cache. As a result, the majority of the overhead isn't from the I/O itself, but from the cost of scheduling. Most of the time is spent copying future state, which involves locking. This PR reduces that overhead, which has a meaningful impact at scale. ### CPython versions tested on: 3.13 ### Operating systems tested on: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-134174 <!-- /gh-linked-prs -->
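The dispatch pattern described above — many tiny blocking filesystem calls offloaded to a thread pool from `asyncio` — looks roughly like this (a minimal sketch; `stat_many` is an illustrative name, not code from the report):

```python
import asyncio
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

async def stat_many(paths):
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor() as pool:
        # Each run_in_executor call creates a concurrent.futures.Future and
        # wraps it in an asyncio future; completing it goes through the
        # _copy_future_state locking discussed above, once per job.
        return await asyncio.gather(
            *(loop.run_in_executor(pool, os.stat, p) for p in paths))

with tempfile.NamedTemporaryFile() as f:
    results = asyncio.run(stat_many([f.name] * 100))
assert len(results) == 100
```

Since each job finishes almost instantly, the per-future state transfer dominates the wall-clock cost at scale.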
53da1e8c8ccbe3161ebc42e8b8b7ebd1ab70e05b
f2de1e6861c27bd498f598efc01600450979b5f9
python/cpython
python__cpython-134183
# Add colorization of exceptions in default `sys.unraisablehook` # Bug report ### Bug description: There are no colored exceptions in the atexit module. Like this: ```python import atexit def foo(): raise Exception('foo') atexit.register(foo) ``` ``` Exception ignored in atexit callback <function foo at 0x00000114C7D5C720>: Traceback (most recent call last): File "...", line 3, in foo raise Exception('foo') Exception: foo ``` Python 3.13.3 Windows 10 ### CPython versions tested on: 3.13 ### Operating systems tested on: Windows <!-- gh-linked-prs --> ### Linked PRs * gh-134183 <!-- /gh-linked-prs -->
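A replacement hook can be installed as a workaround today; the sketch below shows the shape of the feature (`color_unraisablehook` is a hypothetical name, and it uses plain ANSI escapes rather than CPython's internal `_colorize` machinery that the real fix would use):

```python
import sys
import traceback

RED, RESET = "\x1b[31m", "\x1b[0m"

def color_unraisablehook(args):
    # args is a sys.UnraisableHookArgs-like object with exc_type,
    # exc_value, exc_traceback, err_msg and object attributes.
    msg = args.err_msg or "Exception ignored in"
    print(f"{RED}{msg}: {args.object!r}{RESET}", file=sys.stderr)
    for line in traceback.format_exception(
            args.exc_type, args.exc_value, args.exc_traceback):
        sys.stderr.write(line)

sys.unraisablehook = color_unraisablehook
```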
e8251dc0ae6a85f6a0e427ae64fb0fe847eb3cf8
8943bb722f2f88a95ea6c5ee36bb5d540740d792
python/cpython
python__cpython-134169
# http.server with HTTPS fails to bind IPv6 addresses # Bug report ### Bug description: When HTTPS is enabled, the command-line interface of `http.server` can't bind IPv6 addresses correctly and the `--directory` flag doesn't work. ### bind IPv6 The HTTPS server throws an exception when trying to bind an IPv6 address, while the HTTP server binds one without issue. ```bash # HTTPS $ ./python -m http.server --tls-cert ~/Projects/ssl/localhost.crt --tls-key ~/Projects/ssl/localhost.key -b ::1 # ... TypeError: AF_INET address must be a pair (host, port) # HTTP $ ./python -m http.server -b ::1 Serving HTTP on ::1 port 8000 (http://[::1]:8000/) ... ``` ### `--directory` issue The HTTPS server always uses the current working directory of the terminal as its root directory and ignores the `--directory` flag: ```console # HTTPS $ ./python -m http.server --tls-cert ~/Projects/ssl/localhost.crt --tls-key ~/Projects/ssl/localhost.key -d ~/test Serving HTTPS on 0.0.0.0 port 8000 (https://0.0.0.0:8000/) ... 127.0.0.1 - - [18/May/2025 13:28:16] "GET / HTTP/1.1" 200 - $ curl -k https://0.0.0.0:8000/ <!DOCTYPE HTML> <html lang="en"> <head> # ... </head> <body> <h1>Directory listing for /</h1> <hr> <ul> <li><a href=".azure-pipelines/">.azure-pipelines/</a></li> <li><a href=".coveragerc">.coveragerc</a></li> <li><a href=".devcontainer/">.devcontainer/</a></li> <li><a href=".editorconfig">.editorconfig</a></li> <li><a href=".git/">.git/</a></li> <li><a href=".gitattributes">.gitattributes</a></li> <li><a href=".github/">.github/</a></li> # ... # Files in my cpython repo's root path </ul> <hr> </body> </html> # HTTP $ ./python -m http.server -d ~/test Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ... 127.0.0.1 - - [18/May/2025 13:24:30] "GET / HTTP/1.1" 200 - $ curl http://0.0.0.0:8000/ <!DOCTYPE HTML> <html lang="en"> <head> # ...
</head> <body> <h1>Directory listing for /</h1> <hr> <ul> <li><a href="1">1</a></li> <li><a href="2">2</a></li> <li><a href="3">3</a></li> </ul> <hr> </body> </html> ``` ### CPython versions tested on: CPython main branch ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-134169 * gh-134630 <!-- /gh-linked-prs -->
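Both behaviours can be reproduced with the documented `http.server` building blocks the CLI is supposed to wire up: an `AF_INET6` server class for the bind address, and `functools.partial` to pass `directory=` to the handler. A sketch (against the public API, not the patched CLI; assumes an IPv6 loopback is available):

```python
import socket
from functools import partial
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

class HTTPServerV6(ThreadingHTTPServer):
    # The CLI must switch the address family before binding; with the
    # default AF_INET, binding '::1' fails with
    # "TypeError: AF_INET address must be a pair (host, port)".
    address_family = socket.AF_INET6

# directory= is how --directory is meant to reach the handler.
handler = partial(SimpleHTTPRequestHandler, directory="/tmp")

with HTTPServerV6(("::1", 0), handler) as srv:
    host, port = srv.server_address[:2]
```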
2fd09b011031f3c00c342b44e02e2817010e507c
5d9c8fe3f6168785cb608dddd3010042f39bb226
python/cpython
python__cpython-134159
# PyREPL doesn't correctly color double braces in f-strings / t-strings # Bug report ### Bug description: Syntax highlighting in PyREPL (added in https://github.com/python/cpython/issues/131507) does not correctly color the double-braces used for escaping braces in f-strings or t-strings: ![Image](https://github.com/user-attachments/assets/978dfdbb-7fc3-48d0-8188-7064a20a00f0) After investigation, this is caused by the Python tokenizer producing only 1 brace in this case: ```pycon >>> from tokenize import tokenize >>> from io import BytesIO >>> from pprint import pp >>> >>> pp(list(tokenize(BytesIO(b'f"a{{b"').readline))) [TokenInfo(type=68 (ENCODING), string='utf-8', start=(0, 0), end=(0, 0), line=''), TokenInfo(type=59 (FSTRING_START), string='f"', start=(1, 0), end=(1, 2), line='f"a{{b"'), TokenInfo(type=60 (FSTRING_MIDDLE), string='a{', start=(1, 2), end=(1, 4), line='f"a{{b"'), TokenInfo(type=60 (FSTRING_MIDDLE), string='b', start=(1, 5), end=(1, 6), line='f"a{{b"'), TokenInfo(type=61 (FSTRING_END), string='"', start=(1, 6), end=(1, 7), line='f"a{{b"'), TokenInfo(type=4 (NEWLINE), string='', start=(1, 7), end=(1, 8), line='f"a{{b"'), TokenInfo(type=0 (ENDMARKER), string='', start=(2, 0), end=(2, 0), line='')] ``` So PyREPL thinks there is only one brace in the original string, and doesn't color the second one. I have a fix for this issue, a PR is on the way! ### CPython versions tested on: 3.14, CPython main branch ### Operating systems tested on: Windows, macOS <!-- gh-linked-prs --> ### Linked PRs * gh-134159 * gh-134227 <!-- /gh-linked-prs -->
71ea6a6798c5853ed33188865a73b044ede8aba8
b22460c44d1bc597c96d4a3d27ad8373d7952820
python/cpython
python__cpython-134194
# `AttributeError` in `email._header_value_parser.get_address` # Bug report ### Bug description: During fuzzing of Python standard libraries, the following code snippet causes an `AttributeError` with the following message: `AttributeError: 'InvalidHeaderDefect' object has no attribute 'all_defects'`. This occurs in the `all_defects` function at line 140 in `email/_header_value_parser.py`. ``` import email._header_value_parser email._header_value_parser.get_address("!an??:=m==fr2@[C") ``` ### Exception Trace ``` Traceback (most recent call last): File "rep.py", line 3, in <module> email._header_value_parser.get_address("!an??:=m==fr2@[C") File "/usr/lib/python3.12/email/_header_value_parser.py", line 1988, in get_address token, value = get_group(value) ^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/email/_header_value_parser.py", line 1954, in get_group token, value = get_group_list(value) ^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/email/_header_value_parser.py", line 1926, in get_group_list token, value = get_mailbox_list(value) ^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/email/_header_value_parser.py", line 1860, in get_mailbox_list token, value = get_mailbox(value) ^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/email/_header_value_parser.py", line 1822, in get_mailbox for x in token.all_defects): ^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/email/_header_value_parser.py", line 140, in all_defects return sum((x.all_defects for x in self), self.defects) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/email/_header_value_parser.py", line 140, in <genexpr> return sum((x.all_defects for x in self), self.defects) ^^^^^^^^^^^^^ File "/usr/lib/python3.12/email/_header_value_parser.py", line 140, in all_defects return sum((x.all_defects for x in self), self.defects) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/email/_header_value_parser.py", line 140, in <genexpr> return sum((x.all_defects for x in self), 
self.defects) ^^^^^^^^^^^^^ File "/usr/lib/python3.12/email/_header_value_parser.py", line 140, in all_defects return sum((x.all_defects for x in self), self.defects) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/email/_header_value_parser.py", line 140, in <genexpr> return sum((x.all_defects for x in self), self.defects) ^^^^^^^^^^^^^ AttributeError: 'InvalidHeaderDefect' object has no attribute 'all_defects' ``` ### CPython versions tested on: 3.12, 3.11, 3.10, 3.9 ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-134194 * gh-135191 * gh-135192 <!-- /gh-linked-prs -->
d9cad074d52fe31327429fd81e4d2eeea3dbe35b
1b55e12766d007aea9fcd0966e29ce220b67d28e
python/cpython
python__cpython-134194
# `UnboundLocalError` in `email._header_value_parser.parse_message_id` During fuzzing of Python standard libraries, the following code snippet causes an `UnboundLocalError` with the following message: `UnboundLocalError: cannot access local variable 'pos' where it is not associated with a value'`. This occurs in the `_get_ptext_to_endchars` function at line 1035 in `email/_header_value_parser.py`. ```python import email._header_value_parser email._header_value_parser.parse_message_id("<T@[") ``` ### Exception Trace ```pytb Traceback (most recent call last): File "/usr/lib/python3.12/email/_header_value_parser.py", line 2118, in get_msg_id token, value = get_dot_atom_text(value) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/email/_header_value_parser.py", line 1344, in get_dot_atom_text raise errors.HeaderParseError("expected atom at a start of " email.errors.HeaderParseError: expected atom at a start of dot-atom-text but found '[' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "rep.py", line 2, in <module> email._header_value_parser.parse_message_id("<T@[") File "/usr/lib/python3.12/email/_header_value_parser.py", line 2149, in parse_message_id token, value = get_msg_id(value) ^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/email/_header_value_parser.py", line 2121, in get_msg_id token, value = get_no_fold_literal(value) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/email/_header_value_parser.py", line 2066, in get_no_fold_literal token, value = get_dtext(value) ^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/email/_header_value_parser.py", line 1557, in get_dtext ptext, value, had_qp = _get_ptext_to_endchars(value, '[]') ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/email/_header_value_parser.py", line 1033, in _get_ptext_to_endchars pos = pos + 1 ^^^ UnboundLocalError: cannot access local variable 'pos' where it is not associated with a value ``` ### CPython versions tested on: 3.12, 
3.11, 3.10, 3.9 ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-134194 * gh-134233 * gh-134677 * gh-134678 <!-- /gh-linked-prs -->
d9cad074d52fe31327429fd81e4d2eeea3dbe35b
1b55e12766d007aea9fcd0966e29ce220b67d28e
python/cpython
python__cpython-134687
# `TypeError: '<' not supported between instances of 'NoneType' and 'int'` raised during call to `email.message_from_file` # Bug report ### Bug description: During fuzzing of Python standard libraries, the following code snippet causes a TypeError with the following message: `TypeError: '<' not supported between instances of 'NoneType' and 'int'`. This occurs in the `decode_params` function at line 419 in `email/utils.py`. ``` import sys import io import email d = io.StringIO(open(sys.argv[1], "r").read()) email.message_from_file(d) ``` ### POC File: https://github.com/FuturesLab/POC/blob/main/py-email/poc-02 ### Exception Trace ``` Traceback (most recent call last): File "rep.py", line 5, in <module> email.message_from_file(d) File "/usr/lib/python3.12/email/__init__.py", line 53, in message_from_file return Parser(*args, **kws).parse(fp) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/email/parser.py", line 54, in parse return feedparser.close() ^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/email/feedparser.py", line 185, in close self._call_parse() File "/usr/lib/python3.12/email/feedparser.py", line 178, in _call_parse self._parse() File "/usr/lib/python3.12/email/feedparser.py", line 304, in _parsegen boundary = self._cur.get_boundary() ^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/email/message.py", line 861, in get_boundary boundary = self.get_param('boundary', missing) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/email/message.py", line 725, in get_param for k, v in self._get_params_preserve(failobj, header): ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/email/message.py", line 674, in _get_params_preserve params = utils.decode_params(params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/email/utils.py", line 419, in decode_params continuations.sort() TypeError: '<' not supported between instances of 'NoneType' and 'int' ``` ### CPython versions tested on: 3.12, 3.11, 3.10, 3.9 ### Operating 
systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-134687 * gh-135247 * gh-135248 <!-- /gh-linked-prs -->
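The root cause can be reproduced in isolation. Per the traceback, `decode_params` collects triples whose first element is the continuation number parsed from an RFC 2231 `*N` suffix, and that element is `None` for an unnumbered part (the exact triple layout here is an assumption for illustration); sorting the mixed list then attempts the failing `None < int` comparison:

```python
# continuations entries look like (num, value, encoded); a malformed
# header can leave num as None next to real integers, and list.sort()
# then compares None with int, exactly as in the reported traceback.
continuations = [(None, "value-a", False), (0, "value-b", True)]
try:
    continuations.sort()
    error = ""
except TypeError as e:
    error = str(e)
assert "NoneType" in error and "int" in error
```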
bcb6b45cb86a2f9f65b6c41f27c36059ba86a50b
d610f11d21241d353b25843f66e51098a5c0ddad
python/cpython
python__cpython-134154
# The `json` module page uses "object" confusingly # Documentation On the json module page, the word "object" can mean two things: a Python object (any Python value) or a JSON object (a thing enclosed in curly braces). That ambiguity could be explained at the top of the page, and then we could clarify the uses of the word throughout the page. This came up when someone was confused that `object_hook` wasn't applying to strings loaded from the JSON. <!-- gh-linked-prs --> ### Linked PRs * gh-134154 * gh-134166 * gh-134167 <!-- /gh-linked-prs -->
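The confusion mentioned above is easy to demonstrate: `object_hook` fires only for JSON objects (the `{...}` kind), never for other decoded values such as strings:

```python
import json

seen = []
def hook(obj):
    # called once per decoded JSON object, innermost first
    seen.append(obj)
    return obj

json.loads('"just a string"', object_hook=hook)
assert seen == []      # a JSON string is not a JSON object

json.loads('{"a": {"b": 1}}', object_hook=hook)
assert seen == [{"b": 1}, {"a": {"b": 1}}]
```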
fa4e088668d4a41f9be5babe7edd5409290ee92a
009e7b36981fd07f7cca1fdcfcf172ce1584fac7
python/cpython
python__cpython-134145
# Cannot safely Py_EndInterpreter in 3.14b1 # Crash report ### What happened? Stack trace: ``` #0 tstate_delete_common.constprop.0 (tstate=tstate@entry=0x555555666ca0, release_gil=0) at ../Python/pystate.c:1854 #1 0x00007ffff78cd341 in zapthreads (interp=0x7ffff74c8010) at ../Python/pystate.c:1915 #2 PyInterpreterState_Delete (interp=0x7ffff74c8010) at ../Python/pystate.c:1016 #3 0x00005555555552ef in main () at repro.c:16 ``` (note the address passed to tstate_delete_common is definitely corrupt, and not the address of any PyThreadState created in this program) This was triggered by creating a new PyThreadState for the interpreter, switching to it, deleting an old thread state for the same interpreter, and then calling Py_EndInterpreter (on the new thread state). repro.c: ```c #include <Python.h> int main() { Py_Initialize(); PyThreadState *orig = NULL; PyInterpreterConfig cfg = _PyInterpreterConfig_INIT; Py_NewInterpreterFromConfig(&orig, &cfg); PyThreadState *temp = PyThreadState_New(orig->interp); PyThreadState_Swap(temp); PyThreadState_Clear(orig); PyThreadState_Delete(orig); Py_EndInterpreter(temp); Py_Finalize(); } ``` Compiled with `$ gcc -O1 -ggdb repro.c -I/usr/include/python3.14 -lpython3.14` ### CPython versions tested on: 3.14 ### Operating systems tested on: Linux ### Output from running 'python -VV' on the command line: Python 3.14.0b1 (main, May 8 2025, 08:57:13) [GCC 13.3.0] <!-- gh-linked-prs --> ### Linked PRs * gh-134145 * gh-134182 <!-- /gh-linked-prs -->
f2de1e6861c27bd498f598efc01600450979b5f9
0a160bf14c4848f50539e52e2de486c641d122a2
python/cpython
python__cpython-134120
# Segfault from template string iterator # Crash report ### What happened? It's possible to segfault the interpreter by repeatedly calling `next()` on an exhausted template string iterator: ```python template_iter = iter(t"{1}") next(template_iter) try: next(template_iter) except StopIteration: pass next(template_iter) ``` Backtrace: ``` Program received signal SIGSEGV, Segmentation fault. 0x0000555555b5bf79 in _Py_TYPE (ob=0x0) at ./Include/object.h:270 270 return ob->ob_type; #0 0x0000555555b5bf79 in _Py_TYPE (ob=0x0) at ./Include/object.h:270 #1 PyUnicode_GET_LENGTH (op=0x0) at ./Include/cpython/unicodeobject.h:299 #2 templateiter_next (op=<string.templatelib.TemplateIter at remote 0x7fffb5084ed0>) at Objects/templateobject.c:26 #3 0x0000555555d4a21d in builtin_next (self=<optimized out>, args=0x7fffffffafc8, nargs=1) at Python/bltinmodule.c:1644 #4 0x0000555555ad51e6 in cfunction_vectorcall_FASTCALL (func=<built-in method next of module object at remote 0x7fffb425c8e0>, args=0x7fffffffafc8, nargsf=<optimized out>, kwnames=<optimized out>) at Objects/methodobject.c:450 #5 0x00005555559956f0 in _PyObject_VectorcallTstate (tstate=0x555556755e40 <_PyRuntime+362048>, callable=<built-in method next of module object at remote 0x7fffb425c8e0>, args=0x7fffffffafc8, nargsf=9223372036854775809, kwnames=0x0) at ./Include/internal/pycore_call.h:169 #6 0x000055555599584b in PyObject_Vectorcall (callable=callable@entry=<built-in method next of module object at remote 0x7fffb425c8e0>, args=args@entry=0x7fffffffafc8, nargsf=<optimized out>, kwnames=kwnames@entry=0x0) at Objects/call.c:327 #7 0x0000555555d870fb in _PyEval_EvalFrameDefault (tstate=tstate@entry=0x555556755e40 <_PyRuntime+362048>, frame=frame@entry=0x629000005840, throwflag=throwflag@entry=0) at Python/generated_cases.c.h:1619 #8 0x0000555555de915d in _PyEval_EvalFrame (throwflag=0, frame=0x629000005840, tstate=0x555556755e40 <_PyRuntime+362048>) at ./Include/internal/pycore_ceval.h:119 ``` Found using 
[fusil](https://github.com/devdanzin/fusil) by @vstinner. ### CPython versions tested on: CPython main branch ### Operating systems tested on: Linux ### Output from running 'python -VV' on the command line: Python 3.15.0a0 (heads/main-dirty:fe9f6e829a5, May 13 2025, 11:40:11) [GCC 11.4.0] on linux <!-- gh-linked-prs --> ### Linked PRs * gh-134120 * gh-134153 <!-- /gh-linked-prs -->
fc7f4c36664314393bd4c30355e21bd7aeac524d
84914ad0e5f96f0ca7238f3b4bc7fc4e50b1abb3
python/cpython
python__cpython-134118
# Confusion with "immutable" and "hashable" in Design and History FAQ The page https://docs.python.org/3/faq/design.html#why-are-there-separate-tuple-and-list-data-types says at the end of its last paragraph: > Only **immutable** elements can be used as **dictionary keys.**.. This is not quite correct, since dictionaries only require their keys to be **hashable** (see https://docs.python.org/3/library/stdtypes.html#mapping-types-dict). Immutability doesn't necessarily imply hashability. Thanks. <!-- gh-linked-prs --> ### Linked PRs * gh-134118 <!-- /gh-linked-prs -->
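The distinction is easy to demonstrate in both directions: hashability neither requires nor follows from immutability.

```python
# A plain class instance is mutable, yet hashable by default (identity
# hash), so it works fine as a dictionary key:
class Point:
    def __init__(self, x):
        self.x = x

p = Point(1)
d = {p: "value"}
p.x = 2                   # mutate the key in place
assert d[p] == "value"    # still found: the hash is identity-based

# Conversely, a tuple is immutable, but unhashable if it holds a list:
try:
    {(1, [2]): "value"}
    raised = False
except TypeError:
    raised = True
assert raised
```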
b1c33294ca5ef8d889f2ec87f9a8fa79741f9f7d
53da1e8c8ccbe3161ebc42e8b8b7ebd1ab70e05b
python/cpython
python__cpython-134110
# Comments incorrectly shown in pydoc output for argparse # Bug report `pydoc` incorrectly shows some comments for `argparse`: ``` | format_help(self) | # ======================= | # Help-formatting methods | # ======================= | | start_section(self, heading) | # ======================== | # Message building methods | # ======================== | ``` Comments immediately preceding the object's source code are used if the object has no docstring. But these comments do not describe the following methods. They are just section headers, separating groups of methods. They should be separated by an empty line from the following code. <!-- gh-linked-prs --> ### Linked PRs * gh-134110 * gh-134112 * gh-134113 <!-- /gh-linked-prs -->
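The fallback happens because `pydoc.getdoc` tries `inspect.getcomments` when an object has no docstring, and `inspect.getcomments` returns whatever comment block immediately precedes the definition; a separating blank line breaks that adjacency. A sketch of both cases, using a throwaway module written to a temp file (`demo_mod` is a made-up name):

```python
import importlib.util
import inspect
import os
import tempfile
import textwrap

src = textwrap.dedent('''\
    # =======================
    # Help-formatting methods
    # =======================
    def format_help():
        pass

    # ========================
    # Message building methods
    # ========================

    def start_section(heading):
        pass
''')

with tempfile.TemporaryDirectory() as td:
    path = os.path.join(td, "demo_mod.py")
    with open(path, "w") as f:
        f.write(src)
    spec = importlib.util.spec_from_file_location("demo_mod", path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)

    # An adjacent comment block is picked up as documentation...
    comments_fh = inspect.getcomments(mod.format_help)
    # ...while a separating blank line suppresses it.
    comments_ss = inspect.getcomments(mod.start_section)

assert "Help-formatting" in comments_fh
assert comments_ss is None
```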
71cf4dd622832848cace358a7f8444243afd2e83
ea2d707bd59963bd4f53407108026930ff12ae56
python/cpython
python__cpython-134117
# Use-After-Free in `PyImport_ImportModuleLevelObject` # Bug report ### Bug description: If you try to import something with a level >= 1 and it somehow fails to put it into sys.modules after importing, you'll get a nice error message letting you know. https://github.com/python/cpython/blob/d94b1e9cac82143048031530e6c51e59f597bccd/Python/import.c#L3857-L3863 However, this error message uses `to_return` which was freed a couple of lines before. Because it's used just after being freed, you can't do anything too malicious with it, but you can crash python by allocating a large enough string and having it be unmapped after being freed so that it's invalid memory when it's accessed. (No crash but triggers ASAN with use-after-free) ```python import sys sys.modules = {f"a.b.c": {}} __import__(f"b.c", {"__package__": "a"}, level=1) ``` (Crash) ```python import sys loooong = "".ljust(0x100000, "b") sys.modules = {f"a.{loooong}.c": {}} __import__(f"{loooong}.c", {"__package__": "a"}, level=1) ``` Fix is to have the decref after it makes the error message. ### CPython versions tested on: 3.12, 3.13, 3.14 ### Operating systems tested on: Windows, Linux <!-- gh-linked-prs --> ### Linked PRs * gh-134117 * gh-134171 * gh-134172 <!-- /gh-linked-prs -->
4e9005d32ff466925f40af410f2ea6bf2329bcf8
fa4e088668d4a41f9be5babe7edd5409290ee92a
python/cpython
python__cpython-134099
# SimpleHTTPRequestHandler incorrectly handles trailing escaped slash # Bug report For a path to a directory that does not end with a slash ("/"), SimpleHTTPRequestHandler returns status MOVED_PERMANENTLY with a redirection to a new path that ends with a slash. Slashes can be percent-encoded, although this is not necessary. But the code that checks for the trailing slash does not take this into account (in two places). <!-- gh-linked-prs --> ### Linked PRs * gh-134099 * gh-134123 * gh-134124 <!-- /gh-linked-prs -->
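The mismatch is a one-liner to demonstrate: the handler tests the raw request path for a trailing "/", but the client may have sent that slash percent-encoded.

```python
from urllib.parse import unquote

raw_path = "/somedir%2F"                  # '%2F' is an escaped '/'
assert not raw_path.endswith("/")         # the raw check misses it
assert unquote(raw_path).endswith("/")    # the decoded path ends with '/'
```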
2f1ecb3bc474a5895dce090cca7b8afe7b560040
fcaf009907fc39d604907315155c1f1de811dd88
python/cpython
python__cpython-134136
# `-X showrefcount` doesn't do per-statement reports in new REPL # Bug report ### Bug description: `-X showrefcount` [should](https://docs.python.org/3/using/cmdline.html#cmdoption-X) “output the total reference count and number of used memory blocks [...] after each statement in the interactive interpreter. This only works on debug builds.” This works with `PYTHON_BASIC_REPL=1`, but with the new REPL the totals are only shown at exit. ### CPython versions tested on: 3.13, 3.14, CPython main branch ### Operating systems tested on: Linux <!-- gh-linked-prs --> ### Linked PRs * gh-134136 * gh-134220 * gh-134221 <!-- /gh-linked-prs -->
c31547a5914db93b8b38c6a5261ef716255f3582
44b73d3cd4466e148460883acf4494124eae8c91
python/cpython
python__cpython-134178
# Remove C variadic constructor signature support for `threading.RLock` # Feature or enhancement Passing any arguments has been deprecated since Python 3.14, as the Python version does not permit any arguments, but the C version allows any number of positional or keyword arguments, ignoring every argument. <!-- gh-linked-prs --> ### Linked PRs * gh-134178 <!-- /gh-linked-prs -->
d6dc33ed8086fbd79f6adaeac4e329f29a13f834
9983c7d4416cac8deb2fded1ec9c7daf786c3a02
python/cpython
python__cpython-134027
# Unable to build if the target macOS version is <10.9 # Bug report ### Bug description: There is an error building the [`main`](https://github.com/python/cpython/tree/main) branch on macOS when targeting versions <10.9 or using outdated SDKs: ``` /usr/bin/clang -std=gnu11 -c -I./Modules/_hacl -I./Modules/_hacl/include -D_BSD_SOURCE -D_DEFAULT_SOURCE -fno-strict-overflow -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -O3 -Wall -pipe -Os -arch x86_64 -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I./Include/internal/mimalloc -I. -I./Include -I/opt/local/include -o Modules/_hacl/Lib_Memzero0.o ./Modules/_hacl/Lib_Memzero0.c ./Modules/_hacl/Lib_Memzero0.c:59:5: error: implicit declaration of function 'memset_s' is invalid in C99 [-Werror,-Wimplicit-function-declaration] memset_s(dst, len_, 0, len_); ^ ./Modules/_hacl/Lib_Memzero0.c:59:5: note: did you mean 'memset'? /usr/include/string.h:84:7: note: 'memset' declared here void *memset(void *, int, size_t); ^ 1 error generated. ``` Even though Python officially supports macOS >10.13 the bug has already been fixed in the HACL* upstream repository: https://github.com/hacl-star/hacl-star/pull/1042 ### CPython versions tested on: CPython main branch ### Operating systems tested on: macOS <!-- gh-linked-prs --> ### Linked PRs * gh-134027 * gh-134084 <!-- /gh-linked-prs -->
1566c34dc76ec6139e6827fbab6d76e084a63d9d
7a504b3d5da98874536834481539c19ba4a265af
python/cpython
python__cpython-134067
# Segfault calling `sys.remote_exec(0, None)` # Crash report ### What happened? The following code segfaults ASAN builds and aborts/segfaults on non-ASAN debug builds on main: ```python import sys sys.remote_exec(0, None) ``` Backtrace ASAN debug free-threading: ```shell Program received signal SIGSEGV, Segmentation fault. 0x0000555555f86705 in _Py_TYPE (ob=0x0) at ./Include/object.h:270 270 return ob->ob_type; #0 0x0000555555f86705 in _Py_TYPE (ob=0x0) at ./Include/object.h:270 #1 PyBytes_AS_STRING (op=0x0) at ./Include/cpython/bytesobject.h:25 #2 sys_remote_exec_impl (module=module@entry=<module at remote 0x7fffb4259d20>, pid=pid@entry=0, script=<optimized out>) at ./Python/sysmodule.c:2491 #3 0x0000555555f86c65 in sys_remote_exec (module=<optimized out>, args=0x7fffffffafc8, nargs=<optimized out>, kwnames=0x0) at ./Python/clinic/sysmodule.c.h:1614 #4 0x0000555555ad4921 in cfunction_vectorcall_FASTCALL_KEYWORDS (func=<built-in method remote_exec of module object at remote 0x7fffb4259d20>, args=0x7fffffffafc8, nargsf=<optimized out>, kwnames=0x0) at Objects/methodobject.c:466 #5 0x00005555559956f0 in _PyObject_VectorcallTstate (tstate=0x555556755e40 <_PyRuntime+362048>, callable=<built-in method remote_exec of module object at remote 0x7fffb4259d20>, args=0x7fffffffafc8, nargsf=9223372036854775810, kwnames=0x0) at ./Include/internal/pycore_call.h:169 #6 0x000055555599584b in PyObject_Vectorcall (callable=callable@entry=<built-in method remote_exec of module object at remote 0x7fffb4259d20>, args=args@entry=0x7fffffffafc8, nargsf=<optimized out>, kwnames=kwnames@entry=0x0) at Objects/call.c:327 #7 0x0000555555d870fb in _PyEval_EvalFrameDefault (tstate=tstate@entry=0x555556755e40 <_PyRuntime+362048>, frame=frame@entry=0x629000005840, throwflag=throwflag@entry=0) at Python/generated_cases.c.h:1619 #8 0x0000555555de915d in _PyEval_EvalFrame (throwflag=0, frame=0x629000005840, tstate=0x555556755e40 <_PyRuntime+362048>) at ./Include/internal/pycore_ceval.h:119 ``` 
Backtrace ASAN release gilful: ```shell Program received signal SIGSEGV, Segmentation fault. 0x0000555555977807 in _PyFreeList_PopNoStats (fl=<optimized out>) at ./Include/internal/pycore_freelist.h:79 79 fl->freelist = *(void **)obj; #0 0x0000555555977807 in _PyFreeList_PopNoStats (fl=<optimized out>) at ./Include/internal/pycore_freelist.h:79 #1 clear_freelist (dofree=<optimized out>, is_finalization=<optimized out>, freelist=<optimized out>) at Objects/object.c:903 #2 _PyObject_ClearFreeLists (freelists=0x555556276fc8 <_PyRuntime+101256>, is_finalization=is_finalization@entry=0) at Objects/object.c:950 #3 0x0000555555c45792 in _PyGC_ClearAllFreeLists (interp=<optimized out>) at Python/gc_gil.c:14 #4 0x0000555555c3f950 in gc_collect_full (stats=0x7fffffffd730, tstate=0x5555562ab2f8 <_PyRuntime+315064>) at Python/gc.c:1686 #5 _PyGC_Collect (tstate=<optimized out>, generation=generation@entry=2, reason=reason@entry=_Py_GC_REASON_SHUTDOWN) at Python/gc.c:2041 #6 0x0000555555c43c1e in _PyGC_CollectNoFail (tstate=tstate@entry=0x5555562ab2f8 <_PyRuntime+315064>) at Python/gc.c:2082 #7 0x0000555555ceb5fa in interpreter_clear (interp=<optimized out>, tstate=tstate@entry=0x5555562ab2f8 <_PyRuntime+315064>) at Python/pystate.c:920 #8 0x0000555555cec231 in _PyInterpreterState_Clear (tstate=tstate@entry=0x5555562ab2f8 <_PyRuntime+315064>) at Python/pystate.c:994 #9 0x0000555555cd843a in finalize_interp_clear (tstate=0x5555562ab2f8 <_PyRuntime+315064>) at Python/pylifecycle.c:1904 #10 0x0000555555ce27fb in _Py_Finalize (runtime=0x55555625e440 <_PyRuntime>) at Python/pylifecycle.c:2210 #11 0x0000555555ce2dbd in _Py_Finalize (runtime=0x55555625e440 <_PyRuntime>) at Python/pylifecycle.c:2252 #12 0x0000555555d74884 in Py_RunMain () at Modules/main.c:769 #13 pymain_main (args=0x7fffffffdc90) at Modules/main.c:797 #14 Py_BytesMain (argc=<optimized out>, argv=<optimized out>) at Modules/main.c:821 #15 0x00007ffff72ded90 in __libc_start_call_main (main=main@entry=0x5555556fd530 
<main>, argc=argc@entry=2, argv=argv@entry=0x7fffffffdeb8) at ../sysdeps/nptl/libc_start_call_main.h:58 #16 0x00007ffff72dee40 in __libc_start_main_impl (main=0x5555556fd530 <main>, argc=2, argv=0x7fffffffdeb8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffdea8) at ../csu/libc-start.c:392 #17 0x000055555573f855 in _start () ``` Backtrace debug: ```shell ./Python/sysmodule.c:2542: _Py_NegativeRefcount: Assertion failed: object has negative ref count Enable tracemalloc to get the memory block allocation traceback object address : 0x7fffffffd4b0 object refcount : 4294956288 object type : 0x55555591a387 Program received signal SIGSEGV, Segmentation fault. __strlen_avx2 () at ../sysdeps/x86_64/multiarch/strlen-avx2.S:74 74 ../sysdeps/x86_64/multiarch/strlen-avx2.S: No such file or directory. #0 __strlen_avx2 () at ../sysdeps/x86_64/multiarch/strlen-avx2.S:74 #1 0x00007ffff7d14d31 in __vfprintf_internal (s=s@entry=0x7fffffffb200, format=0x555555a0d163 "object type name: %s\n", ap=0x7fffffffd340, mode_flags=2) at ./stdio-common/vfprintf-internal.c:1517 #2 0x00007ffff7d15665 in buffered_vfprintf (s=0x7ffff7eb96a0 <_IO_2_1_stderr_>, format=format@entry=0x555555a0d163 "object type name: %s\n", args=args@entry=0x7fffffffd340, mode_flags=mode_flags@entry=2) at ./stdio-common/vfprintf-internal.c:2261 #3 0x00007ffff7d1465e in __vfprintf_internal (s=<optimized out>, format=0x555555a0d163 "object type name: %s\n", ap=ap@entry=0x7fffffffd340, mode_flags=mode_flags@entry=2) at ./stdio-common/vfprintf-internal.c:1236 #4 0x00007ffff7dd2d13 in ___fprintf_chk (fp=<optimized out>, flag=flag@entry=1, format=format@entry=0x555555a0d163 "object type name: %s\n") at ./debug/fprintf_chk.c:33 #5 0x000055555570cc1f in fprintf (__fmt=0x555555a0d163 "object type name: %s\n", __stream=<optimized out>) at /usr/include/x86_64-linux-gnu/bits/stdio2.h:105 #6 _PyObject_Dump (op=op@entry=<unknown at remote 0x7fffffffd4b0>) at Objects/object.c:733 #7 
0x000055555570ce97 in _PyObject_AssertFailed (obj=obj@entry=<unknown at remote 0x7fffffffd4b0>, expr=expr@entry=0x0, msg=msg@entry=0x555555a0d1ee "object has negative ref count", file=file@entry=0x555555a855a7 "./Python/sysmodule.c", line=line@entry=2542, function=function@entry=0x555555a0d9f0 <__func__.63> "_Py_NegativeRefcount") at Objects/object.c:3061 #8 0x000055555570cfa3 in _Py_NegativeRefcount (filename=filename@entry=0x555555a855a7 "./Python/sysmodule.c", lineno=lineno@entry=2542, op=op@entry=<unknown at remote 0x7fffffffd4b0>) at Objects/object.c:272 #9 0x000055555591a290 in Py_DECREF (op=<unknown at remote 0x7fffffffd4b0>, lineno=2542, filename=0x555555a855a7 "./Python/sysmodule.c") at ./Include/refcount.h:407 #10 sys_remote_exec_impl (module=module@entry=<module at remote 0x7ffff7bec0b0>, pid=pid@entry=0, script=<optimized out>) at ./Python/sysmodule.c:2542 #11 0x000055555591a387 in sys_remote_exec (module=<module at remote 0x7ffff7bec0b0>, args=0x7fffffffd778, nargs=<optimized out>, kwnames=<optimized out>) at ./Python/clinic/sysmodule.c.h:1614 #12 0x0000555555704e2e in cfunction_vectorcall_FASTCALL_KEYWORDS (func=<built-in method remote_exec of module object at remote 0x7ffff7bec0b0>, args=0x7fffffffd778, nargsf=<optimized out>, kwnames=0x0) at Objects/methodobject.c:466 #13 0x00005555556840c2 in _PyObject_VectorcallTstate (tstate=0x555555c8e4a8 <_PyRuntime+331080>, callable=<built-in method remote_exec of module object at remote 0x7ffff7bec0b0>, args=0x7fffffffd778, nargsf=9223372036854775810, kwnames=0x0) at ./Include/internal/pycore_call.h:169 #14 0x00005555556841e1 in PyObject_Vectorcall (callable=callable@entry=<built-in method remote_exec of module object at remote 0x7ffff7bec0b0>, args=args@entry=0x7fffffffd778, nargsf=<optimized out>, kwnames=kwnames@entry=0x0) at Objects/call.c:327 #15 0x00005555558429a2 in _PyEval_EvalFrameDefault (tstate=tstate@entry=0x555555c8e4a8 <_PyRuntime+331080>, frame=frame@entry=0x7ffff7fb0020, 
throwflag=throwflag@entry=0) at Python/generated_cases.c.h:1619 #16 0x000055555586f62f in _PyEval_EvalFrame (throwflag=0, frame=0x7ffff7fb0020, tstate=0x555555c8e4a8 <_PyRuntime+331080>) at ./Include/internal/pycore_ceval.h:119 ``` Found using [fusil](https://github.com/devdanzin/fusil) by @vstinner. ### CPython versions tested on: CPython main branch ### Operating systems tested on: Linux ### Output from running 'python -VV' on the command line: Python 3.15.0a0 (heads/main:52a7a22a6b8, May 15 2025, 16:27:02) [GCC 11.4.0] <!-- gh-linked-prs --> ### Linked PRs * gh-134067 * gh-134162 <!-- /gh-linked-prs -->
009e7b36981fd07f7cca1fdcfcf172ce1584fac7
fc7f4c36664314393bd4c30355e21bd7aeac524d
python/cpython
python__cpython-134063
# Excessive hash collisions in IPv4Network and IPv6Network classes

# Bug report

### Bug description:

While working with `IPv4Network` objects, @cakekoa and I discovered that hash collisions were common. Below is a small script that shows a few examples of different networks that have hash collisions.

```python
from ipaddress import IPv4Network, IPv6Network


def test_hash_collision(network_1, network_2):
    # Shows that the networks are not equivalent.
    assert network_1 != network_2
    assert network_1.num_addresses != network_2.num_addresses
    # Shows a hash collision similar to CVE-2020-14422
    assert hash(network_1) == hash(network_2)


test_hash_collision(IPv4Network("192.168.1.255/32"), IPv4Network("192.168.1.0/24"))
test_hash_collision(IPv4Network("172.24.255.0/24"), IPv4Network("172.24.0.0/16"))
test_hash_collision(IPv4Network("192.168.1.87/32"), IPv4Network("192.168.1.86/31"))
test_hash_collision(
    IPv4Network("10.0.0.0/8"), IPv6Network("ffff:ffff:ffff:ffff:ffff:ffff:aff:0/112")
)
```

Upon investigating, we discovered [CVE-2020-14422](https://github.com/python/cpython/pull/21033), which fixed a similar (albeit much more severe) hash collision in the `IPv4Interface` and `IPv6Interface` classes. This CVE was fixed in b98e7790c77a4378ec4b1c71b84138cb930b69b7.
The implementation of `_BaseNetwork.__hash__()` looks like this on the main branch:

```python
def __hash__(self):
    return hash(int(self.network_address) ^ int(self.netmask))
```

Based on the [fix for CVE-2020-14422](https://github.com/python/cpython/commit/b98e7790c77a4378ec4b1c71b84138cb930b69b7), the fix for the `_BaseNetwork` class would likely look like:

```diff
diff --git a/Lib/ipaddress.py b/Lib/ipaddress.py
index 703fa289dda..d8a84f33264 100644
--- a/Lib/ipaddress.py
+++ b/Lib/ipaddress.py
@@ -729,7 +729,7 @@ def __eq__(self, other):
         return NotImplemented

     def __hash__(self):
-        return hash(int(self.network_address) ^ int(self.netmask))
+        return hash((int(self.network_address), int(self.netmask)))

     def __contains__(self, other):
         # always false if one is v4 and the other is v6.
```

As this method produces far fewer collisions than what caused CVE-2020-14422, the security impact is likely negligible. @sethmlarson from the PSRT team has given us the green light to publicly submit a fix.

### CPython versions tested on:
3.12

### Operating systems tested on:
Linux, macOS

<!-- gh-linked-prs -->
### Linked PRs
* gh-134063
* gh-134476
* gh-134477
* gh-134478
* gh-134479
* gh-134480
* gh-134481
<!-- /gh-linked-prs -->
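To make the collision mechanism concrete, here is a small sketch comparing the XOR-based scheme with the proposed tuple-based one (the helper names `xor_hash` and `tuple_hash` are illustrative, not part of `ipaddress`). XOR of a host route's address with an all-ones mask produces the same value as XOR of the covering subnet's address with its shorter mask, while hashing the pair keeps the two inputs distinguishable:

```python
from ipaddress import IPv4Network


def xor_hash(net):
    # The current scheme: addresses and masks can cancel out.
    return hash(int(net.network_address) ^ int(net.netmask))


def tuple_hash(net):
    # The proposed scheme: both inputs contribute independently.
    return hash((int(net.network_address), int(net.netmask)))


a = IPv4Network("192.168.1.255/32")
b = IPv4Network("192.168.1.0/24")

print(xor_hash(a) == xor_hash(b))      # → True  (collision)
print(tuple_hash(a) == tuple_hash(b))  # → False
```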
f3fc0c16e08b317cb201cf1073e934e6909f1251
518c95b5529ed3379b5a3065b09f71411efe72fb
python/cpython
python__cpython-134044
# use stackrefs in `_PyObject_GetMethod` and calling APIs

# Feature or enhancement

Currently the calling APIs such as `PyObject_VectorcallMethod` use `_PyObject_GetMethod` to avoid creating a bound method object; however, `_PyObject_GetMethod` increfs and decrefs the object even when the underlying object supports deferred reference counting. This leads to reference counting contention on the object, and it doesn't scale well in free threading. This API is heavily used by modules such as asyncio, so it is important that `_PyObject_GetMethod` scale well with threads.

https://github.com/python/cpython/blob/54a6875adbc9091909b1473c49595c0cc84dc438/Objects/call.c#L829-L859

Proposal: To take advantage of deferred reference counting, we should add a stack ref variant of `_PyObject_GetMethod` and use it in all calling APIs to avoid reference counting contention on the function object.

Implementation:
- The new stack ref variant `_PyObject_GetMethodStackRef` will use `_PyType_LookupStackRefAndVersion` when looking up a method from the type cache.
- A new stack ref variant, `_PyObject_TryGetInstanceAttributeStackRef`, will be added to be used with `_PyObject_GetMethodStackRef`.
- Possibly we can avoid the incref and decref when looking up the attribute on the instance dict by delaying freeing of the object's dictionary and using `_Py_dict_lookup_threadsafe_stackref`.

https://github.com/python/cpython/blob/54a6875adbc9091909b1473c49595c0cc84dc438/Objects/object.c#L1632-L1640

<!-- gh-linked-prs -->
### Linked PRs
* gh-134044
* gh-136356
* gh-136412
<!-- /gh-linked-prs -->
a380d578737be1cd51e1d1be2b83bbc0b0619e7e
3f9eb55e090a8de80503e565f508f341c5f4c8da
python/cpython
python__cpython-134042
# Some of the newly added path apis in _winapi aren't available on all win api partitions # Bug report ### Bug description: The following apis are missing in some win api partitions (mostly gaming or mobile): - GetLongPathNameW - GetShortPathNameW - NeedCurrentDirectoryForExePathW ### CPython versions tested on: 3.13 ### Operating systems tested on: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-134042 <!-- /gh-linked-prs -->
1c4b34c6cb851334c813647a2a7db77ecee20b7a
43410953c26153b48554218cbe35ba5f1167b08e
python/cpython
python__cpython-134077
# Improve `raise from ValueError()` error message

# Feature or enhancement

https://github.com/python/cpython/issues/133999 gave me an idea. Right now it is:

<img width="788" alt="Image" src="https://github.com/user-attachments/assets/59cf42f5-3f51-495f-ac49-7dc6baab401f" />

I suggest making it something similar to:

```python
>>> raise from ValueError
  File "<python-input-1>", line 1
    raise from ValueError
          ^^^^
SyntaxError: invalid syntax, maybe you forgot an exception after `raise` keyword?
```

Wording is not the best at the moment, but we will figure it out :)

<!-- gh-linked-prs -->
### Linked PRs
* gh-134077
* gh-135204
<!-- /gh-linked-prs -->
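The current behavior can be reproduced outside the REPL with `compile()`; this sketch only relies on the exception type, since the exact message text varies between CPython versions:

```python
# The statement `raise from <expr>` is rejected at compile time.
source = "raise from ValueError"
try:
    compile(source, "<demo>", "exec")
    outcome = "compiled"
except SyntaxError:
    outcome = "SyntaxError"

print(outcome)  # → SyntaxError
```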
0d9ccc87a2198a0c1881ab4b17a24fc7fec62418
a7d41e8aab5211f4ed7f636c41d63adcab0affba
python/cpython
python__cpython-134034
# Replace "starred_list" with standard grammar term "expression_list" in for statement documentation

# Documentation

In the [compound statements docs](https://docs.python.org/3/reference/compound_stmts.html#the-for-statement), section 8.3, I found:

> for_stmt ::= "for" [target_list](https://docs.python.org/3/reference/simple_stmts.html#grammar-token-python-grammar-target_list) "in" starred_list ":" [suite](https://docs.python.org/3/reference/compound_stmts.html#grammar-token-python-grammar-suite) ["else" ":" [suite](https://docs.python.org/3/reference/compound_stmts.html#grammar-token-python-grammar-suite)]

However, starred_list does not appear to be a defined production rule elsewhere in the language reference. It seems to act more like a semantic placeholder to indicate that starred_expressions (like *x) are allowed in the iterable. Since the real grammar uses expression_list — which already includes starred_expression — would it be more consistent and accurate to write this instead?:

> for_stmt ::= "for" target_list "in" expression_list ":" suite ["else" ":" suite]

…and then note in the accompanying text that starred_expression is supported? This might reduce confusion for readers trying to understand or implement the formal grammar.

Can I work on that?

<!-- gh-linked-prs -->
### Linked PRs
* gh-134034
* gh-134424
<!-- /gh-linked-prs -->
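To illustrate what the production has to admit: unparenthesized starred expressions have been allowed in a `for` iterable since Python 3.9, where the iterable is evaluated as a tuple. A minimal sketch:

```python
a, b = [1, 2], [3]

items = []
for x in *a, *b:  # the iterable is the tuple (1, 2, 3)
    items.append(x)

print(items)  # → [1, 2, 3]
```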
4eacf3883dd041c31133ea407204b797a17559b1
c7364f79b2fb01c251e22115875a46a2ec134dcd
python/cpython
python__cpython-134365
# Expose `PyMutex_IsLocked` in the public C API

# Feature or enhancement

### Proposal:

In my C extension, I wanted to write:

```c
assert(PyMutex_IsLocked(&mutex));
```

to verify that my own code is maintaining its own locking invariants. `PyMutex_IsLocked` exists, but is private:

https://github.com/python/cpython/blob/0afbd4e42ac28240b484cabe1710fdb6c572fb1f/Include/internal/pycore_lock.h#L30

Could we make it public and document it, please?

### Has this already been discussed elsewhere?

This is a minor feature, which does not need previous discussion elsewhere

### Links to previous discussion of this feature:

_No response_

<!-- gh-linked-prs -->
### Linked PRs
* gh-134365
* gh-136971
<!-- /gh-linked-prs -->
f41e9c750e6971c165e055374a1014d6afd2d50e
86c3316183a79867e3c666d0830f897e16f0f339
python/cpython
python__cpython-134028
# DBM Module Vacuuming

# Feature or enhancement

### Proposal:

Good afternoon,

The `dbm` module, and by extension `shelve` as well, don't provide any way to reclaim free space when lots of deletions from the database happen. This applies to all of the `dbm` submodules (`dbm.dumb`, `dbm.sqlite`, `dbm.ndbm`, `dbm.gnu`). This can lead to hundreds of GB of wasted space when using them to store complex objects, such as when using them as a persistent cache.

Most of the underlying libraries, however, support ways to reclaim space on demand:
- `VACUUM` in sqlite3
- `gdbm_reorganize` for gnu
- None for ndbm
- None for dumb (but this is simple to implement and I would be happy to contribute: copy the used parts of the binary file in place and update the index. The advantage is that this won't use more disk space while vacuuming, but if the program is interrupted during a vacuum, the DB will be corrupted (note: this is the case for many `dbm.dumb` operations already))

Additionally, I would like to update the documentation to highlight the disadvantages of `dbm.dumb`. For now they are only comments in the source code and are hidden from developers reading the doc:
- Lack of support for any concurrency
- Slowness linearly proportional to index size
- (This will hopefully be fixed by the PR, so it won't be included, but otherwise also) never reclaims the space of deleted items

### Has this already been discussed elsewhere?

I have already discussed this feature proposal on Discourse

### Links to previous discussion of this feature:

https://discuss.python.org/t/dbm-module-add-vacuuming/91507

<!-- gh-linked-prs -->
### Linked PRs
* gh-134028
<!-- /gh-linked-prs -->
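As a sketch of what a vacuum method for the sqlite backend might do under the hood, here is `VACUUM` demonstrated with the `sqlite3` module directly (`dbm` exposes no such API today; the table name and data sizes are illustrative). After deleting every row, the file keeps its size until `VACUUM` rewrites it:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "cache.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v BLOB)")
conn.executemany("INSERT INTO kv (v) VALUES (?)",
                 [(b"x" * 1024,) for _ in range(1000)])
conn.commit()

# Deleting rows frees pages inside the file, but the file does not shrink.
conn.execute("DELETE FROM kv")
conn.commit()
before = os.path.getsize(path)

# VACUUM rewrites the database, returning the freed pages to the filesystem.
conn.execute("VACUUM")
after = os.path.getsize(path)
conn.close()

print(before > after)  # → True
```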
f806463e16428ea4b379bf547bafa11f43a480ef
b5952371668089299bc8472c1adb9f8a0e69b4a2
python/cpython
python__cpython-134035
# Incorrect syntax error

# Bug report

### Bug description:

```python
>>> try: 1/0
... except Exception as exc: raise from exc
...
  File "<python-input-0>", line 2
    except Exception as exc: raise from exc
                        ^^^
SyntaxError: cannot use except statement with name
```

Should be:

```py
>>> try: 1/0
... except: raise from ValueError()
...
  File "<python-input-2>", line 2
    except: raise from ValueError()
                  ^^^^
SyntaxError: invalid syntax
```

### CPython versions tested on:
CPython main branch, 3.14

### Operating systems tested on:
Linux

<!-- gh-linked-prs -->
### Linked PRs
* gh-134035
* gh-134206
<!-- /gh-linked-prs -->
84914ad0e5f96f0ca7238f3b4bc7fc4e50b1abb3
7a9d46295a497669eaa6e647c33ab71c8cf620a1
python/cpython
python__cpython-133987
# string split documentation unclear

# Documentation

If *sep* is not specified or is ``None`` and *maxsplit* is ``0``, only leading runs of consecutive whitespace are considered. This is contrary to (or, at least, underspecified in) the current documentation.

<!-- gh-linked-prs -->
### Linked PRs
* gh-133987
* gh-133992
* gh-133993
<!-- /gh-linked-prs -->
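A minimal sketch of the behavior being described: with whitespace splitting, `maxsplit=0` consumes the leading whitespace run but performs no splits, so internal and trailing whitespace survive in the single result element:

```python
text = "  a b  "

# sep=None with maxsplit=0: only the leading whitespace run is stripped.
print(text.split(maxsplit=0))  # → ['a b  ']

# For comparison, an unlimited whitespace split drops every run.
print(text.split())  # → ['a', 'b']
```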
3e23047363f384b7254b7af51afe4e353be94167
6df39765e652ba06267f8e7abae431c37194a23e
python/cpython
python__cpython-133988
# Some operations on managed dict are not safe from memory_order POV

# Bug report

### Bug description:

First of all, I'm not a threading expert and my understanding of the memory-ordering model may be wrong. So, if I'm wrong, I will be happy to fill the gaps in my knowledge.

I noticed some inconsistency (from my point of view) between the loads and stores of the managed dict pointer.

Non-atomic loads:
1. `PyObject_VisitManagedDict` https://github.com/python/cpython/blob/9ad0c7b0f14c5fcda6bfae6692c88abb95502d38/Objects/dictobject.c#L7202
2. `PyObject_ClearManagedDict` https://github.com/python/cpython/blob/9ad0c7b0f14c5fcda6bfae6692c88abb95502d38/Objects/dictobject.c#L7462
3. `_PyObject_GetDictPtr` https://github.com/python/cpython/blob/9ad0c7b0f14c5fcda6bfae6692c88abb95502d38/Objects/object.c#L1541

Non-atomic stores:
1. `_PyObject_InitInlineValues` https://github.com/python/cpython/blob/9ad0c7b0f14c5fcda6bfae6692c88abb95502d38/Objects/dictobject.c#L6787-L6791

IIUC, mixing non-atomic loads/stores with atomic ones may lead to data races.

`memory_order_acquire` loads:
1. `_PyObject_GetManagedDict` https://github.com/python/cpython/blob/9ad0c7b0f14c5fcda6bfae6692c88abb95502d38/Include/internal/pycore_object.h#L932-L936

`memory_order_release` stores:
1. `_PyObject_MaterializeManagedDict_LockHeld` https://github.com/python/cpython/blob/9ad0c7b0f14c5fcda6bfae6692c88abb95502d38/Objects/dictobject.c#L6827-L6829
2. `store_instance_attr_lock_held` https://github.com/python/cpython/blob/9ad0c7b0f14c5fcda6bfae6692c88abb95502d38/Objects/dictobject.c#L6925-L6927
3. `ensure_managed_dict` https://github.com/python/cpython/blob/9ad0c7b0f14c5fcda6bfae6692c88abb95502d38/Objects/dictobject.c#L7494-L7495

`memory_order_seq_cst` stores:
1. `set_dict_inline_values` https://github.com/python/cpython/blob/9ad0c7b0f14c5fcda6bfae6692c88abb95502d38/Objects/dictobject.c#L7225
2. `try_set_dict_inline_only_or_other_dict` https://github.com/python/cpython/blob/9ad0c7b0f14c5fcda6bfae6692c88abb95502d38/Objects/dictobject.c#L7252-L7253
3. `replace_dict_probably_inline_materialized` https://github.com/python/cpython/blob/9ad0c7b0f14c5fcda6bfae6692c88abb95502d38/Objects/dictobject.c#L7287-L7289

IIUC, mixing acquire/release with seq_cst may break the total order of seq_cst operations.

Mixing with `memory_order_seq_cst` stores:
1. `_PyObject_SetManagedDict` https://github.com/python/cpython/blob/9ad0c7b0f14c5fcda6bfae6692c88abb95502d38/Objects/dictobject.c#L7356-L7365

`_PyObject_SetManagedDict` uses a non-atomic load but stores with `seq_cst` mode, so it is OK (IIUC), but the following store without a fence may lead to a data race.

Are my findings valid or am I completely wrong?

cc @colesbury @kumaraditya303

### CPython versions tested on:
CPython main branch

### Operating systems tested on:
_No response_

<!-- gh-linked-prs -->
### Linked PRs
* gh-133988
* gh-134354
<!-- /gh-linked-prs -->
ec39fd2c20323ee9814a1137b1a0819e92efae4e
317c49622397222b7c7fb49837e6b1fd7e82a80d
python/cpython
python__cpython-133832
# Using the public PyUnicodeWriter C API made the json module slower

I modified the json module to replace the private _PyUnicodeWriter C API with the public PyUnicodeWriter C API:

* https://github.com/python/cpython/commit/6e63c4736beebdf912acd391fc437672ee9d362e
* https://github.com/python/cpython/commit/c914212474792312bb125211bae5719650fe2f58

Problem: it made the json module slower. Let's investigate what's going on.

<!-- gh-linked-prs -->
### Linked PRs
* gh-133832
* gh-133969
* gh-133971
* gh-133973
* gh-134974
* gh-135297
<!-- /gh-linked-prs -->
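A regression like this can be tracked from Python with a small `json.dumps` micro-benchmark; this is only a sketch (the payload shape and iteration count are arbitrary), run against each build being compared:

```python
import json
import timeit

# Illustrative JSON-heavy payload; any representative workload would do.
payload = {"key": "value", "numbers": list(range(100)), "nested": {"a": [1.5, 2.5]}}

seconds = timeit.timeit(lambda: json.dumps(payload), number=10_000)
print(f"10,000 dumps took {seconds:.3f}s")
```

Comparing the number printed by the builds before and after the PyUnicodeWriter change would quantify the slowdown.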
c81446af1dbf3c84bfd4ed604c245dd40463fd3a
379d0bc95646dfe923e7ea05fb7f1befbd85572d
python/cpython
python__cpython-135347
# Recording and restoring the LC_NUMERIC locale setting with LC_ALL=C.UTF-8 no longer works with Python 3.13.3 # Bug report ### Bug description: We have project where we record the current value of `locale.LC_NUMERIC`, change it to a different value to use a library that needs a particular locale setting, and then restore it to the original value. This stopped working with Python 3.13.3. I suspect this may be a bug, but please correct me if I misunderstood the functionality. Example with Python 3.13.3: ```python $ docker run -it python:3.13.3 Python 3.13.3 (main, May 9 2025, 23:49:05) [GCC 12.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import os, locale >>> os.environ environ({'PATH': '/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 'HOSTNAME': '2e43b79a7135', 'TERM': 'xterm', 'GPG_KEY': '7169605F62C751356D054A26A821E680E5FA6305', 'PYTHON_VERSION': '3.13.3', 'PYTHON_SHA256': '40f868bcbdeb8149a3149580bb9bfd407b3321cd48f0be631af955ac92c0e041', 'HOME': '/root', 'LC_CTYPE': 'C.UTF-8'}) >>> os.environ["LC_ALL"] = "C.UTF-8" >>> locale.setlocale(locale.LC_ALL, "") 'C.UTF-8' >>> locale.getlocale(locale.LC_NUMERIC) ('en_US', 'UTF-8') >>> locale.setlocale(locale.LC_NUMERIC, locale.getlocale(locale.LC_NUMERIC)) Traceback (most recent call last): File "<python-input-5>", line 1, in <module> locale.setlocale(locale.LC_NUMERIC, locale.getlocale(locale.LC_NUMERIC)) ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.13/locale.py", line 615, in setlocale return _setlocale(category, locale) locale.Error: unsupported locale setting ``` The same example works fine with earlier versions of Python, for example 3.13.2: ```python $ docker run -it python:3.13.2 Python 3.13.2 (main, Apr 8 2025, 04:27:11) [GCC 12.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. 
>>> import os, locale >>> os.environ environ({'PATH': '/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 'HOSTNAME': '64800af765e4', 'TERM': 'xterm', 'GPG_KEY': '7169605F62C751356D054A26A821E680E5FA6305', 'PYTHON_VERSION': '3.13.2', 'PYTHON_SHA256': 'd984bcc57cd67caab26f7def42e523b1c015bbc5dc07836cf4f0b63fa159eb56', 'HOME': '/root', 'LC_CTYPE': 'C.UTF-8'}) >>> os.environ["LC_ALL"] = "C.UTF-8" >>> locale.setlocale(locale.LC_ALL, "") 'C.UTF-8' >>> locale.getlocale(locale.LC_NUMERIC) ('C', 'UTF-8') >>> locale.setlocale(locale.LC_NUMERIC, locale.getlocale(locale.LC_NUMERIC)) 'C.UTF-8' ``` It looks like this may have been caused by the changes in https://github.com/python/cpython/pull/129647. ### CPython versions tested on: 3.13 ### Operating systems tested on: _No response_ <!-- gh-linked-prs --> ### Linked PRs * gh-135347 * gh-135349 * gh-135350 <!-- /gh-linked-prs -->
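A sidestep for the save/restore pattern described above: `locale.getlocale()` re-parses the setting into a `(language, encoding)` tuple that may not round-trip through `setlocale()`, but calling `setlocale()` with no second argument returns the current setting as the exact string the C library accepts. A sketch of that pattern (not a fix for the regression itself):

```python
import locale

# Querying with no locale argument returns the current setting verbatim,
# so it can always be passed back to restore it.
saved = locale.setlocale(locale.LC_NUMERIC)

try:
    # ... temporarily switch LC_NUMERIC here for the library that needs it ...
    pass
finally:
    locale.setlocale(locale.LC_NUMERIC, saved)  # restore the original setting
```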
0f866cbfefd797b4dae25962457c5579bb90dde5
ff2b5f40c2bf5c71255caac8a743c09ba0758c02
python/cpython
python__cpython-133961
# Simplify `typing.evaluate_forward_ref`

# Bug report

In PEP-749 I added `typing.evaluate_forward_ref` to replace the private `typing.ForwardRef._evaluate`, which is being used by some external users. The current [documentation](https://docs.python.org/3.14/library/typing.html#typing.evaluate_forward_ref) claims these differences from `annotationlib.ForwardRef.evaluate`:

1. Recursively evaluates forward references nested within the type hint.
2. Raises TypeError when it encounters certain objects that are not valid type hints.
3. Replaces type hints that evaluate to None with types.NoneType.
4. Supports the FORWARDREF and STRING formats.

(1) is useful and fits well with the typing module; annotationlib can't do this because it requires introspecting into typing-specific objects.

(2) I feel is not useful (compare #133959): the type check is not particularly thorough, and it's generally better for callers to allow more objects through that callers can handle on their own terms.

(3) is sort of harmless but not particularly useful.

(4) is not true any more since I also added support for these formats to `ForwardRef.evaluate`.

So I'd like to drop differences 2 through 4, leaving the function focused on recursively evaluating nested ForwardRefs.

<!-- gh-linked-prs -->
### Linked PRs
* gh-133961
* gh-134663
<!-- /gh-linked-prs -->
57fef27cfc2bdfc1e3a65ef8c8a760198d15b14d
b51b08a0a5fedde4f74e4cc338b8b5ad9656ad50