repo stringclasses 1 value | instance_id stringlengths 20 22 | problem_statement stringlengths 126 60.8k | merge_commit stringlengths 40 40 | base_commit stringlengths 40 40 |
|---|---|---|---|---|
python/cpython | python__cpython-126013 | # Add `__class_getitem__` support to `memoryview`
# Feature or enhancement
### Proposal:
`memoryview` was made generic in typeshed in https://github.com/python/typeshed/pull/12247. Mypy recently received a report (https://github.com/python/mypy/issues/18053) due to it not being subscriptable at runtime:
```python
>>> memoryview[int]
Traceback (most recent call last):
File "<python-input-0>", line 1, in <module>
memoryview[int]
~~~~~~~~~~^^^^^
TypeError: type 'memoryview' is not subscriptable
```
It seems that `memoryview` should be made subscriptable at runtime, like other standard library classes that are generic in typeshed.
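For comparison, here is how subscription already behaves for a builtin that is generic at runtime (`list`); after the fix, `memoryview[int]` would presumably return the same kind of `types.GenericAlias` object:

```python
import types

# list is subscriptable at runtime via __class_getitem__,
# which returns a types.GenericAlias carrying the origin and args
alias = list[int]
assert type(alias) is types.GenericAlias
assert alias.__origin__ is list
assert alias.__args__ == (int,)
print(alias)  # list[int]
```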
Happy to make a PR if there's agreement that this should be added.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-126013
<!-- /gh-linked-prs -->
| dc76a4ad3c46af3fb70361be10b8a791cd126931 | dad34531298fc0ea91b9000aafdd2ea2fce5e54a |
python/cpython | python__cpython-127680 | # Incorrect handling of `start` and `end` values in `codecs` error handlers
# Crash report
### What happened?
```py
./python -c "import codecs; codecs.xmlcharrefreplace_errors(UnicodeEncodeError('bad', '', 0, 1, 'reason'))"
python: ./Include/cpython/unicodeobject.h:339: PyUnicode_READ_CHAR: Assertion `index >= 0' failed.
Aborted (core dumped)
```
```py
./python -c "import codecs; codecs.backslashreplace_errors(UnicodeDecodeError('utf-8', b'00000', 9, 2, 'reason'))"
Traceback (most recent call last):
File "<string>", line 1, in <module>
SystemError: Negative size passed to PyUnicode_New
```
```py
./python -c "import codecs; codecs.replace_errors(UnicodeTranslateError('000', 1, -7, 'reason'))"
python: Python/codecs.c:743: PyCodec_ReplaceErrors: Assertion `PyUnicode_KIND(res) == PyUnicode_2BYTE_KIND' failed.
Aborted (core dumped)
```
See https://github.com/python/cpython/issues/123378 for the root cause. Since we are still wondering how to fix the getters and setters, I suggest we first fix the crashes by adding checks at the handler level (for now). I'm also not sure whether the handlers themselves handle corner cases correctly.
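For reference, when `start`/`end` are in range the handlers work as documented; a minimal sanity check using only the public `codecs` API:

```python
import codecs

# A well-formed UnicodeEncodeError: '\xff' cannot be encoded to ASCII,
# with valid start=0 and end=1 offsets into the object being encoded.
err = UnicodeEncodeError('ascii', '\xff', 0, 1, 'ordinal not in range')

# Each handler returns (replacement, position_to_resume_at).
assert codecs.backslashreplace_errors(err) == ('\\xff', 1)
assert codecs.replace_errors(err) == ('?', 1)
assert codecs.xmlcharrefreplace_errors(err) == ('&#255;', 1)
```

The crashes above come from start/end values that violate these invariants (negative, or past the end of the object).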
<!-- gh-linked-prs -->
### Linked PRs
* gh-127680
* gh-127674
* gh-127675
* gh-127676
<!-- /gh-linked-prs -->
| cf0b2da1e6947aa15be119582c2017765ab46863 | b23b27bc556857be73ee0f2379441c422b6fee26 |
python/cpython | python__cpython-125986 | # Add free threading scaling microbenchmarks
# Feature or enhancement
I've been using a simple script to help identify and track scaling bottlenecks in the free threading build. The benchmark consists of patterns that ought to scale well but haven't in the past, typically due to reference count contention or lock contention.
I think this is generally useful for people working on free-threading and would like to include it under `Tools`, perhaps as `Tools/ftscalingbench/ftscalingbench.py`.
Note that this is not intended to be a general multithreading benchmark suite, nor are the benchmarks intended to be representative of real-world workloads. The benchmarks are only intended to help identify and track scaling bottlenecks that occur in basic usage.
Here is the original script; I've since made some modifications:
https://github.com/colesbury/nogil-micro-benchmarks
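A minimal sketch of the kind of pattern such a benchmark exercises (hypothetical, far simpler than the linked script): run the same pure-Python function in N threads and compare wall time to the single-threaded run.

```python
import threading
import time

def work(n=100_000):
    # CPU-bound pure-Python loop; should scale on a free threading build
    x = 0
    for i in range(n):
        x += i * i
    return x

def run_threads(num_threads):
    threads = [threading.Thread(target=work) for _ in range(num_threads)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

# On a GIL build the 4-thread run takes roughly 4x the 1-thread run;
# on a well-scaling free threading build it should stay close to 1x.
t1, t4 = run_threads(1), run_threads(4)
print(f"1 thread: {t1:.3f}s, 4 threads: {t4:.3f}s")
```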
<!-- gh-linked-prs -->
### Linked PRs
* gh-125986
* gh-128460
<!-- /gh-linked-prs -->
| 00ea179879726ae221ac7084bdc14752c45ff558 | b5b06349eb71b7cf9e5082e26e6fe0145875f95b |
python/cpython | python__cpython-126003 | # UAF on `fut->fut_{callback,context}0` with evil `__getattribute__` in `_asynciomodule.c`
# Crash report
### What happened?
```python
import asyncio
class EvilLoop:
def call_soon(*args):
# will crash before it actually gets here
print(args)
def get_debug(self):
return False
def __getattribute__(self, name):
global tracker
if name == "call_soon":
fut.remove_done_callback(tracker)
del tracker
print("returning call_soon method after clearing callback0")
return object.__getattribute__(self, name)
class TrackDel:
def __del__(self):
print("deleted", self)
fut = asyncio.Future(loop=EvilLoop())
tracker = TrackDel()
fut.add_done_callback(tracker)
fut.set_result("kaboom")
```
_Originally posted by @Nico-Posada in https://github.com/python/cpython/issues/125970#issuecomment-2438050629_
Not sure I'll be able to work on it today, so anyone's free to take it on.
----
## Traceback
```text
deleted <__main__.TrackDel object at 0x7f4ab660a420>
returning call_soon method after clearing callback0
Python/context.c:534: _PyObject_GC_UNTRACK: Assertion "_PyObject_GC_IS_TRACKED(((PyObject*)(op)))" failed: object not tracked by the garbage collector
Enable tracemalloc to get the memory block allocation traceback
object address : 0x7f4ab64ca4b0
object refcount : 0
object type : 0x9bfc60
object type name: _contextvars.Context
object repr : <refcnt 0 at 0x7f4ab64ca4b0>
Fatal Python error: _PyObject_AssertFailed: _PyObject_AssertFailed
Python runtime state: initialized
TypeError: EvilLoop.call_soon() got an unexpected keyword argument 'context'
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-126003
* gh-126043
* gh-126044
<!-- /gh-linked-prs -->
| f819d4301d7c75f02be1187fda017f0e7b608816 | 80eec52fc813bc7d20478da3114ec6ffd73e7c31 |
python/cpython | python__cpython-125970 | # Evil `call_soon` may cause OOB in `future_schedule_callbacks`
# Crash report
### Bug description:
In `future_schedule_callbacks`, the length of the callback list is assumed to be constant, but an evil `call_soon` can make it change.
PoC:
```py
import asyncio
called_on_fut_callback0 = False
pad = lambda: ...
def evil_call_soon(*args, **kwargs):
global called_on_fut_callback0
if called_on_fut_callback0:
# Called when handling fut->fut_callbacks[0]
# and mutates the length of fut->fut_callbacks.
fut.remove_done_callback(int)
fut.remove_done_callback(pad)
else:
called_on_fut_callback0 = True
fake_event_loop = lambda: ...
fake_event_loop.call_soon = evil_call_soon
fake_event_loop.get_debug = lambda: False # suppress traceback
fut = asyncio.Future(loop=fake_event_loop)
fut.add_done_callback(str) # sets fut->fut_callback0
fut.add_done_callback(int) # sets fut->fut_callbacks[0]
fut.add_done_callback(pad) # sets fut->fut_callbacks[1]
fut.add_done_callback(pad) # sets fut->fut_callbacks[2]
fut.set_result("boom")
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-125970
* gh-125991
* gh-125992
<!-- /gh-linked-prs -->
| c5b99f5c2c5347d66b9da362773969c531fb6c85 | 13844094609cf8265a2eed023e33c7002f3f530d |
python/cpython | python__cpython-125967 | # UAF on `fut->fut_callback0` with evil `__eq__` in `_asynciomodule.c`
# Crash report
### Bug description:
This is an issue just to track the progress of fixing the UAF on `fut->fut_callback0` (see https://github.com/python/cpython/pull/125833#issuecomment-2435463447).
The UAF that could be exploited by clearing `fut._callbacks` won't be triggered anymore since, after https://github.com/python/cpython/pull/125922, we no longer mutate the internal list itself, but it is still possible to mutate `fut->fut_callback0` directly: https://github.com/python/cpython/pull/125833#issuecomment-2429993547.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-125967
* gh-126047
* gh-126048
<!-- /gh-linked-prs -->
| ed5059eeb1aa50b481957307db5a34b937497382 | 0922a4ae0d2803e3a4e9f3d2ccd217364cfd700e |
python/cpython | python__cpython-125958 | # Discrepancy in argument naming for Sphinx docs & help in the cmath module
For example, cmath.sin docs [looks](https://docs.python.org/3.13/library/cmath.html#cmath.sin) like:
https://github.com/python/cpython/blob/2513593303b306cd8273682811d26600651c60e4/Doc/library/cmath.rst?plain=1#L147-L149
while help() shows:
```
>>> help(cmath.sin)
Help on built-in function sin in module cmath:
sin(z, /)
Return the sine of z.
```
Note that help() has the "/" to show that the parameter name is not part of the API (it is a positional-only parameter), but the Sphinx docs miss that part.
I think it's a good idea to sync parameter naming between help() and the Sphinx docs. Note that the introduction in the Sphinx docs uses ``z`` to denote complex numbers, just as many textbooks do.
Also, per [PDEB decision](https://discuss.python.org/t/editorial-board-decisions/58580), the Sphinx docs should include a slash to denote positional-only arguments. I propose instead to allow positional-or-keyword arguments for the functions of this module. That way we can keep simple function signatures (e.g. ``sin(z)``) while preserving the "precision and completeness" of the reference docs.
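The mismatch can be observed programmatically; because the C implementation exposes a text signature, `inspect.signature` shows the slash that the Sphinx docs omit:

```python
import cmath
import inspect

sig = inspect.signature(cmath.sin)
print(sig)  # (z, /)

# The "/" marks z as positional-only in the runtime signature
assert '/' in str(sig)
assert sig.parameters['z'].kind is inspect.Parameter.POSITIONAL_ONLY
```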
<!-- gh-linked-prs -->
### Linked PRs
* gh-125958
* gh-131962
* gh-131963
<!-- /gh-linked-prs -->
| 0a3eb8855ccea52c618db1cf09840c6368a3745f | fe5c4c53e7bc6d780686013eaab17de2237b2176 |
python/cpython | python__cpython-125943 | # Android: stdout is set to `errors="surrogateescape"`
# Bug report
### Bug description:
The Android stdout and stderr tests include an assertion that both streams are set to `errors="backslashreplace"`, which is the most useful setting for redirecting the streams to the Android log. However, this test only passed because the CPython test harness reconfigures stdout – outside of the tests, it's set to `surrogateescape`.
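The two error handlers in question can be observed on any text stream; a minimal illustration with plain `io` objects (not Android-specific):

```python
import io

# A text stream configured the way Android currently sets up stdout
stream = io.TextIOWrapper(io.BytesIO(), encoding='utf-8',
                          errors='surrogateescape')
assert stream.errors == 'surrogateescape'

# reconfigure() switches an existing stream to the handler the tests
# assert, without recreating the wrapper
stream.reconfigure(errors='backslashreplace')
assert stream.errors == 'backslashreplace'
```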
### CPython versions tested on:
3.13
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-125943
* gh-125950
<!-- /gh-linked-prs -->
| b08570c90eb9fa2e2ee4429909b14240b7a427d4 | e68d4b08ff13a06a2c2877f63bf856e6bf3c2e77 |
python/cpython | python__cpython-125941 | # Android: support 16 KB pages
# Feature or enhancement
### Proposal:
For details, see https://github.com/chaquo/chaquopy/issues/1171.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-125941
* gh-125948
<!-- /gh-linked-prs -->
| e68d4b08ff13a06a2c2877f63bf856e6bf3c2e77 | fed501d7247053ce46a2ba512bf0e4bb4f483be6 |
python/cpython | python__cpython-125934 | # Add ARIA labels to improve documentation accessibility
# Documentation
According to the Lighthouse report, select elements do not have labels.

These `select` elements are rendered in the JS in `Doc/tools/static/rtd_switcher.js`.
The issue was originally outlined on the [Discourse](https://discuss.python.org/t/docs-html-render-redesign/48566) (6th point).
<!-- gh-linked-prs -->
### Linked PRs
* gh-125934
* gh-125938
* gh-125939
<!-- /gh-linked-prs -->
| 1306f33c84b2745aa8af5e3e8f680aa80b836c0e | 500f5338a8fe13719478589333fcd296e8e8eb02 |
python/cpython | python__cpython-125989 | # urljoin() undocumented behavior change in Python 3.14.
# Bug report
### Bug description:
Django is tested with the earliest alpha versions. We noticed a behavior change in `urllib.parse.urljoin()`, which is used in a few places in Django, e.g. for staticfiles or built-in storages.
Python 3.14.0a1:
```python
>>> from urllib.parse import urljoin
>>> urljoin("/static/", "admin/img/icon-addlink.svg")
admin/img/icon-addlink.svg
```
Python 3.13 and earlier:
```python
>>> from urllib.parse import urljoin
>>> urljoin("/static/", "admin/img/icon-addlink.svg")
/static/admin/img/icon-addlink.svg
```
Is this an intentional change?
### CPython versions tested on:
3.14
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-125989
<!-- /gh-linked-prs -->
| dbb6e22cb1f533bba00a61a5b63ec68af9d48836 | 223d3dc554dde45f185f7f465753824c6f698b9b |
python/cpython | python__cpython-125917 | # Allow `functools.reduce`s 'initial' to be a keyword argument
# Feature or enhancement
### Proposal:
`functools.reduce` takes `function` (generally a callable) and `iterable`, with an optional `initial` parameter:
https://docs.python.org/3/library/functools.html#functools.reduce
However, `initial` cannot be passed as a keyword argument, which reduces ;) readability:
```python
from functools import reduce
from operator import sub
>>> reduce(sub, [1, 1, 2, 3, 5, 8], 21)
1
>>> reduce(sub, [1, 1, 2, 3, 5, 8], initial=21)
TypeError: reduce() takes no keyword arguments
```
Allowing `initial` as a keyword argument would be clearer, and `initial` could then also be passed by keyword when making `partial` functions out of `reduce`.
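Spelling out the arithmetic of the example above: `initial` is folded in first, then each element left to right.

```python
from functools import reduce
from operator import sub

# ((((((21 - 1) - 1) - 2) - 3) - 5) - 8) == 1
assert reduce(sub, [1, 1, 2, 3, 5, 8], 21) == 1

# equivalent explicit loop
acc = 21
for x in [1, 1, 2, 3, 5, 8]:
    acc = sub(acc, x)
assert acc == 1
```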
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://discuss.python.org/t/remove-positional-only-restriction-on-initial-parameter-of-reduce/68897
<!-- gh-linked-prs -->
### Linked PRs
* gh-125917
* gh-125999
<!-- /gh-linked-prs -->
| abb90ba46c597a1b192027e914ad312dd62d2462 | 6e3bb8a91380ba98d704f2dca8e98923c0abc8a8 |
python/cpython | python__cpython-125935 | # The JIT optimizer doesn't know about `_BINARY_OP_INPLACE_ADD_UNICODE`'s store
# Bug report
Our abstract interpreter should have a case for `_BINARY_OP_INPLACE_ADD_UNICODE`, since this "special" instruction includes a store to a fast local. This can mean that the local being stored to is left with stale information (for instance, when analyzing `a = ""; a += "spam"` the optimizer will incorrectly assume that `a` still has the value `""`).
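A pure-Python view of the store in question, i.e. the semantics the abstract interpreter must respect:

```python
def f():
    a = ""
    a += "spam"   # the in-place add stores back into the local a
    return a      # any cached knowledge that a == "" is now stale

assert f() == "spam"
```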
CC @Fidget-Spinner.
<!-- gh-linked-prs -->
### Linked PRs
* gh-125935
<!-- /gh-linked-prs -->
| b5b06349eb71b7cf9e5082e26e6fe0145875f95b | dcda92f8fcfa70ef48935db0dc468734de897d96 |
python/cpython | python__cpython-125973 | # The JIT doesn't call for `combine_symbol_mask` on the initial trampoline for a trace
# Crash report
`_PyJIT_Compile` doesn't consider [the initial (big) trampoline "group"](https://github.com/python/cpython/blob/c35b33bfb7c491dfbdd40195d70dcfc4618265db/Python/jit.c#L473-L475) when collecting the (little) trampolines needed to compile a given trace.
This doesn't break anything right now, since the initial "big" trampoline doesn't exist on AArch64 and the "little" trampolines are only emitted on AArch64, so the two never coexist. But with the imminent move to LLVM 19, this is [breaking stuff](https://github.com/python/cpython/actions/runs/11472371241/job/31925038475?pr=125499).
Also, our use of the term "trampoline" here is overloaded. Let's call the big trampoline a "shim" from here on out. We can rename `trampoline.c` to `shim.c` and update all references to the old name to make things clearer. That should wait until after GH-125499, though, to reduce the number of conflicts.
CC @diegorusso and @savannahostrowski. Either one of you want to take this?
<!-- gh-linked-prs -->
### Linked PRs
* gh-125973
* gh-126339
<!-- /gh-linked-prs -->
| 7f6e884f3acc860c1cf1b773c9659e8f861263e7 | 417c130ba55ca29e132808a0a500329f73b6ec41 |
python/cpython | python__cpython-125826 | # the link to "Contributing to Docs" redirects to a different URL
On the main python documentation page: https://docs.python.org/3.14/
there is a link, [Contributing to Docs](https://devguide.python.org/docquality/#helping-with-documentation), pointing to "https://devguide.python.org/docquality/#helping-with-documentation", which redirects to "https://devguide.python.org/documentation/help-documenting/index.html#helping-with-documentation".
I think the link should directly link to "https://devguide.python.org/documentation/help-documenting/"
<!-- gh-linked-prs -->
### Linked PRs
* gh-125826
* gh-125929
* gh-125930
<!-- /gh-linked-prs -->
| 5003ad5c5ea508f0dde1b374cd8bc6a481ad5c5d | ad6110a93ffa82cae71af6c78692de065d3871b5 |
python/cpython | python__cpython-125901 | # Clean-up use of `@suppress_immortalization`
As a temporary measure, we immortalized a number of objects in the 3.13 free threading build. To work around refleak test failures, we added a `@test.support.suppress_immortalization()` decorator that suppressed the behavior.
* https://github.com/python/cpython/issues/117783
Now that we have deferred reference counting, that behavior is mostly gone and we can get rid of the decorator. We still want to suppress immortalization of code constants in a few places (like `compile()`), but that logic can be simpler and doesn't have to be exposed to Python.
<!-- gh-linked-prs -->
### Linked PRs
* gh-125901
<!-- /gh-linked-prs -->
| 332356b880576a1a00b5dc34f03d7d3995dd4512 | 1306f33c84b2745aa8af5e3e8f680aa80b836c0e |
python/cpython | python__cpython-125907 | # Outdated PyObject_HasAttr documentation (Exception behavior)
# Documentation
In Python 3.13, `PyObject_HasAttr` doc says:
> Exceptions that occur when this calls `__getattr__()` and `__getattribute__()` methods are silently ignored.
But its exception behavior after https://github.com/python/cpython/pull/106674 is to report all exceptions other than `AttributeError`.
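For comparison, the Python-level `hasattr()` has had the same semantics since Python 3: only `AttributeError` is swallowed, and everything else propagates.

```python
class Weird:
    @property
    def attr(self):
        raise ValueError("boom")  # deliberately not an AttributeError

obj = Weird()

# AttributeError -> hasattr returns False
assert not hasattr(obj, "missing")

# Any other exception propagates instead of being silently ignored
try:
    hasattr(obj, "attr")
except ValueError as exc:
    print("propagated:", exc)  # propagated: boom
```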
<!-- gh-linked-prs -->
### Linked PRs
* gh-125907
* gh-128283
<!-- /gh-linked-prs -->
| 08a0728d6c32986d35edb26872b4512a71ae60f3 | 401bba6b58497ce59e7b45ad33e43ae8c67abcb9 |
python/cpython | python__cpython-125892 | # pdb fails when setting a breakpoint to function names with return-type annotations
# Bug report
### Bug description:
When having return-type annotations on a function:
```python
def foo() -> int:
return 0
```
and trying to establish a breakpoint on the name of the function:
```
(Pdb) break foo
```
PDB raises the following exception (snipped):
```
Traceback (most recent call last):
File "/usr/lib/python3.13/pdb.py", line 1137, in do_break
lineno = int(arg)
ValueError: invalid literal for int() with base 10: 'foo'
During handling of the above exception, another exception occurred:
...
SyntaxError: 'return' outside function
Uncaught exception. Entering post mortem debugging
Running 'cont' or 'step' will restart the program
> /usr/lib/python3.13/dis.py(76)_try_compile()
-> return compile(source, name, 'exec')
```
This also happens with _temporary breakpoints_ (`tbreak`). This was tested using Python versions from the official Debian repositories. The failure appears in 3.13.0 and was not present in 3.12.6.
Analyzing with @asottile they point out that the following lines: [Lib/pdb.py:121](https://github.com/python/cpython/blob/34653bba644aa5481613f398153757d7357e39ea/Lib/pdb.py#L121), [Lib/pdb.py:141](https://github.com/python/cpython/blob/34653bba644aa5481613f398153757d7357e39ea/Lib/pdb.py#L141) may be causing this behavior. They also provided further probing of the issue:
```
(Pdb) p compile(funcdef, filename, 'exec').co_consts
('return', <code object main at 0x102b73020, file "/private/tmp/y/t3.py", line 1>, None)
(Pdb) p funcdef
'def main() -> int:\n pass\n'
```
> I think it's expecting to find the code object there.
> it's due to the disassembly
```
(Pdb) dis.dis(compile('def f() -> int: pass', filename, 'exec'))
0 RESUME 0
1 LOAD_CONST 0 ('return')
LOAD_NAME 0 (int)
BUILD_TUPLE 2
LOAD_CONST 1 (<code object f at 0x102b73020, file "/private/tmp/y/t3.py", line 1>)
MAKE_FUNCTION
SET_FUNCTION_ATTRIBUTE 4 (annotations)
STORE_NAME 1 (f)
RETURN_CONST 2 (None)
Disassembly of <code object f at 0x102b73020, file "/private/tmp/y/t3.py", line 1>:
1 RESUME 0
RETURN_CONST 0 (None)
```
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-125892
* gh-125902
* gh-125903
<!-- /gh-linked-prs -->
| 8f2c0f7a03b71485b5635cb47c000e4e8ace8800 | 13c9fa3d64e0653d696daad716703ef05fd5002b |
python/cpython | python__cpython-126319 | # wrong opname in 3.13 dis module documentation
According to the Python 3.13 changelog, there are three new opcodes replacing FORMAT_VALUE: CONVERT_VALUE, FORMAT_SIMPLE and FORMAT_WITH_SPEC.
But https://docs.python.org/3/library/dis.html mentions CONVERT_VALUE, FORMAT_SIMPLE and FORMAT_SPEC.
<!-- gh-linked-prs -->
### Linked PRs
* gh-126319
* gh-126320
<!-- /gh-linked-prs -->
| 914356f4d485e378eb692e57d822b893acc0c0da | f0c6fccd08904787a39269367f09f263d496114c |
python/cpython | python__cpython-125876 | # Running Cython fails with 3.14.0a1 - a list attribute gets changed to `None`
# Bug report
### Bug description:
To be clear - this issue is just about running the Cython itself (i.e. pure Python code). It is not about compiled extension modules.
To reproduce
* Checkout Cython from git: `https://github.com/cython/cython.git`
* `git checkout daed3bce0bf0c6fb9012170cb479f64e8b9532cd` (probably not important, but let's make sure we're all definitely starting from the same point).
* `python3.14 cython.py Cython/Compiler/Parsing.py` - this runs Cython on one of its own files
You get an output that ends with
```pytb
File 'ExprNodes.py', line 8643, in analyse_types: TupleNode(Parsing.py:750:34,
is_sequence_constructor = 1,
result_is_used = True,
use_managed_ref = True)
Compiler crash traceback from this point on:
File "/home/dave/Documents/programming/cython2/Cython/Compiler/ExprNodes.py", line 8643, in analyse_types
if len(self.args) == 0:
~~~^^^^^^^^^^^
TypeError: object of type 'NoneType' has no len()
```
To try to debug it some more I add a constructor to `TupleNode` (in Cython/Compiler/ExprNodes.py at line 8627) with a breakpoint:
```python
def __init__(self, pos, **kw):
if "args" in kw and kw['args'] is None:
breakpoint()
super().__init__(pos, **kw)
```
```python
> <fullpath>/Cython/Compiler/ExprNodes.py(8629)__init__()
-> breakpoint()
(Pdb) print(kw)
{'args': None}
(Pdb) up
> <fullpath>/Cython/Compiler/ExprNodes.py(6123)analyse_types()
-> self.arg_tuple = TupleNode(self.pos, args = self.args)
(Pdb) print(self.args)
[<Cython.Compiler.ExprNodes.NameNode object at 0x7fb636188c80>]
(Pdb)
```
So in the constructor call to `TupleNode`, `args` is `None`. But in the function it's being called from, it's a list containing a `NameNode` (which is what I think it should be). That's as far as I've got with debugging.
This has apparently been bisected to https://github.com/python/cpython/pull/122620 (but not by me).
### CPython versions tested on:
3.14
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-125876
<!-- /gh-linked-prs -->
| b61fece8523d0fa6d9cc6ad3fd855a136c34f0cd | c35b33bfb7c491dfbdd40195d70dcfc4618265db |
python/cpython | python__cpython-125993 | # Improve file URI ergonomics in `urllib.request`
# Feature or enhancement
I request that we make [`pathname2url`](https://docs.python.org/3/library/urllib.request.html#urllib.request.pathname2url) and [`url2pathname`](https://docs.python.org/3/library/urllib.request.html#urllib.request.url2pathname) easier to use:
- `pathname2url()` is made to accept an optional *include_scheme* argument that sticks `file:` on the front when true
- `url2pathname()` is made to strip any `file:` prefix from its argument.
I think this would go a long way towards making these functions usable, and allow us to remove the scary "This does not accept/produce a complete URL" warnings from the docs.
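Until then, a sketch of the current round-trip on a POSIX path, pairing these functions with `urllib.parse` by hand (the proposed *include_scheme* argument does not exist yet):

```python
from urllib.parse import urljoin, urlsplit
from urllib.request import pathname2url, url2pathname

# pathname2url produces only the path component...
path = '/etc/hosts'
url = urljoin('file:', pathname2url(path))   # manual 'file:' prefixing
assert url == 'file:///etc/hosts'

# ...and url2pathname expects the scheme/authority stripped first
assert url2pathname(urlsplit(url).path) == path
```

The proposal would fold both manual steps into the functions themselves.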
<!-- gh-linked-prs -->
### Linked PRs
* gh-125993
* gh-126144
* gh-126145
* gh-127138
* gh-131432
* gh-132378
<!-- /gh-linked-prs -->
| 6742f14dfd3fa8ba8a245efa21a4f723160d93d4 | 802d405ff1233a48004a7c9b302baa76291514d1 |
python/cpython | python__cpython-125898 | # InterpreterPoolExecutor Hangs on Certain Functions
# Bug report
### Bug description:
** I first noticed this when testing the backports <https://github.com/ericsnowcurrently/interpreters/issues/17> on 3.13 on Windows and WSL. The issue reproduced on 3.14.0a1+ [34653bb](https://github.com/python/cpython/commit/34653bba644aa5481613f398153757d7357e39ea).
The following code hangs:
```python
from concurrent.futures.interpreter import InterpreterPoolExecutor
def my_func():
return 1+2
with InterpreterPoolExecutor() as executor:
future = executor.submit(my_func)
result = future.result() # Get the result of the function
print(result)
```
Yet, if my_func is defined in another module (or imported, or a builtin), it works fine.
```python
from concurrent.futures.interpreter import InterpreterPoolExecutor
with InterpreterPoolExecutor() as executor:
future = executor.submit(print, "foo")
result = future.result() # Get the result of the function
print(result)
```
#### EDIT: Also hangs if my_func is in another module that imports modules that don't support subinterpreters.
Importing a module with the following will also hang:
```py
import numpy
def my_func(x):
print(f"{x=}")
```
#### Note
** [test_interpreter_pool](https://github.com/python/cpython/blob/main/Lib/test/test_concurrent_futures/test_interpreter_pool.py) imports all the used functions, such as `mul`, which won't reproduce this error.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-125898
<!-- /gh-linked-prs -->
| 41bd9d959ccdb1095b6662b903bb3cbd2a47087b | 3c4a7fa6178d852ccb73527aaa2d0a5e93022e89 |
python/cpython | python__cpython-125882 | # `gc.get_objects` can corrupt in-progress GC in free threading build
# Bug report
### Background
The free threading GC uses two queue-like data structures to keep track of objects:
* [`struct worklist`](https://github.com/python/cpython/blob/aaed91cabcedc16c089c4b1c9abb1114659a83d3/Python/gc_free_threading.c#L36-L42), which is a singly linked list that repurposes `ob_tid` for the linked list pointer
* [`_PyObjectStack`](https://github.com/python/cpython/blob/aaed91cabcedc16c089c4b1c9abb1114659a83d3/Include/internal/pycore_object_stack.h#L24-L26), which is effectively a dynamically sized array of `PyObject*`. (Implemented using a linked list of fixed size array buffers).
The `struct worklist` data structure is convenient because enqueueing objects doesn't require a memory allocation and so can't fail. However, an object can only be part of one "worklist" at a time, because each object has only one `ob_tid` field.
### Bug
Other threads can run while the GC is [finalizing cyclic garbage](https://github.com/python/cpython/blob/aaed91cabcedc16c089c4b1c9abb1114659a83d3/Python/gc_free_threading.c#L1257-L1263) and [while it's calling `tp_clear()`](https://github.com/python/cpython/blob/aaed91cabcedc16c089c4b1c9abb1114659a83d3/Python/gc_free_threading.c#L1272-L1284) and other clean-up.
During that time, some thread may call `gc.get_objects()`, which can return otherwise "unreachable" objects. The implementation of [`_PyGC_GetObjects`](https://github.com/python/cpython/blob/aaed91cabcedc16c089c4b1c9abb1114659a83d3/Python/gc_free_threading.c#L1514-L1519) temporarily pushes objects to a `struct worklist`, including objects that might already be part of some other worklist, overwriting the linked list pointer. This essentially corrupts the state of the in-progress GC and causes assertion failures.
### Proposed fix
* We should probably exclude objects in the "unreachable" state (i.e. with `_PyGC_BITS_UNREACHABLE` set) from being returned by `gc.get_objects()`
* We should limit the use of `struct worklist` to the actual GC and use `_PyObjectStack` (or some other data structure) in `_PyGC_GetObjects()`. This reduces the risk of bugs causing an object to be added to more than one worklist.
<!-- gh-linked-prs -->
### Linked PRs
* gh-125882
* gh-125921
<!-- /gh-linked-prs -->
| e545ead66ce725aae6fb0ad5d733abe806c19750 | b61fece8523d0fa6d9cc6ad3fd855a136c34f0cd |
python/cpython | python__cpython-125844 | # Improve error messages of `curses` by indicating failed C function
# Feature or enhancement
### Proposal:
The `curses` module raises an exception `curses.error` with a message of the form "XXX() returned ERR", where XXX is generally the name of the C *or* Python function that was just called. Most of the time, XXX matches the name of the Python function that was called and is a real curses C function or macro. However, in some cases the actual curses function that was called has a different name.
For debugging purposes, I suggest adding an attribute to the exception class, say `.funcname`, which holds the name of the macro / function that was called at runtime in the C code. This will help users debug issues. For compatibility reasons, I will *not* change the current error messages, since this could break CI in the wild. In addition, I don't expect users to *extract* the function name from the error message (if they want it, they should use this new attribute).
I considered also adding which *Python* function was the bad one, but since the exception is raised from a Python function, I think it's better not to include another attribute. If needed, we can always add it later, but for now the curses C function name is more important.
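A sketch of the proposed shape, using a stand-in exception class (all names here are hypothetical, not the actual `curses.error` implementation):

```python
class CursesError(Exception):
    """Stand-in for curses.error with the proposed attribute."""
    def __init__(self, message, funcname=None):
        super().__init__(message)
        # Name of the C function/macro that actually failed at runtime;
        # the message itself stays unchanged for compatibility.
        self.funcname = funcname

try:
    raise CursesError("addwstr() returned ERR", funcname="waddwstr")
except CursesError as exc:
    print(exc, "| C function:", exc.funcname)
```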
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-125844
* gh-134252
<!-- /gh-linked-prs -->
| ee36db550076e5a9185444ffbc53eaf8157ef04c | c31547a5914db93b8b38c6a5261ef716255f3582 |
python/cpython | python__cpython-125896 | # SystemError: <method 'is_done' of '_thread._ThreadHandle' objects> returned a result with an exception set
# Bug report
### Bug description:
There is an unexpected SystemError raised completely outside of user code. I get it with Python 3.13.0 on Windows, but never saw it with Python 3.12.x:
```
Exception ignored on threading shutdown:
Traceback (most recent call last):
File "C:\Program Files\Python313\Lib\threading.py", line 1524, in _shutdown
if _main_thread._handle.is_done() and _is_main_interpreter():
SystemError: <method 'is_done' of '_thread._ThreadHandle' objects> returned a result with an exception set
```
**Reproduction**
The following minimal reproduction script:
```python
import asyncio
import sys
async def run():
p = await asyncio.create_subprocess_exec(*["ssh", "mek@192.168.37.128"])
try:
return await p.wait() # leaving out "return" would help
finally:
if p.returncode is None: # leaving out this "if" statement or the complete "finally" block also would help
p.terminate()
sys.exit(asyncio.run(run()))
```
Run it and leave the ssh session with `sudo reboot now`. You get the following output (note the last 5 lines):
```
mek@mek-ubuntu:~$ sudo reboot now
[sudo] password for mek:
Broadcast message from root@mek-ubuntu on pts/1 (Tue 2024-10-22 15:55:11 CEST):
The system will reboot now!
mek@mek-ubuntu:~$ Connection to 192.168.37.128 closed by remote host.
Connection to 192.168.37.128 closed.
Exception ignored on threading shutdown:
Traceback (most recent call last):
File "C:\Program Files\Python313\Lib\threading.py", line 1524, in _shutdown
if _main_thread._handle.is_done() and _is_main_interpreter():
SystemError: <method 'is_done' of '_thread._ThreadHandle' objects> returned a result with an exception set
```
There seems to be a strange interaction with async functions, local variables and return values:
- leaving out the "return" keyword at `return await p.wait()` prevents the warning message
- leaving out the "finally" block or just leave it empty with `pass` also omits the warning message
- exiting the SSH session with `exit` also omits the warning message
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-125896
* gh-125925
<!-- /gh-linked-prs -->
| ad6110a93ffa82cae71af6c78692de065d3871b5 | e545ead66ce725aae6fb0ad5d733abe806c19750 |
python/cpython | python__cpython-126322 | # Outdated post PEP-709 comment in `Python/codegen.c`
# Bug report
### Bug description:
After the implementation of https://peps.python.org/pep-0709/, this comment looks wrong:
https://github.com/python/cpython/blob/57e3c59bb64fc2f8b2845a7e03ab0abb029ccd02/Python/codegen.c#L4079-L4090
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-126322
* gh-126344
* gh-126345
* gh-126346
<!-- /gh-linked-prs -->
| 868bfcc02ed42a1042851830b79c6877b7f1c7a8 | bd4be5e67de5f31e9336ba0fdcd545e88d70b954 |
python/cpython | python__cpython-125823 | # Setting `None` to `skip_file_prefixes` of `warnings.warn()` gets error
# Bug report
### Bug description:
[The doc](https://docs.python.org/3/library/warnings.html#warnings.warn) of `warnings.warn()` says `skip_file_prefixes` is `None` by default as shown below:
> warnings.warn(message, category=None, stacklevel=1, source=None, *, skip_file_prefixes=None)
But passing `None` for `skip_file_prefixes` raises the error shown below:
```python
import warnings
warnings.warn(message='Warning',
category=None,
stacklevel=1,
source=None,
skip_file_prefixes=None) # Error
# ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑
```
> TypeError: warn() argument 'skip_file_prefixes' must be tuple, not None
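Until the documented default and the runtime agree, the error message itself points at the workaround: the effective default is an empty tuple, not `None`. A minimal sketch (assuming Python 3.12+, where the keyword-only argument was added):

```python
import sys
import warnings

# Workaround: pass () instead of None -- the runtime default is the empty tuple.
emitted = 0
if sys.version_info >= (3, 12):
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        warnings.warn("Warning", skip_file_prefixes=())  # no TypeError
    emitted = len(caught)
else:
    emitted = 1  # keyword-only argument does not exist before 3.12
print(emitted)
```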
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-125823
* gh-126216
* gh-126217
<!-- /gh-linked-prs -->
| d467d9246cbe0ce5dc149c4c74223bb8374ece73 | b3122aa6132de263f389317efe9dcf2b14c6b393 |
python/cpython | python__cpython-125812 | # Remove DeprecationWarnings in test_peg_generator
# Feature or enhancement
### Proposal:
Some DeprecationWarnings are present in the `test.test_peg_generator.test_pegen` output:
```
-> % (make -j && ./python -m unittest -v test.test_peg_generator.test_pegen) 2>&1 | grep Deprecation
test_locations_in_alt_action_and_group (test.test_peg_generator.test_pegen.TestPegen.test_locations_in_alt_action_and_group) ... <string>:29: DeprecationWarning: Expression.__init__ got an unexpected keyword argument 'lineno'. Support for arbitrary keyword arguments is deprecated and will be removed in Python 3.15.
<string>:29: DeprecationWarning: Expression.__init__ got an unexpected keyword argument 'col_offset'. Support for arbitrary keyword arguments is deprecated and will be removed in Python 3.15.
<string>:29: DeprecationWarning: Expression.__init__ got an unexpected keyword argument 'end_lineno'. Support for arbitrary keyword arguments is deprecated and will be removed in Python 3.15.
<string>:29: DeprecationWarning: Expression.__init__ got an unexpected keyword argument 'end_col_offset'. Support for arbitrary keyword arguments is deprecated and will be removed in Python 3.15.
test_python_expr (test.test_peg_generator.test_pegen.TestPegen.test_python_expr) ... <string>:25: DeprecationWarning: Expression.__init__ got an unexpected keyword argument 'lineno'. Support for arbitrary keyword arguments is deprecated and will be removed in Python 3.15.
<string>:25: DeprecationWarning: Expression.__init__ got an unexpected keyword argument 'col_offset'. Support for arbitrary keyword arguments is deprecated and will be removed in Python 3.15.
```
Those warnings can easily be removed.
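The warnings above come from passing position keywords to an AST node that has no position fields. A minimal reproduction of the pattern (hedged: the DeprecationWarning for arbitrary keyword arguments was introduced in 3.13, and per the quoted text it becomes an error once support is removed):

```python
import ast
import sys
import warnings

# ast.Expression has no lineno/col_offset fields, so passing them is exactly
# the deprecated pattern flagged in the test output above.
flagged = sys.version_info < (3, 13)  # older versions accepted this silently
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    try:
        ast.Expression(body=ast.Constant(value=1), lineno=1)
        flagged = flagged or any(
            issubclass(w.category, DeprecationWarning) for w in caught
        )
    except TypeError:
        flagged = True  # the kwarg is rejected outright once support is removed
print(flagged)
```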
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-125812
* gh-125831
<!-- /gh-linked-prs -->
| 4efe64aa56e7a9a96b94c0ae0201db8d402a5f53 | 03f9264ecef4b1df5e71586327a04ec3b9331cbe |
python/cpython | python__cpython-125881 | # Add tests to prevent regressions with the combination of `ctypes` and metaclasses.
# Feature or enhancement
### Proposal:
There were [breaking changes to `ctypes` in Python 3.13](https://docs.python.org/3.13/whatsnew/3.13.html#ctypes).
Projects like [`comtypes`](https://github.com/enthought/comtypes/) and [`pyglet`](https://github.com/pyglet/pyglet), which implement functionality by combining `ctypes` and metaclasses, no longer work unless their codebases are updated.
We discussed what changes such projects need to make to their codebases in gh-124520.
I believe the easiest and most effective way to prevent regressions from happening again is to add tests on the `cpython` side.
I wrote a simple test like the one below.
```py
# Lib/test/test_ctypes/test_c_simple_type_meta.py?
import unittest
import ctypes
from ctypes import POINTER, c_void_p
from ._support import PyCSimpleType, _CData, _SimpleCData
class PyCSimpleTypeAsMetaclassTest(unittest.TestCase):
def tearDown(self):
# to not leak references, we must clean _pointer_type_cache
ctypes._reset_cache()
def test_early_return_in_dunder_new_1(self):
# this implementation is used in `IUnknown` of comtypes.
class _ct_meta(type):
def __new__(cls, name, bases, namespace):
self = super().__new__(cls, name, bases, namespace)
if bases == (c_void_p,):
return self
if issubclass(self, _PtrBase):
return self
if bases == (object,):
_ptr_bases = (self, _PtrBase)
else:
_ptr_bases = (self, POINTER(bases[0]))
p = _p_meta(f"POINTER({self.__name__})", _ptr_bases, {})
ctypes._pointer_type_cache[self] = p
return self
class _p_meta(PyCSimpleType, _ct_meta):
pass
class _PtrBase(c_void_p, metaclass=_p_meta):
pass
class _CtBase(object, metaclass=_ct_meta):
pass
class _Sub(_CtBase):
pass
class _Sub2(_Sub):
pass
def test_early_return_in_dunder_new_2(self):
# this implementation is used in `CoClass` of comtypes.
class _ct_meta(type):
def __new__(cls, name, bases, namespace):
self = super().__new__(cls, name, bases, namespace)
if isinstance(self, _p_meta):
return self
p = _p_meta(
f"POINTER({self.__name__})", (self, c_void_p), {}
)
ctypes._pointer_type_cache[self] = p
return self
class _p_meta(PyCSimpleType, _ct_meta):
pass
class _Base(object):
pass
class _Sub(_Base, metaclass=_ct_meta):
pass
class _Sub2(_Sub):
pass
```
If no errors occur when this test is executed, we can assume that no regressions have occurred with the combination of `ctypes` and metaclasses.
However, I feel that this may not be enough. What should we specify as the target for the `assert`?
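One hedged suggestion (my assumption, not a settled answer): for the plain, non-metaclass case, assert the invariants the metaclass machinery is supposed to preserve, such as pointer-type caching, the `_type_` link, and the conventional `POINTER` naming:

```python
import ctypes
from ctypes import POINTER, c_int

p = POINTER(c_int)
assert POINTER(c_int) is p       # the cache returns the same type object
assert p._type_ is c_int         # the pointer type remembers its target
assert p.__name__ == "LP_c_int"  # conventional POINTER(...) naming
print("ok")
```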
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
https://github.com/python/cpython/issues/124520
<!-- gh-linked-prs -->
### Linked PRs
* gh-125881
* gh-125987
* gh-125988
* gh-126126
* gh-126275
* gh-126276
<!-- /gh-linked-prs -->
| 13844094609cf8265a2eed023e33c7002f3f530d | 7f6e884f3acc860c1cf1b773c9659e8f861263e7 |
python/cpython | python__cpython-125781 | # Support pickling of super object
# Feature or enhancement
As was noted in https://github.com/python/cpython/issues/125714#issuecomment-2423259145, the `super` object is not pickleable. For example:
```py
import pickle
class X: pass
s = super(X, X())
pickle.dumps(s)
```
Produces a traceback:
```pytb
Traceback (most recent call last):
File "<python-input-0>", line 4, in <module>
pickle.dumps(s)
~~~~~~~~~~~~^^^
_pickle.PicklingError: first argument to __newobj__() must be <class 'super'>, not <class '__main__.X'>
when serializing super object
```
This is because the special methods like `__reduce_ex__()` are looked up in an instance and translated to a lookup in the underlying object.
```pycon
>>> super(X, X()).__reduce_ex__(5)
(<function __newobj__ at 0x7fd9e8a0ad50>, (<class '__main__.X'>,), None, None, None)
```
This cannot be solved by implementing the `__reduce_ex__()` method in the `super` class, because the current behavior is expected when `super()` is used in the `__reduce_ex__()` implementation of some subclass. The `super` class likely should be registered in the global dispatch table.
There may be similar issues with shallow and deep copying.
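To illustrate the dispatch-table route, here is a minimal sketch of a user-level reducer registered via `copyreg` (a hypothetical shape for the fix, not the actual patch): because `copyreg`'s global dispatch table is consulted before `__reduce_ex__`, this sidesteps the mis-redirected special-method lookup.

```python
import copyreg
import pickle

class X:
    pass

def _reduce_super(s):
    # Hypothetical reducer: rebuild the super object from its class and
    # (optionally) its bound object.
    if s.__self__ is None:
        return (super, (s.__thisclass__,))
    return (super, (s.__thisclass__, s.__self__))

copyreg.pickle(super, _reduce_super)

s2 = pickle.loads(pickle.dumps(super(X, X())))
print(type(s2) is super, s2.__thisclass__ is X)
```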
<!-- gh-linked-prs -->
### Linked PRs
* gh-125781
<!-- /gh-linked-prs -->
| 5ca4e34bc1aab8321911aac6d5b2b9e75ff764d8 | de5a6c7c7d00ac37d66cba9849202b374e9cdfb7 |
python/cpython | python__cpython-125743 | # reword some parts of "Using Python on Unix platforms"
I suggest the following changes to "Using Python on Unix platforms" on Linux section.
https://docs.python.org/3.14/using/unix.html#on-linux
1. "You can easily compile the latest version of Python from source." -> "You can compile the latest version of Python from source." remove the word "easily", because it is not needed.
2. "In the event that Python doesn’t come preinstalled and isn’t in the repositories as well, you can easily make packages for your own distro." -> "In the event that the latest version of Python doesn’t come preinstalled and isn’t in the repositories as well, you can make packages for your own distro." . There are two changes here "Python" -> "the latest version of Python", this is because all Linux distros have python, so the only reason to make your own python package is to get the latest version of Python. The second change is to remove the word "easily" again, because it is unnecessary.
<!-- gh-linked-prs -->
### Linked PRs
* gh-125743
* gh-125793
* gh-125794
<!-- /gh-linked-prs -->
| d67bf2d89ab57f94608d7d2cf949dc4a8749485d | 5b7a872b26a9ba6c93d7c2109559a82d1c1612de |
python/cpython | python__cpython-126326 | # `warnings.simplefilter("once")` and `warnings.warn()` print all occurrences of matching warnings, regardless of location
# Bug report
### Bug description:
With [warnings.simplefilter("module")](https://docs.python.org/3/library/warnings.html#warnings.simplefilter) and [warnings.warn()](https://docs.python.org/3/library/warnings.html#warnings.warn), I ran `main.py` which runs `file1.py`(module) and `file2.py`(module) as shown below:
*Memos:
- [The doc](https://docs.python.org/3/library/warnings.html#the-warnings-filter) says `"module" print the first occurrence of matching warnings for each module where the warning is issued (regardless of line number)`.
- I also used [warnings.filterwarnings("module")](https://docs.python.org/3/library/warnings.html#warnings.filterwarnings).
- I ran it on Windows and Linux.
```
my_project
|-main.py
|-file1.py(module)
└-file2.py(module)
```
`main.py`:
```python
import warnings
warnings.simplefilter("module")
import file1, file2
```
`file1.py`:
```python
import warnings
warnings.warn("Warning 1")
warnings.warn("Warning 2")
warnings.warn("Warning 3")
```
`file2.py`:
```python
import warnings
warnings.warn("Warning 1")
warnings.warn("Warning 2")
warnings.warn("Warning 3")
```
Then `"module"` prints the first occurrence of matching warnings for each module where the warning is issued (regardless of line number), as shown below:
```
...\my_project\file1.py:3: UserWarning: Warning 1
warnings.warn("Warning 1")
...\my_project\file1.py:4: UserWarning: Warning 2
warnings.warn("Warning 2")
...\my_project\file1.py:6: UserWarning: Warning 3
warnings.warn("Warning 3")
...\my_project\file2.py:3: UserWarning: Warning 1
warnings.warn("Warning 1")
...\my_project\file2.py:4: UserWarning: Warning 2
warnings.warn("Warning 2")
...\my_project\file2.py:6: UserWarning: Warning 3
warnings.warn("Warning 3")
```
Now with `warnings.simplefilter("once")` and `warnings.warn()`, I ran `main.py` which runs `file1.py`(module) and `file2.py`(module) as shown below:
*Memos:
- [The doc](https://docs.python.org/3/library/warnings.html#the-warnings-filter) says `"once" print only the first occurrence of matching warnings, regardless of location`.
- I also used `warnings.filterwarnings("once")`.
- I ran it on Windows and Linux.
```python
import warnings
warnings.simplefilter("once")
import file1, file2
```
But `"once"` prints all occurrences of matching warnings, regardless of location, as shown below:
```
...\my_project\file1.py:3: UserWarning: Warning 1
warnings.warn("Warning 1")
...\my_project\file1.py:4: UserWarning: Warning 2
warnings.warn("Warning 2")
...\my_project\file1.py:6: UserWarning: Warning 3
warnings.warn("Warning 3")
...\my_project\file2.py:3: UserWarning: Warning 1
warnings.warn("Warning 1")
...\my_project\file2.py:4: UserWarning: Warning 2
warnings.warn("Warning 2")
...\my_project\file2.py:6: UserWarning: Warning 3
warnings.warn("Warning 3")
```
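The three-file setup can be approximated in a single script (an assumption: `exec` with separate globals dicts stands in for the two modules, since each dict gets its own `__warningregistry__`):

```python
import warnings

src = 'import warnings\nwarnings.warn("Warning 1")\n'
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("once")
    exec(compile(src, "file1.py", "exec"), {"__name__": "file1"})
    exec(compile(src, "file2.py", "exec"), {"__name__": "file2"})
# Per the documented "once" semantics this should capture 1 warning;
# on affected versions it captures 2.
print(len(caught))
```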
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-126326
* gh-126330
* gh-126331
<!-- /gh-linked-prs -->
| 10eeec2d4ffb6b09a6d925877b6d9ef6aa6bb59d | cfb1b2f0cb999558a30e61a9e1a62fdb7f55f6a4 |
python/cpython | python__cpython-125748 | # Delay deprecated `zipimport.zipimporter.load_module` removal time to 3.15
# Feature or enhancement
### Proposal:
This method has been deprecated since Python 3.10.
At https://github.com/python/cpython/blob/main/Lib/zipimport.py#L224-L225
```python
msg = ("zipimport.zipimporter.load_module() is deprecated and slated for "
"removal in Python 3.12; use exec_module() instead")
```
edit: Delay the removal time to 3.15
https://github.com/python/cpython/issues/125746#issuecomment-2425030377
<!-- gh-linked-prs -->
### Linked PRs
* gh-125748
<!-- /gh-linked-prs -->
| 06ac157c53046f4fcad34383ef131f773085f3d5 | a7427f2db937adb4c787754deb4c337f1894fe86 |
python/cpython | python__cpython-125744 | # Update check_generated_files CI to use our published container image
We can do this through https://docs.github.com/en/actions/writing-workflows/choosing-where-your-workflow-runs/running-jobs-in-a-container, and it is less painful to make both image environments the same.
See: https://github.com/python/cpython/pull/122566#pullrequestreview-2379858448
cc @erlend-aasland @hugovk @Damien-Chen
<!-- gh-linked-prs -->
### Linked PRs
* gh-125744
* gh-125759
* gh-125760
* gh-125772
* gh-125779
* gh-130229
<!-- /gh-linked-prs -->
| ed24702bd0f9925908ce48584c31dfad732208b2 | e924bb667a19ee1812d6c7592a37dd37346dda04 |
python/cpython | python__cpython-126176 | # Add turtle module into installion option tcl/tk
# Feature or enhancement
### Proposal:
As the title says, add the turtle module to the tcl/tk option shown in the picture:

Because the turtle module requires tkinter, why should it be installed when the user chose not to install tkinter?
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-126176
<!-- /gh-linked-prs -->
| 88dc84bcf9fef32afa9af0ab41fa467c9733483f | 29cbcbd73bbfd8c953c0b213fb33682c289934ff |
python/cpython | python__cpython-126956 | # [3.13] Crash when generator frame proxies outlive their generator
Edit (2025-06-13): this has been fixed for 3.14+, but it's still an open issue for 3.13.x
----
# Crash report
### What happened?
Checking some frame proxy behaviour at the interactive prompt, I encountered the following crash:
```
$ PYTHON_BASIC_REPL=1 python3.13
Python 3.13.0 (main, Oct 8 2024, 00:00:00) [GCC 14.2.1 20240912 (Red Hat 14.2.1-3)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> def g():
... a = 1
... yield locals(), sys._getframe().f_locals
...
>>> snapshot, live_locals = next(g())
>>> print(snapshot); print(live_locals)
{'a': 1}
{'a': b'print'}
>>> snapshot, live_locals = next(g())
>>> print(snapshot); print(live_locals)
{'a': 1}
{'a': 'input_trans_stack'}
>>> snapshot, live_locals = next(g())
>>> print(snapshot); print(live_locals)
{'a': 1}
Segmentation fault (core dumped)
```
(Crash was initially seen in the new REPL, the above reproduction in the basic REPL showed it wasn't specific to the new REPL).
Subsequent investigation suggests that the problem relates to the frame proxy outliving the eval loop that created it, as the following script was able to reliably reproduce the crash (as shown below):
```python
import sys
def g():
a = 1
yield locals(), sys._getframe().f_locals
ns = {}
for i in range(10):
exec("snapshot, live_locals = next(g())", locals=ns)
print(ns)
```
```
$ python3.13 ../_misc/gen_locals_exec.py
{'snapshot': {'a': 1}, 'live_locals': {'a': 1}}
{'snapshot': {'a': 1}, 'live_locals': {'a': '_pickle.cpython-313-x86_64-linux-gnu.so'}}
Segmentation fault (core dumped)
```
Changing the code to explicitly keep the generator object alive:
```python
import sys
def g():
a = 1
yield locals(), sys._getframe().f_locals
ns = {}
for i in range(10):
gen = g()
exec("snapshot, live_locals = next(gen)", locals=ns)
print(ns)
```
is sufficient to eliminate the crash:
```
$ python3.13 ../_misc/gen_locals_exec.py
{'snapshot': {'a': 1}, 'live_locals': {'a': 1}}
{'snapshot': {'a': 1}, 'live_locals': {'a': 1}}
{'snapshot': {'a': 1}, 'live_locals': {'a': 1}}
{'snapshot': {'a': 1}, 'live_locals': {'a': 1}}
{'snapshot': {'a': 1}, 'live_locals': {'a': 1}}
{'snapshot': {'a': 1}, 'live_locals': {'a': 1}}
{'snapshot': {'a': 1}, 'live_locals': {'a': 1}}
{'snapshot': {'a': 1}, 'live_locals': {'a': 1}}
{'snapshot': {'a': 1}, 'live_locals': {'a': 1}}
{'snapshot': {'a': 1}, 'live_locals': {'a': 1}}
```
The crash is still eliminated, even when `gen` is created via `exec` rather than creating it in the main eval loop:
```
import sys
def g():
a = 1
yield locals(), sys._getframe().f_locals
ns = {}
for i in range(10):
exec("gen = g()", locals=ns)
exec("snapshot, live_locals = next(gen)", locals=ns)
print(ns)
```
```
$ python3.13 ../_misc/gen_locals_exec.py
{'gen': <generator object g at 0x7f8da7d81b40>, 'snapshot': {'a': 1}, 'live_locals': {'a': 1}}
{'gen': <generator object g at 0x7f8da7d81e40>, 'snapshot': {'a': 1}, 'live_locals': {'a': 1}}
{'gen': <generator object g at 0x7f8da7d81c00>, 'snapshot': {'a': 1}, 'live_locals': {'a': 1}}
{'gen': <generator object g at 0x7f8da7d81b40>, 'snapshot': {'a': 1}, 'live_locals': {'a': 1}}
{'gen': <generator object g at 0x7f8da7d81e40>, 'snapshot': {'a': 1}, 'live_locals': {'a': 1}}
{'gen': <generator object g at 0x7f8da7d81c00>, 'snapshot': {'a': 1}, 'live_locals': {'a': 1}}
{'gen': <generator object g at 0x7f8da7d81b40>, 'snapshot': {'a': 1}, 'live_locals': {'a': 1}}
{'gen': <generator object g at 0x7f8da7d81e40>, 'snapshot': {'a': 1}, 'live_locals': {'a': 1}}
{'gen': <generator object g at 0x7f8da7d81c00>, 'snapshot': {'a': 1}, 'live_locals': {'a': 1}}
{'gen': <generator object g at 0x7f8da7d81b40>, 'snapshot': {'a': 1}, 'live_locals': {'a': 1}}
```
### CPython versions tested on:
3.13, CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-126956
* gh-135453
<!-- /gh-linked-prs -->
| 8e20e42cc63321dacc500d7670bfc225ca04e78b | 24c84d816f2f2ecb76b80328c3f1d8c05ead0b10 |
python/cpython | python__cpython-128922 | # Use the latest Sphinx version
# Documentation
Python's documentation uses the Sphinx toolchain. Originally written for the Python documentation, it is now developed and distributed as an independent project. The two remain closely linked, e.g. when new syntax is added to the Python language. To document this syntax in the best way, Sphinx must support it and Python must require a version of Sphinx that supports the syntax.
Using the new syntax in PEP-695 as an example:
1. The implementation was committed on 16 May (#103764)
2. The first draft of the documentation was written on 19 May (#104642)
3. A feature request was created in Sphinx on 23 May (https://github.com/sphinx-doc/sphinx/issues/11438)
4. Sphinx added support on 23 July (https://github.com/sphinx-doc/sphinx/pull/11444)
5. Sphinx published a release with support on 24 July ([v7.1.0](https://www.sphinx-doc.org/en/master/changes/7.1.html))
6. The Python project can use those features, 19 October [+ 1 year] (#125368)
This is a ~two month window from implementation to support in a release of Sphinx. It took a further 15 months and two feature releases (3.12, 3.13) until these features could be used in Python. Due to this, our documentation is meaningfully worse for readers and programmers. Using older versions of Sphinx means that we cannot succinctly cover the features and syntax that exist within released versions of Python.
Core developers responsible for these features have [expressed](https://github.com/python/cpython/pull/104642#discussion_r1200799012) [interest](https://github.com/python/cpython/pull/123544#discussion_r1739933935), but have been hampered by our self-imposed restriction of the minimum version of Sphinx that we support.
We adopt these restrictions for the benefit of downstream Linux redistributors, as can be seen in the more recent issues in the summary table below. This has not always been the case. From the adoption of Sphinx in 2007 (8ec7f656134b1230ab23003a94ba3266d7064122 / 116aa62bf54a39697e25f21d6cf6799f7faa1349) until 2014 (f7b2f36f747179cf3dc7a889064f8979e3ad4dae), the latest source-tree checkout of Sphinx at https://svn.python.org/ was used. From then until ~2018 the minimum required Sphinx version (controlled by `needs_sphinx` in `conf.py`) tracked the latest release promptly. This has since ossified.
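For context, the knob in question is a single assignment in `Doc/conf.py`; a minimal sketch with an illustrative version string (not the current pin):

```python
# Doc/conf.py (illustrative value)
needs_sphinx = "8.1"  # sphinx-build aborts early if the installed Sphinx is older
```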
In a recent informal discussion, several committers supported the idea of removing or relaxing this restriction, allowing the Python documentation to use the latest version(s) of Sphinx. This is the context for this note, which for now **is a proposal**. The status quo will prevail if there is not sufficient support.
As a concrete suggestion, I propose that when evaluating increasing the minimum required version of Sphinx, we no longer consult downstream redistributors' Sphinx versions. The procedure will follow standard Python development processes in all other ways. I expect that the minimum version would be updated if and when a new Sphinx feature is of sufficient benefit to Python. This, though, may be a greater version than a downstream redistributor provides.
We would like to solicit views from representatives of downstream redistributors as to how (in)feasible this proposal would be. My understanding is that Fedora and SUSE have processes whereby a newer version of Sphinx can be used solely for the Python documentation. I do not know how this will impact Debian or Gentoo.
Thank you in advance for your consideration,
Adam
**Table of past changes to ``needs_sphinx``**
| `needs_sphinx` | Issue | PR / Commit | Latest Sphinx [^1] |
| -- | -- | -- | -- |
| 7.2.6 | #125277 | #125368 | 8.1.3 |
| 6.2.1 | #117928 | #121986 | 7.4.6 |
| 4.2 | #109209 | #109210 | 7.2.6 |
| 3.2 | #86986 | #93337 | 5.0.1 |
| 1.8 | #80188 | #11887 | 1.8.4 |
| 1.7 | [doc-sig] | #9423 | 1.8.1 |
| 1.2 | #65630 | 90d76ca7657566825daf8e5b22ab011f70439769 | 1.2.3 |
| 1.1 | #64860 | f7b2f36f747179cf3dc7a889064f8979e3ad4dae | 1.2.1 |
Previous discussion relating to the minimum version:
* #84385
* #87009
* #104818
cc:
* @doko42 @mitya57 / Debian
* @hroncok / Fedora / RHEL
* @mgorny / Gentoo
* @danigm @mcepl / openSUSE
* @AA-Turner @hugovk / CPython
[^1]: At commit date.
[doc-sig]: https://mail.python.org/pipermail/doc-sig/2018-September/004084.html
<!-- gh-linked-prs -->
### Linked PRs
* gh-128922
* gh-129037
* gh-129038
* gh-129039
* gh-129041
* gh-129042
* gh-129277
* gh-129278
* gh-129279
* gh-129306
* gh-130444
* gh-130858
* gh-130859
<!-- /gh-linked-prs -->
| d46b577ec026c2e700a9f920f81cfbf698e53eb6 | bca35f0e782848ae2acdcfbfb000cd4a2af49fbd |
python/cpython | python__cpython-125802 | # test_concurrent_futures.test_interpreter_pool failing
# Bug report
### Bug description:
I've seen 4 kinds of failure which I'm fairly sure have the same cause:
* segfault during `WorkerContext.initialize()` (line 137)
* hanging
* weird test failure
* undefined behavior on USAN buildbot
The failures have happened in different test methods. Different failures have happened during the retry. Sometimes the retry passes. In all cases the architecture is AMD64, but across a variety of builders and non-Windows operating systems. The failures have all been on either refleaks buildbots or the USAN buildbot.
FWIW, it looks like `InterpreterPoolExecutor` has only exposed an underlying problem in the _interpqueues module, which means any fix would need to target 3.13 also (and maybe 3.12).
---
Here are the buildbots where I've seen failures:
* AMD64 RHEL8 Refleaks 3.x
* AMD64 FreeBSD Refleaks 3.x
* AMD64 CentOS9 NoGIL Refleaks 3.x
* AMD64 Arch Linux Usan Function 3.x
Here's the failure text:
<details>
<summary>segfault</summary>
```
test_submit (test.test_concurrent_futures.test_interpreter_pool.InterpreterPoolExecutorTest.test_submit) ... Fatal Python error:
Segmentation fault
Current thread 0x00007fb4fbfff700 (most recent call first):
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/concurrent/futures/interpreter.py", line 137 in initialize
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/concurrent/futures/thread.py", line 98 in _worker
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/threading.py", line 992 in run
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/threading.py", line 1041 in _bootstrap_inner
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/threading.py", line 1012 in _bootstrap
Thread 0x00007fb521d71240 (most recent call first):
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/threading.py", line 359 in wait
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/concurrent/futures/_base.py", line 443 in result
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/test/test_concurrent_futures/executor.py", line 31 in test_submit
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/unittest/case.py", line 606 in _callTestMethod
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/unittest/case.py", line 660 in run
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/unittest/case.py", line 716 in __call__
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/unittest/suite.py", line 122 in run
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/unittest/suite.py", line 84 in __call__
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/unittest/suite.py", line 122 in run
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/unittest/suite.py", line 84 in __call__
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/unittest/runner.py", line 240 in run
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/test/libregrtest/single.py", line 57 in _run_suite
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/test/libregrtest/single.py", line 37 in run_unittest
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/test/libregrtest/single.py", line 135 in test_func
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/test/libregrtest/refleak.py", line 132 in runtest_refleak
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/test/libregrtest/single.py", line 87 in regrtest_runner
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/test/libregrtest/single.py", line 138 in _load_run_test
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/test/libregrtest/single.py", line 181 in _runtest_env_changed_exc
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/test/libregrtest/single.py", line 281 in _runtest
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/test/libregrtest/single.py", line 310 in run_single_test
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/test/libregrtest/worker.py", line 83 in worker_process
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/test/libregrtest/worker.py", line 118 in main
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/test/libregrtest/worker.py", line 122 in <module>
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/runpy.py", line 88 in _run_code
File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/runpy.py", line 198 in _run_module_as_main
```
</details>
<details>
<summary>hang 1</summary>
```
test_submit_exception_in_func (test.test_concurrent_futures.test_interpreter_pool.InterpreterPoolExecutorTest.test_submit_exception_in_func) ... Timeout (3:20:00)!
Thread 0x000000082e546e00 (most recent call first):
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/concurrent/futures/interpreter.py", line 190 in run
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/concurrent/futures/thread.py", line 85 in run
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/concurrent/futures/thread.py", line 118 in _worker
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/threading.py", line 992 in run
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/threading.py", line 1041 in _bootstrap_inner
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/threading.py", line 1012 in _bootstrap
Thread 0x0000000825a7c000 (most recent call first):
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/threading.py", line 359 in wait
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/concurrent/futures/_base.py", line 443 in result
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/test/test_concurrent_futures/test_interpreter_pool.py", line 251 in test_submit_exception_in_func
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/unittest/case.py", line 606 in _callTestMethod
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/unittest/case.py", line 660 in run
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/unittest/case.py", line 716 in __call__
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/unittest/suite.py", line 122 in run
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/unittest/suite.py", line 84 in __call__
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/unittest/suite.py", line 122 in run
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/unittest/suite.py", line 84 in __call__
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/unittest/runner.py", line 240 in run
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/test/libregrtest/single.py", line 57 in _run_suite
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/test/libregrtest/single.py", line 37 in run_unittest
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/test/libregrtest/single.py", line 135 in test_func
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/test/libregrtest/refleak.py", line 132 in runtest_refleak
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/test/libregrtest/single.py", line 87 in regrtest_runner
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/test/libregrtest/single.py", line 138 in _load_run_test
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/test/libregrtest/single.py", line 181 in _runtest_env_changed_exc
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/test/libregrtest/single.py", line 281 in _runtest
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/test/libregrtest/single.py", line 310 in run_single_test
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/test/libregrtest/worker.py", line 83 in worker_process
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/test/libregrtest/worker.py", line 118 in main
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/test/libregrtest/worker.py", line 122 in <module>
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/runpy.py", line 88 in _run_code
File "/buildbot/buildarea/3.x.ware-freebsd.refleak/build/Lib/runpy.py", line 198 in _run_module_as_main
```
</details>
<details>
<summary>hang 2</summary>
```
test_shutdown_race_issue12456 (test.test_concurrent_futures.test_interpreter_pool.InterpreterPoolExecutorTest.test_shutdown_race_issue12456) ... Exception in initializer:
RuntimeError: Failed to import encodings module
During handling of the above exception, another exception occurred:
interpreters.Interpreter Error: sub-interpreter creation failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/concurrent/futures/thread.py", line 98, in _worker
ctx.initialize()
~~~~~~~~~~~~~~^^
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/concurrent/futures/interpreter.py", line 131, in initialize
self.interpid = _interpreters.create(reqrefs=True)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
interpreters.InterpreterError: interpreter creation failed
Timeout (0:45:00)!
Thread 0x00007fe3febe2640 (most recent call first):
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/concurrent/futures/thread.py", line 115 in _worker
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/threading.py", line 992 in run
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/threading.py", line 1041 in _bootstrap_inner
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/threading.py", line 1012 in _bootstrap
Thread 0x00007fe3fcbda640 (most recent call first):
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/concurrent/futures/thread.py", line 115 in _worker
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/threading.py", line 992 in run
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/threading.py", line 1041 in _bootstrap_inner
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/threading.py", line 1012 in _bootstrap
Thread 0x00007fe3e77fe640 (most recent call first):
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/concurrent/futures/thread.py", line 115 in _worker
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/threading.py", line 992 in run
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/threading.py", line 1041 in _bootstrap_inner
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/threading.py", line 1012 in _bootstrap
Thread 0x00007fe3e7fff640 (most recent call first):
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/concurrent/futures/thread.py", line 115 in _worker
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/threading.py", line 992 in run
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/threading.py", line 1041 in _bootstrap_inner
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/threading.py", line 1012 in _bootstrap
Thread 0x00007fe3ffe6f740 (most recent call first):
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/threading.py", line 1092 in join
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/concurrent/futures/thread.py", line 272 in shutdown
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/test/test_concurrent_futures/executor.py", line 79 in test_shutdown_race_issue12456
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/unittest/case.py", line 606 in _callTestMethod
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/unittest/case.py", line 660 in run
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/unittest/case.py", line 716 in __call__
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/unittest/suite.py", line 122 in run
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/unittest/suite.py", line 84 in __call__
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/unittest/suite.py", line 122 in run
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/unittest/suite.py", line 84 in __call__
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/unittest/runner.py", line 240 in run
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/test/libregrtest/single.py", line 57 in _run_suite
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/test/libregrtest/single.py", line 37 in run_unittest
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/test/libregrtest/single.py", line 135 in test_func
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/test/libregrtest/refleak.py", line 132 in runtest_refleak
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/test/libregrtest/single.py", line 87 in regrtest_runner
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/test/libregrtest/single.py", line 138 in _load_run_test
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/test/libregrtest/single.py", line 181 in _runtest_env_changed_exc
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/test/libregrtest/single.py", line 281 in _runtest
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/test/libregrtest/single.py", line 310 in run_single_test
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/test/libregrtest/worker.py", line 83 in worker_process
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/test/libregrtest/worker.py", line 118 in main
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/test/libregrtest/worker.py", line 122 in <module>
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/runpy.py", line 88 in _run_code
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/runpy.py", line 198 in _run_module_as_main
```
</details>
<details>
<summary>test failed</summary>
```
======================================================================
FAIL: test_free_reference (test.test_concurrent_futures.test_interpreter_pool.InterpreterPoolExecutorTest.test_free_reference)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/buildbot/buildarea/3.x.itamaro-centos-aws.refleak.nogil/build/Lib/test/test_concurrent_futures/executor.py", line 132, in test_free_reference
self.assertIsNone(wr())
~~~~~~~~~~~~~~~~~^^^^^^
AssertionError: <test.test_concurrent_futures.executor.MyObject object at 0x200121a00a0> is not None
```
</details>
<details>
<summary>USAN</summary>
```
test_map_exception (test.test_concurrent_futures.test_interpreter_pool.InterpreterPoolExecutorTest.test_map_exception) ... Python/thread_pthread.h:555:42: runtime error: null pointer passed as argument 1, which is declared to never be null
/usr/include/semaphore.h:55:36: note: nonnull attribute specified here
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior Python/thread_pthread.h:555:42 in
Fatal Python error: Segmentation fault
Thread 0x00007f89aedfd6c0 (most recent call first):
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/concurrent/futures/interpreter.py", line 131 in initialize
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/concurrent/futures/thread.py", line 98 in _worker
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/threading.py", line 992 in run
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/threading.py", line 1041 in _bootstrap_inner
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/threading.py", line 1012 in _bootstrap
Thread 0x00007f89adcfb6c0 (most recent call first):
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/concurrent/futures/interpreter.py", line 131 in initialize
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/concurrent/futures/thread.py", line 98 in _worker
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/threading.py", line 992 in run
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/threading.py", line 1041 in _bootstrap_inner
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/threading.py", line 1012 in _bootstrap
Current thread 0x00007f89af7fe6c0 (most recent call first):
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/concurrent/futures/interpreter.py", line 137 in initialize
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/concurrent/futures/thread.py", line 98 in _worker
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/threading.py", line 992 in run
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/threading.py", line 1041 in _bootstrap_inner
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/threading.py", line 1012 in _bootstrap
Thread 0x00007f89ad4fa6c0 (most recent call first):
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/concurrent/futures/interpreter.py", line 131 in initialize
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/concurrent/futures/thread.py", line 98 in _worker
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/threading.py", line 992 in run
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/threading.py", line 1041 in _bootstrap_inner
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/threading.py", line 1012 in _bootstrap
Thread 0x00007f89b64500c0 (most recent call first):
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/threading.py", line 359 in wait
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/concurrent/futures/_base.py", line 443 in result
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/concurrent/futures/_base.py", line 309 in _result_or_cancel
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/concurrent/futures/_base.py", line 611 in result_iterator
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/test/test_concurrent_futures/executor.py", line 54 in test_map_exception
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/unittest/case.py", line 606 in _callTestMethod
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/unittest/case.py", line 660 in run
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/unittest/case.py", line 716 in __call__
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/unittest/suite.py", line 122 in run
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/unittest/suite.py", line 84 in __call__
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/unittest/suite.py", line 122 in run
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/unittest/suite.py", line 84 in __call__
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/unittest/runner.py", line 240 in run
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/test/libregrtest/single.py", line 57 in _run_suite
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/test/libregrtest/single.py", line 37 in run_unittest
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/test/libregrtest/single.py", line 135 in test_func
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/test/libregrtest/single.py", line 91 in regrtest_runner
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/test/libregrtest/single.py", line 138 in _load_run_test
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/test/libregrtest/single.py", line 181 in _runtest_env_changed_exc
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/test/libregrtest/single.py", line 281 in _runtest
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/test/libregrtest/single.py", line 310 in run_single_test
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/test/libregrtest/worker.py", line 83 in worker_process
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/test/libregrtest/worker.py", line 118 in main
File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Lib/test/libregrtest/worker.py", line 122 in <module>
File "<frozen runpy>", line 88 in _run_code
File "<frozen runpy>", line 198 in _run_module_as_main
Extension modules: _testinternalcapi (total: 1)
UndefinedBehaviorSanitizer:DEADLYSIGNAL
==2260212==ERROR: UndefinedBehaviorSanitizer: SEGV on unknown address 0x03e800227cf4 (pc 0x7f89b64e1194 bp 0x7f89af7fd370 sp 0x7f89af7fd330 T2260241)
==2260212==The signal is caused by a READ memory access.
#0 0x7f89b64e1194 (/usr/lib/libc.so.6+0x90194) (BuildId: 915eeec6439cfded1125deefc44a8d73e57873d9)
#1 0x7f89b648dd6f in raise (/usr/lib/libc.so.6+0x3cd6f) (BuildId: 915eeec6439cfded1125deefc44a8d73e57873d9)
#2 0x555a15c75b3d in faulthandler_fatal_error /buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/./Modules/faulthandler.c:338:5
#3 0x7f89b648de1f (/usr/lib/libc.so.6+0x3ce1f) (BuildId: 915eeec6439cfded1125deefc44a8d73e57873d9)
#4 0x7f89b64e7504 in sem_wait (/usr/lib/libc.so.6+0x96504) (BuildId: 915eeec6439cfded1125deefc44a8d73e57873d9)
#5 0x555a15c3e86b in PyThread_acquire_lock_timed /buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Python/thread_pthread.h:555:33
#6 0x7f89b47308c6 in _queues_add /buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/./Modules/_interpqueuesmodule.c:909:5
#7 0x7f89b47308c6 in queue_create /buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/./Modules/_interpqueuesmodule.c:1103:19
#8 0x7f89b47308c6 in queuesmod_create /buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/./Modules/_interpqueuesmodule.c:1487:19
#9 0x555a157a3caa in cfunction_call /buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Objects/methodobject.c:551:18
#10 0x555a15677105 in _PyObject_MakeTpCall /buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Objects/call.c:242:18
#11 0x555a15a556ef in _PyEval_EvalFrameDefault /buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Python/generated_cases.c.h:2759:35
#12 0x555a15680c7c in _PyObject_VectorcallTstate /buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/./Include/internal/pycore_call.h:167:11
#13 0x555a1567e32f in method_vectorcall /buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Objects/classobject.c:71:20
#14 0x555a15da9af5 in thread_run /buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/./Modules/_threadmodule.c:337:21
#15 0x555a15c3f5f3 in pythread_wrapper /buildbot/buildarea/3.x.pablogsal-arch-x86_64.clang-ubsan-function/build/Python/thread_pthread.h:242:5
#16 0x7f89b64df1ce (/usr/lib/libc.so.6+0x8e1ce) (BuildId: 915eeec6439cfded1125deefc44a8d73e57873d9)
#17 0x7f89b65606eb (/usr/lib/libc.so.6+0x10f6eb) (BuildId: 915eeec6439cfded1125deefc44a8d73e57873d9)
UndefinedBehaviorSanitizer can not provide additional info.
SUMMARY: UndefinedBehaviorSanitizer: SEGV (/usr/lib/libc.so.6+0x90194) (BuildId: 915eeec6439cfded1125deefc44a8d73e57873d9)
==2260212==ABORTING
```
</details>
CC @encukou
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-125802
* gh-125803
* gh-125808
* gh-125817
<!-- /gh-linked-prs -->
| 44f841f01af0fb038e142a07f15eda1ecdd5b08a | 9dde4638e44639d45bd7d72e70a8d410995a585a |
python/cpython | python__cpython-125735 | # Enum by-value lookup no longer compares hashable argument to unhashable values
# Bug report
### Bug description:
When defining enum members with hashable values (e.g. `frozenset`s), it was previously (prior to 3.13) possible to look those members up by passing in unhashable values (e.g. `set`s) that compare equal to those original values.
This no longer works as of 3.13 (specifically GH-112514), since the unhashable argument is now only compared to unhashable enum values. I haven't been able to find any documentation of this change in the What's New or even the changelog, so I'm assuming it's not entirely intended.
Reproducible example:
```python
from enum import Enum
class Directions(Enum):
DOWN_ONLY = frozenset({"sc"})
UP_ONLY = frozenset({"cs"})
UNRESTRICTED = frozenset({"sc", "cs"})
dirs = {"sc"}
# in 3.13, this raises ValueError
the_dirs = Directions(dirs)
assert the_dirs is Directions.DOWN_ONLY
```
Note that in this example (which is based on [our use case](https://github.com/freeciv/freeciv/blob/0633ecb062fd76aa6c7ed11cd96ea3bd3ff864b5/common/generate_packets.py#L3341)) there's the obvious user-side fix of `Directions(frozenset(dirs))`, but if there happens to be another hashable-equal-to-unhashable use case outside of (frozen)sets, such a workaround might not exist.
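The user-side workaround mentioned above can be sketched as follows (the enum definition is copied from the example; converting the lookup argument to the hashable type first works on 3.13 as well as earlier versions):

```python
from enum import Enum

class Directions(Enum):
    DOWN_ONLY = frozenset({"sc"})
    UP_ONLY = frozenset({"cs"})
    UNRESTRICTED = frozenset({"sc", "cs"})

dirs = {"sc"}
# Converting the unhashable set to a frozenset before the by-value
# lookup sidesteps the hashable/unhashable comparison split:
the_dirs = Directions(frozenset(dirs))
assert the_dirs is Directions.DOWN_ONLY
```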
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-125735
* gh-125851
<!-- /gh-linked-prs -->
| aaed91cabcedc16c089c4b1c9abb1114659a83d3 | 079875e39589eb0628b5883f7ffa387e7476ec06 |
python/cpython | python__cpython-125704 | # _Py_DECREF_SPECIALIZED doesn't respect pymalloc tracking
PR https://github.com/python/cpython/pull/30872 added `_Py_DECREF_SPECIALIZED` which calls `destruct` over the object. When `destruct` is `PyObject_Free` this skips `tracemalloc` counting reporting that the memory is alive. This also makes debuggers segfault because they think that the object is alive because they did not get a notification. This is the code that's not getting called:
https://github.com/python/cpython/blob/2e950e341930ea79549137d4d3771d5edb940e65/Objects/object.c#L2924-L2928
This may also qualify as a regression, as it makes tracemalloc report incorrect memory usage for these objects
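For context, this is what correct, symmetric accounting looks like at the Python level (this does not reproduce the bug, which lives in a C fast path; it only shows that frees are normally reflected in tracemalloc's numbers):

```python
import tracemalloc

tracemalloc.start()
blocks = [bytearray(10_000) for _ in range(50)]
traced_before, _peak = tracemalloc.get_traced_memory()
del blocks
traced_after, _peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
# Freeing the bytearrays must show up as a drop in traced memory;
# an untracked free (as in the bug) would leave the number inflated.
assert traced_after < traced_before
```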
<!-- gh-linked-prs -->
### Linked PRs
* gh-125704
* gh-125705
* gh-125706
* gh-125707
* gh-125712
* gh-125791
<!-- /gh-linked-prs -->
| f8ba9fb2ce6690d2dd05b356583e8e4790badad7 | 6d93690954daae9e9a368084765a4005f957686d |
python/cpython | python__cpython-125699 | # Configure `EXEEXT` hacks are interfering with `AX_C_FLOAT_WORDS_BIGENDIAN`
# Bug report
### Bug description:
We mess up `EXEEXT` in `configure.ac`:
https://github.com/python/cpython/blob/cda0ec8e7c4e9a010e5f73c5afaf18f86cb27b97/configure.ac#L1323-L1340
This creates problems[^1], since `AX_C_FLOAT_WORDS_BIGENDIAN` expects `EXEEXT` and `ac_exeext` to be the same. `EXEEXT` and `ac_exeext` are set up by `AC_PROG_CC`:
https://github.com/python/cpython/blob/cda0ec8e7c4e9a010e5f73c5afaf18f86cb27b97/configure.ac#L1026
We can mitigate this by:
1. setting `ac_exeext=$EXEEXT` after L1340 in `configure.ac`
2. use another variable than `EXEEXT`; for example `EXE_SUFFIX`
3. other workarounds?
My gut feel regarding these is that I'd really not like to add more `EXEEXT` hacks, so I'd like to avoid 1). 2) _should_ be ok, given that no-one else are depending on `EXEEXT` (cc. @hroncok).
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux, macOS, Other
[^1]: https://github.com/python/cpython/pull/125571#issuecomment-2422385731, https://github.com/python/cpython/pull/125571#issuecomment-2422414137
<!-- gh-linked-prs -->
### Linked PRs
* gh-125699
* gh-125758
* gh-125995
* gh-126006
* gh-126007
<!-- /gh-linked-prs -->
| e924bb667a19ee1812d6c7592a37dd37346dda04 | 14cafe1a108cf0be73a27a0001003b5897eec8f0 |
python/cpython | python__cpython-125687 | # Python implementation of `json.loads()` accepts non-ascii digits
# Bug report
### Bug description:
You should be careful when matching unicode regexes:
https://github.com/python/cpython/blob/a0f5c8e6272a1fd5422892d773923b138e77ae5f/Lib/json/scanner.py#L11-L13
```python
>>> import sys
>>> sys.modules["_json"] = None
>>> import json
>>> json.loads("[1\uff10, 0.\uff10, 0e\uff10]")
[10, 0.0, 0.0]
```
I think it's safer to use `[0-9]` instead of `\d` here.
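The difference can be checked directly with the `re` module (a small demonstration, not the patch itself):

```python
import re

# In a str pattern, \d matches any Unicode decimal digit, for example
# FULLWIDTH DIGIT ZERO (U+FF10)...
assert re.fullmatch(r"\d", "\uff10") is not None
# ...while [0-9] is restricted to the ASCII digits:
assert re.fullmatch(r"[0-9]", "\uff10") is None
assert re.fullmatch(r"[0-9]", "7") is not None
```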
### CPython versions tested on:
3.13
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-125687
* gh-125692
* gh-125693
<!-- /gh-linked-prs -->
| d358425e6968858e52908794d15f37e62abc74ec | a0f5c8e6272a1fd5422892d773923b138e77ae5f |
python/cpython | python__cpython-125680 | # Multiprocessing Lock and RLock - invalid representation string on MacOSX.
# Bug report
### Bug description:
Due to the absence of the `sem_getvalue` C function in the macOS semaphore implementation, `Lock` and `RLock` representation strings are invalid in the `multiprocessing` module.
The call to `self._semlock._get_value()` raises an exception, and the affected part of the repr falls back to **'unknown'**.
```py
import multiprocessing as mp
print(mp.Lock())   # <Lock(owner=unknown)> vs <Lock(owner=None)> on Linux
print(mp.RLock())  # <RLock(unknown, unknown)> vs <RLock(None, 0)> on Linux
```
I propose to replace the following test in the `__repr__` method of each class:
https://github.com/python/cpython/blob/37e533a39716bf7da026eda2b35073ef2eb3d1fb/Lib/multiprocessing/synchronize.py#L177
https://github.com/python/cpython/blob/37e533a39716bf7da026eda2b35073ef2eb3d1fb/Lib/multiprocessing/synchronize.py#L203
with `elif not self._semlock._is_zero():`.
This method is available and valid on each OS.
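For reference, the current behaviour can be observed with a snippet like this (output differs by platform, so only the stable prefix is checked; this demonstrates the symptom, not the proposed `_is_zero()` fix):

```python
import multiprocessing as mp

lock = mp.Lock()
rlock = mp.RLock()
# Linux typically prints <Lock(owner=None)> / <RLock(None, 0)>;
# macOS falls back to <Lock(owner=unknown)> / <RLock(unknown, unknown)>.
print(repr(lock))
print(repr(rlock))
assert repr(lock).startswith("<Lock(")
assert repr(rlock).startswith("<RLock(")
```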
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-125680
* gh-126533
* gh-126534
<!-- /gh-linked-prs -->
| 75f7cf91ec5afc6091a0fd442a1f0435c19300b2 | d46d3f2ec783004f0927c9f5e6211a570360cf3b |
python/cpython | python__cpython-125675 | # Wrong type for first parameter of `newfunc` C function type
# Documentation
The first parameter of the `newfunc` function type has type `PyTypeObject *`:
https://github.com/python/cpython/blob/a0f5c8e6272a1fd5422892d773923b138e77ae5f/Include/object.h#L354
However, in a couple of places in the documentation it is shown as `PyObject *`:
https://github.com/python/cpython/blob/a0f5c8e6272a1fd5422892d773923b138e77ae5f/Doc/c-api/typeobj.rst?plain=1#L355-L361
https://github.com/python/cpython/blob/a0f5c8e6272a1fd5422892d773923b138e77ae5f/Doc/c-api/typeobj.rst?plain=1#L2647-L2651
<!-- gh-linked-prs -->
### Linked PRs
* gh-125675
* gh-128448
* gh-128449
<!-- /gh-linked-prs -->
| 616468b87bc5bcf5a4db688637ef748e1243db8a | 5768fef355a55aa9c6522e5444de9346bd267972 |
python/cpython | python__cpython-125730 | # Some weird `make test` behavior
# Bug report
### Bug description:
To varying degrees, this affects the 3.12, 3.13, and main branches. Other branches are also probably affected but I haven't tested them. They aren't showstoppers, but they are annoying.
1. `make test` breaks the terminal. All input is hidden until you run the `reset` command.
2. Tcl/Tk tests hijack keyboard input. This doesn't seem to affect 3.12, moderately affects 3.13, and really breaks on main. Essentially the tests pop up tons of windows, lots of stuff happens in those windows, and while that test is running, the desktop is largely unusable. You can get some mouse events through, but no keyboard events. It's especially bad with `main` - you basically can't get anything done until the tests complete.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-125730
<!-- /gh-linked-prs -->
| 1f16df4bfe5cfbe4ac40cc9c6d15f44bcfd99a64 | 1064141967a2d22c2ae9e22ae77e8c9616559947 |
python/cpython | python__cpython-125668 | # qidarg_converter_data Values Not Properly Initialized
# Bug report
### Bug description:
In https://github.com/python/cpython/pull/124548#issuecomment-2420488611, @ZeroIntensity noted that the various locals of type `qidarg_converter_data` in _interpqueuesmodule.c are not statically initialized. Whether or not they are initialized to zeros by default is undefined behavior, so it depends on the compiler. This could be a problem since the corresponding arg converter function branches on whether or not the "label" field is NULL. This may (or may not) be related to the buildbot failures noted in that other issue.
Fixing this will be fairly trivial.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-125668
* gh-125670
<!-- /gh-linked-prs -->
| 7cf2dbc3cb3ef7be65a98bbfc87246d36d795c82 | c3164ae3cf4e8f9ccc4df8ea5f5664c5927ea839 |
python/cpython | python__cpython-125732 | # PyREPL exits the interpreter on specific input
# Bug report
### Bug description:
Playing with the Python 3.13.0 interpreter installed from brew on macOS 14.7 in a Terminal window.
I entered the repl and typed
```python
>>> import math
>>> math.tau
6.283185307179586
```
Then I hit ctrl-P to get the last line and hit SPC, `/`, `2`. Oddly, the SPC got eaten and the slash ended up right next to the `u`. I didn't notice that until I hit Enter, at which point the exception occurred and exited the interpreter.
Unfortunately, I cannot reproduce it again.
```python
% python3
Python 3.13.0 (main, Oct 7 2024, 05:02:14) [Clang 15.0.0 (clang-1500.3.9.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import math
>>> math.tau
6.283185307179586
>>> math.tau/ 2
SyntaxError: source code string cannot contain null bytes (<python-input-2>)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/usr/local/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/_pyrepl/__main__.py", line 6, in <module>
__pyrepl_interactive_console()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/usr/local/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/_pyrepl/main.py", line 59, in interactive_console
run_multiline_interactive_console(console)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^
File "/usr/local/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/_pyrepl/simple_interact.py", line 160, in run_multiline_interactive_console
more = console.push(_strip_final_indent(statement), filename=input_name, _symbol="single") # type: ignore[call-arg]
File "/usr/local/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/code.py", line 313, in push
more = self.runsource(source, filename, symbol=_symbol)
File "/usr/local/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/_pyrepl/console.py", line 179, in runsource
self.showsyntaxerror(filename, source=source)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/_pyrepl/console.py", line 165, in showsyntaxerror
super().showsyntaxerror(filename=filename, **kwargs)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/code.py", line 115, in showsyntaxerror
self._showtraceback(typ, value, None, source)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/code.py", line 140, in _showtraceback
and not value.text and len(lines) >= value.lineno):
^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: '>=' not supported between instances of 'int' and 'NoneType'
```
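The final `TypeError` can be reproduced in isolation: `SyntaxError.lineno` may be `None` (as in the null-byte case here), while the guard in `code.py` compares it to an `int` unconditionally:

```python
err = SyntaxError("source code string cannot contain null bytes")
assert err.lineno is None  # lineno is not always set
try:
    3 >= err.lineno
except TypeError as exc:
    # Same failure mode as the traceback above:
    assert "'>=' not supported" in str(exc)
```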
### CPython versions tested on:
3.13
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-125732
* gh-126023
<!-- /gh-linked-prs -->
| 44becb8cba677cbfdbcf2f7652277e5e1efc4f20 | 51b012b2a8093c92ef2c06884f9719657f9b17f7 |
python/cpython | python__cpython-125691 | # Update turtledemo docstrings with correct file names
# Bug report
### Bug description:
In #123370 , I discussed with Terry some measures to modernize the turtledemo. This requires changes to each file. **First**, I want to fix the incorrect file names in the comments, which is the easiest to review. There are five measures in total, e.g. German function names and parameter names, changing to ISO-format time, and others focusing on clock.py. I have already completed 2 in my initial commit in https://github.com/python/cpython/commit/a0b922f4ee1b3c59d220c8d2894821277a357a7a , but I later reverted the changes due to backporting concerns.
From previous discussion:
> I'd like to see a follow-up 'modernize PR to
> Replace module docstring with "turtledemo/clock.py -- clock program showing time and date." File name is wrong and instruction only applies when run in turtledemo, which it is redundant.
(clock.py fixed)
This issue is a bug fix, not a modernization enhancement?
<!-- gh-linked-prs -->
### Linked PRs
* gh-125691
<!-- /gh-linked-prs -->
| 9c01db40aa5edbd75ce50342c08f7ed018ee7864 | 6f26d496d3c894970ee18a125e9100791ebc2b36 |
python/cpython | python__cpython-125683 | # Python implementation of `json.loads()` accepts invalid unicode escapes
# Bug report
### Bug description:
While reviewing #125652 and reading the documentation of [`int()`](https://docs.python.org/3/library/functions.html#int), I realised this condition in `json.decoder` is insufficient:
https://github.com/python/cpython/blob/f203d1cb52f7697140337a73841c8412282e2ee0/Lib/json/decoder.py#L61
```python
>>> import sys
>>> sys.modules["_json"] = None
>>> import json
>>> json.loads(r'["\u 000", "\u-000", "\u+000", "\u0_00"]')
['\x00', '\x00', '\x00', '\x00']
```
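The underlying reason is that `int(s, 16)` accepts more than the four hex digits the JSON grammar allows: it tolerates surrounding whitespace, a sign, and underscores between digits.

```python
# All of these succeed even though none is a valid \uXXXX escape body:
assert int(" 000", 16) == 0   # leading whitespace
assert int("-000", 16) == 0   # sign
assert int("+000", 16) == 0   # sign
assert int("0_00", 16) == 0   # underscore between digits
```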
### CPython versions tested on:
3.13
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-125683
* gh-125694
* gh-125695
<!-- /gh-linked-prs -->
| df751363e386d1f77c5ba9515a5539902457d386 | d358425e6968858e52908794d15f37e62abc74ec |
python/cpython | python__cpython-125645 | # Update location of `locations.md`
# Documentation
Commit d484383861b44b4cf76f31ad6af9a0b413334a89 moved the location of `locations.md`. The `InternalDocs/compiler.md` file should update the reference.
<!-- gh-linked-prs -->
### Linked PRs
* gh-125645
<!-- /gh-linked-prs -->
| 0d88b995a641315306d56fba7d07479b2c5f57ef | 37986e830ba25d2c382988b06bbe27410596346c |
python/cpython | python__cpython-125634 | # Add function `ispackage` to stdlib `inspect`
# Feature or enhancement
### Proposal:
Currently, the built-in module `inspect` lacks a function to determine whether an object is a package or not. This is a small but useful feature.
According to the documentation, this feature should not be difficult to implement, and we should add it to the standard library:
1. [Definition of a package](https://docs.python.org/3/reference/import.html#packages)
2. [module.\_\_package\_\_](https://docs.python.org/3/reference/datamodel.html#module.__package__)
3. [module.\_\_name\_\_](https://docs.python.org/3/reference/datamodel.html#module.__name__)
4. [module.\_\_spec\_\_](https://docs.python.org/3/reference/datamodel.html#module.__spec__)
I've tried to implement this feature and will make a PR later.
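A minimal sketch of such a helper, based on the definition of a package in the import-system docs (the name and exact semantics here are assumptions, not the eventual stdlib API):

```python
import types

def ispackage(obj):
    """Return True if obj is a module that is a package.

    Packages are the modules that have a __path__ attribute
    (per the import-system documentation).
    """
    return isinstance(obj, types.ModuleType) and hasattr(obj, "__path__")

import json            # a package (json/__init__.py)
import json.decoder    # a plain submodule
assert ispackage(json)
assert not ispackage(json.decoder)
assert not ispackage("json")  # non-modules are rejected
```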
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-125634
<!-- /gh-linked-prs -->
| dad34531298fc0ea91b9000aafdd2ea2fce5e54a | c51b56038ba344dece607eb5f035dca544187813 |
python/cpython | python__cpython-125626 | # Allow 3.13 py.exe can be use on PCBuild
# Bug report
### Bug description:
> @rem It is fine to add new versions to this list when they have released,
> @rem but we do not use prerelease builds here
### CPython versions tested on:
3.13, 3.14, CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-125626
* gh-125649
* gh-125650
<!-- /gh-linked-prs -->
| 0cb20f2e7e867d5c34fc17dd5b8e51e8b0020bb3 | c124577ebe915a00de4033c0f7fa7c47621d79e0 |
python/cpython | python__cpython-125621 | # test_resource_tracker_sigkill fails on NetBSD: AssertionError for warning count in test_multiprocessing_fork
# Bug report
### Bug description:
```sh
-bash-5.2$ ./python -m test test_multiprocessing_fork.test_misc -m test_resource_tracker_sigkill
```
```sh
Using random seed: 1947163462
0:00:00 load avg: 0.16 Run 1 test sequentially in a single process
0:00:00 load avg: 0.16 [1/1] test_multiprocessing_fork.test_misc
test test_multiprocessing_fork.test_misc failed -- Traceback (most recent call last):
File "/home/blue/cpython/Lib/test/_test_multiprocessing.py", line 5766, in test_resource_tracker_sigkill
self.check_resource_tracker_death(signal.SIGKILL, True)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
File "/home/blue/cpython/Lib/test/_test_multiprocessing.py", line 5748, in check_resource_tracker_death
self.assertEqual(len(all_warn), 1)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
AssertionError: 0 != 1
test_multiprocessing_fork.test_misc failed (1 failure) in 3 min
== Tests result: FAILURE ==
1 test failed:
test_multiprocessing_fork.test_misc
```
OS: NetBSD 10.0
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-125621
* gh-125624
* gh-125627
* gh-125628
* gh-125672
* gh-125673
<!-- /gh-linked-prs -->
| a0f5c8e6272a1fd5422892d773923b138e77ae5f | 77cebb1ce9baac9e01a45d34113c3bea74940d90 |
python/cpython | python__cpython-132818 | # [3.14] annotationlib - get_annotations returns an empty annotations dict if an `AttributeError` is raised when `__annotations__` is accessed
# Bug report
### Bug description:
If there's an annotation with an incorrect attribute access, the `AttributeError` causes `get_annotations` to return an empty dictionary for the annotations instead of failing or returning a `ForwardRef`.
```python
import typing
from annotationlib import get_annotations, Format
class Example2:
    real_attribute: typing.Any
    fake_attribute: typing.DoesNotExist
new_ann = get_annotations(Example2, format=Format.FORWARDREF)
print(f"{new_ann=}")
# This should fail, but instead returns an empty dict
value_ann = get_annotations(Example2, format=Format.VALUE)
print(f"{value_ann=}")
string_ann = get_annotations(Example2, format=Format.STRING)
print(f"{string_ann=}")
```
Output
```
new_ann={}
value_ann={}
string_ann={'real_attribute': 'typing.Any', 'fake_attribute': 'typing.DoesNotExist'}
```
I think this is due to `_get_dunder_annotations` catching `AttributeError` and returning an empty dict, intended for static types but catching this by mistake.
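A minimal reproduction of that suspected failure mode, using a hypothetical simplification of the helper (not the real `annotationlib` code):

```python
def get_dunder_annotations_sketch(obj):
    # Hypothetical simplification: the except clause is meant for
    # objects that simply lack __annotations__, but it also swallows
    # an AttributeError raised *while evaluating* lazy annotations.
    try:
        ann = obj.__annotations__
    except AttributeError:
        return {}
    return dict(ann)

class Meta(type):
    @property
    def __annotations__(cls):
        # Simulates lazy annotations that fail during evaluation.
        raise AttributeError("module 'typing' has no attribute 'DoesNotExist'")

class Broken(metaclass=Meta):
    pass

print(get_dunder_annotations_sketch(Broken))  # prints: {}
```

The evaluation error is silently turned into "no annotations", which matches the empty dicts seen above.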
### CPython versions tested on:
3.14
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-132818
<!-- /gh-linked-prs -->
| af5799f3056b0eee61fc09587633500a3690e67e | 3109c47be8fc00df999c5bff01229a6b93513224 |
python/cpython | python__cpython-125617 | # Grammar error in appendix: "with output nor the ..." should be "with neither output nor the..."
See: https://github.com/python/cpython/blob/760872efecb95017db8e38a8eda614bf23d2a22c/Doc/tutorial/appendix.rst?plain=1#L23
<!-- gh-linked-prs -->
### Linked PRs
* gh-125617
* gh-125619
<!-- /gh-linked-prs -->
| aab3210271136ad8e8fecd927b806602c463e1f2 | 760872efecb95017db8e38a8eda614bf23d2a22c |
python/cpython | python__cpython-125635 | # [3.14] annotationlib - `get_annotations` with `Format.FORWARDREF` returns `_Stringifier` instances instead of ForwardRef for `module.type` annotations where the module is not loaded
# Bug report
### Bug description:
Tested on 3.14.0a1 - related to implementation of PEP 649 / PEP 749 - #119180
If a type uses dotted access but can't be evaluated, `get_annotations` returns `_Stringifier` objects instead of `ForwardRef` objects.
```python
import typing
from typing import Any
from annotationlib import get_annotations, Format

class Example:
    dotted: typing.Any
    undotted: Any

ann = get_annotations(Example, format=Format.FORWARDREF)
for name, value in ann.items():
    print(f"{name!r}: {value!r} | type: {type(value)!r}")
```
Output:
```
'dotted': typing.Any | type: <class 'annotationlib._Stringifier'>
'undotted': ForwardRef('Any') | type: <class 'annotationlib.ForwardRef'>
```
I'd expect to see `ForwardRef('typing.Any')` here so this was surprising. I also found another bug related to dotted access but not related to Stringifiers that I'll raise separately.
### CPython versions tested on:
3.14
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-125635
<!-- /gh-linked-prs -->
| d3be6f945a4def7d123b2ef4d11d59abcdd3e446 | 8f2c0f7a03b71485b5635cb47c000e4e8ace8800 |
python/cpython | python__cpython-125612 | # Bad specialization of `STORE_ATTR_INSTANCE_VALUE` with `obj.__dict__`
Consider:
```python
class MyObject: pass

def func():
    o = MyObject()
    o.__dict__
    for _ in range(100):
        o.foo = "bar"
        o.baz = "qux"

for _ in range(100):
    func()
```
```
opcode[STORE_ATTR_INSTANCE_VALUE].specialization.miss : 20382
opcode[STORE_ATTR_INSTANCE_VALUE].execution_count : 21167
```
The `STORE_ATTR_INSTANCE_VALUE` has a guard `_GUARD_DORV_NO_DICT` that ensures that the object does not have a managed dictionary:
https://github.com/python/cpython/blob/760872efecb95017db8e38a8eda614bf23d2a22c/Python/bytecodes.c#L2269
However, the specializer for `STORE_ATTR_INSTANCE_VALUE` does not take that into account. It only checks that the inline values are valid:
https://github.com/python/cpython/blob/760872efecb95017db8e38a8eda614bf23d2a22c/Python/specialize.c#L867-L886
I'm not sure if we should change the guard or change `specialize.c`
<!-- gh-linked-prs -->
### Linked PRs
* gh-125612
* gh-127698
<!-- /gh-linked-prs -->
| a353455fca1b8f468ff3ffbb4b5e316510b4fd43 | 36c6178d372b075e9c74b786cfb5e47702976b1c |
python/cpython | python__cpython-125611 | # Python dictionary watchers no longer trigger when an object's attributes change
# Bug report
Here's an example function using [`test_capi/test_watchers.py`](https://github.com/python/cpython/blob/main/Lib/test/test_capi/test_watchers.py) that passes in 3.12, but fails in 3.13 and 3.14:
```python
def test_watch_obj_dict(self):
    o = MyClass()
    with self.watcher() as wid:
        self.watch(wid, o.__dict__)
        o.foo = "bar"
        self.assert_events(["new:foo:bar"])  # fails in 3.13+
```
The dictionary watcher doesn't fire in 3.13+ even though the `o.__dict__` changes when `o.foo` is set to `"bar"`. The problem is specific to inline values -- our code paths for inline values do not trigger dictionary watcher events. If the object does not use inline values, then the watcher events fire as expected.
This is a problem for [PyTorch Dynamo](https://pytorch.org/docs/stable/torch.compiler.html) because it uses dictionary watchers for compiler guards.
cc @williamwen42 @markshannon
<!-- gh-linked-prs -->
### Linked PRs
* gh-125611
* gh-125982
<!-- /gh-linked-prs -->
| 5989eb74463c26780632f17f221d6bf4c9372a01 | 0cd21406bf84b3b4927a8117024232774823aee0 |
python/cpython | python__cpython-125605 | # Avoid Standalone Sub-structs In pycore_runtime.h
pycore_runtime.h is where the `_PyRuntimeState` struct is declared. Nearly every one of the structs it relies on ("sub-struct") is found in the internal header file that corresponds to that struct's subject matter, rather than in pycore_runtime.h. There are only a few that *are* declared in pycore_runtime.h, mostly because it was easier to put them there than create the appropriate header files.
In the interest of keeping the focus of pycore_runtime.h on `_PyRuntimeState`, I'd like to clear out the standalone sub-structs there.
The involves the following:
* `_Py_DebugOffsets` - move to its own header file (pycore_debugger_utils.h)
* `struct _getargs_runtime_state` - inline in `_PyRuntimeState`
* `struct _gilstate_runtime_state` - inline in `_PyRuntimeState`
* `_Py_AuditHookEntry` - move to its own header file (pycore_audit.h); also move the other audit-related APIs to pycore_audit.h or audit.h (to be added)
* `struct _reftracer_runtime_state` - move to pycore_object_state.h
`_Py_DebugOffsets` is the most meaningful one to move since it is so big (visually occludes `_PyRuntimeState`). For `_Py_AuditHookEntry`, we probably should have added pycore_audit.h in the first place.
<!-- gh-linked-prs -->
### Linked PRs
* gh-125605
<!-- /gh-linked-prs -->
| 6d93690954daae9e9a368084765a4005f957686d | 2e950e341930ea79549137d4d3771d5edb940e65 |
python/cpython | python__cpython-125601 | # Generate less warnings for stale code
# Feature or enhancement
### Proposal:
We added a feature in pdb to give a warning when the source code is changed during debugging, which was a nice feature. However, there are certain scenarios where we don't need to give this warning (like if the user restart the program immediately). Or, to be more specific, we only need to generate this warning when the user browses source code.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-125601
<!-- /gh-linked-prs -->
| 77cebb1ce9baac9e01a45d34113c3bea74940d90 | 7cf2dbc3cb3ef7be65a98bbfc87246d36d795c82 |
python/cpython | python__cpython-125681 | # colors are missing on (Base)ExceptionGroup tracebacks in the pyrepl
### Bug description:
```python
import sys
class InvalidFruitException(Exception):
    pass

def eat_fruit(fruit):
    if fruit == "banana":
        raise InvalidFruitException("no herbs please, only fruit")
    if fruit == "rock":
        raise InvalidFruitException("too tough")

def demo_continue_loop():
    food = ["apple", "rock", "orange", "banana"]
    exceptions = []
    try:
        for i, f in enumerate(food):
            try:
                eat_fruit(f)
            except Exception as e:
                e.add_note(f"failed on loop {i=} {f=}")
                exceptions.append(e)
        if exceptions:
            raise ExceptionGroup("multiple errors eating food", exceptions)
    finally:
        del exceptions  # no refcycles please!

def main():
    demo_continue_loop()
    return 0

if __name__ == "__main__":
    sys.exit(main())
```
here's the output:
https://asciinema.org/a/681263

I'd expect the `eat_fruit()` parts to be highlighted in red
### CPython versions tested on:
3.13, 3.14
### Operating systems tested on:
Linux, macOS
This was added in https://github.com/python/cpython/issues/112730 but it looks like ExceptionGroups were missed
<!-- gh-linked-prs -->
### Linked PRs
* gh-125681
* gh-126021
<!-- /gh-linked-prs -->
| 51b012b2a8093c92ef2c06884f9719657f9b17f7 | f6cc7c8bd01d8468af70a65e550abef3854d0754 |
python/cpython | python__cpython-125616 | # it's no longer possible to delete items from f_locals (FrameLocalsProxy) in 3.13+
# Bug report
### Bug description:
```python
try:
    driver.switch_to.alert.accept()
    driver.switch_to.alert.dismiss()
except:
    pass
```
In the example script above, if the code falls into the `except: pass` block because an alert is not displayed, the script will ultimately fail (even when no coding errors are encountered) with `FAILED (errors=1)`.
Also, the following messages will be displayed in the output:

This code works fine in Python 3.12: the script succeeds with the message OK and none of the above error messages. We can't move to 3.13 until this is fixed.
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-125616
* gh-125797
<!-- /gh-linked-prs -->
| 5b7a872b26a9ba6c93d7c2109559a82d1c1612de | 3d1df3d84e5c75a52b6f1379cd7f2809fc50befa |
python/cpython | python__cpython-125589 | # The python PEG parser generator doesn't allow f-strings anymore in actions
After the changes to f-string tokenisation in 3.12, the PEG parser generator no longer recognises the new tokens.
<!-- gh-linked-prs -->
### Linked PRs
* gh-125589
* gh-127969
<!-- /gh-linked-prs -->
| 9dfef4e5f4ac3c1ce494c48f2476a694c12d72a5 | 0e45b1fd0ffbb165f580ecdfd5234c1d54389501 |
python/cpython | python__cpython-125586 | # `test.test_urllib2.HandlerTests.test_ftp_error` fails without network access (running `-u-network`)
# Bug report
### Bug description:
The newly added `test_ftp_error` (in 77133f570dcad599e5b1199c39e999bfac959ae2, FWICS) is failing in environments without Internet access, even though it is not marked as needing the "network" resource:
```pytb
======================================================================
FAIL: test_ftp_error (test.test_urllib2.HandlerTests.test_ftp_error)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/var/tmp/portage/dev-lang/python-3.14.0_alpha1/work/Python-3.14.0a1/Lib/urllib/request.py", line 1531, in ftp_open
host = socket.gethostbyname(host)
socket.gaierror: [Errno -3] Temporary failure in name resolution
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/tmp/portage/dev-lang/python-3.14.0_alpha1/work/Python-3.14.0a1/Lib/test/test_urllib2.py", line 811, in test_ftp_error
urlopen("ftp://www.pythontest.net/")
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/tmp/portage/dev-lang/python-3.14.0_alpha1/work/Python-3.14.0a1/Lib/urllib/request.py", line 489, in open
response = self._open(req, data)
File "/var/tmp/portage/dev-lang/python-3.14.0_alpha1/work/Python-3.14.0a1/Lib/urllib/request.py", line 506, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
'_open', req)
File "/var/tmp/portage/dev-lang/python-3.14.0_alpha1/work/Python-3.14.0a1/Lib/urllib/request.py", line 466, in _call_chain
result = func(*args)
File "/var/tmp/portage/dev-lang/python-3.14.0_alpha1/work/Python-3.14.0a1/Lib/urllib/request.py", line 1533, in ftp_open
raise URLError(msg)
urllib.error.URLError: <urlopen error [Errno -3] Temporary failure in name resolution>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/tmp/portage/dev-lang/python-3.14.0_alpha1/work/Python-3.14.0a1/Lib/test/test_urllib2.py", line 813, in test_ftp_error
self.assertEqual(raised.reason,
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
f"ftp error: {exception.args[0]}")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: gaierror(-3, 'Temporary failure in name resolution') != 'ftp error: 500 OOPS: cannot change directory:/nonexistent'
```
### CPython versions tested on:
3.14
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-125586
<!-- /gh-linked-prs -->
| e4d90be84536746a966478acc4c0cf43a201f492 | feda9aa73ab95d17a291db22c416146f8e70edeb |
python/cpython | python__cpython-125564 | # Consider pinning runner image instead of using `ubuntu-latest` in JIT CI
It looks like GitHub updated the default/latest Ubuntu runner to use Ubuntu 24.04 from 22.04, which is leading to [failed CI runs](https://github.com/python/cpython/actions/runs/11357684569/job/31591164216?pr=125499). We should consider pinning the version here.
(GitHub's general guidance is to use `ubuntu-latest`, so if folks feel strongly against this, this could also be a temporary measure until the issue resolves.)
See also https://github.com/actions/runner-images/issues/10788
<!-- gh-linked-prs -->
### Linked PRs
* gh-125564
<!-- /gh-linked-prs -->
| c84a136511c673f495f466887716b55c13b7e3ac | 51ef54abc42e020d7e80549d49ca32310495b4eb |
python/cpython | python__cpython-126010 | # untokenize() does not round-trip for code containing line breaks (`\` + `\n`)
# Bug report
### Bug description:
Code which contains line breaks is not round-trip invariant:
```python
import tokenize, io
source_code = r"""
1 + \
2
"""
tokens = list(tokenize.generate_tokens(io.StringIO(source_code).readline))
x = tokenize.untokenize(tokens)
print(x)
# 1 +\
# 2
```
Notice that the space between `+` and `\` is now missing. The current tokenizer code simply inserts a backslash when it encounters two subsequent tokens with differing row offsets:
https://github.com/python/cpython/blob/9c2bb7d551a695f35db953a671a2ddca89426bef/Lib/tokenize.py#L179-L182
I think this should be fixed. The docstring of `tokenize.untokenize` says:
> Round-trip invariant for full input:
> Untokenized source will match input source exactly
To fix this, it will probably be necessary to inspect the raw line contents and count how much whitespace there is at the end of the line.
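One possible shape for that, sketched as a hypothetical helper: every `TokenInfo` carries the physical line in its `line` attribute, so the whitespace that stood before the trailing backslash can be recovered from it:

```python
def backslash_gap(line):
    # Hypothetical helper: the whitespace between the last code
    # character and the trailing backslash of a physical line.
    body = line.rstrip("\n").rstrip("\\")
    return body[len(body.rstrip()):]

print(repr(backslash_gap("1 + \\\n")))  # the lost " "
print(repr(backslash_gap("1 +\\\n")))   # nothing was lost here
```

`untokenize` could emit this gap before the backslash instead of gluing `\` directly onto the previous token.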
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-126010
* gh-129153
* gh-130579
<!-- /gh-linked-prs -->
| 7ad793e5dbdf07e51a71b70d20f3e6e3ab60244d | a4760ef8e5463116b3076e0f5e3c38b314f7b20f |
python/cpython | python__cpython-125551 | # Allow py.exe to detect 3.14 installs
Update the list of known package names to include 3.14 packages.
This should be backported to all branches.
<!-- gh-linked-prs -->
### Linked PRs
* gh-125551
* gh-125622
* gh-125623
<!-- /gh-linked-prs -->
| 8e7b2a1161744c7d3d90966a65ed6ae1019a65cb | aecbc2e6f40f8066f478c2d0f3be5b550e36cfd3 |
python/cpython | python__cpython-125563 | # Deprecate `prefix_chars` in `argument_group`
Now, after `conflict_handler` and `argument_default` have been documented, we can deprecate `prefix_chars`. @savannahostrowski, are you interested? We should just add a runtime warning in the `_ArgumentGroup` constructor, a test, and documentation (the `deprecated` directive and something in What's New).
_Originally posted by @serhiy-storchaka in https://github.com/python/cpython/issues/89819#issuecomment-2414391217_
https://docs.python.org/3/library/argparse.html#argument-groups
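A hedged sketch of what the runtime warning could look like (class and parameter names are illustrative, not the real argparse internals):

```python
import warnings

class ArgumentGroupSketch:
    def __init__(self, container, title=None, *, prefix_chars=None):
        if prefix_chars is not None:
            # The proposed deprecation: warn whenever a caller still
            # passes prefix_chars explicitly.
            warnings.warn(
                "The prefix_chars parameter of argument groups is deprecated",
                DeprecationWarning, stacklevel=2)
        self.prefix_chars = (prefix_chars
                             or getattr(container, "prefix_chars", "-"))

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    ArgumentGroupSketch(object(), prefix_chars="+")
print(len(caught), caught[0].category.__name__)
```

Callers that omit the parameter keep inheriting it from the container and see no warning.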
<!-- gh-linked-prs -->
### Linked PRs
* gh-125563
<!-- /gh-linked-prs -->
| 7b04496e5c7ed47e9653f4591674fc9ffef34587 | 624be8699aec22bef137041478078c6fafaf032e |
python/cpython | python__cpython-125546 | # Make `ctrl-c` interrupt `threading.Lock.acquire()` on Windows
# Feature or enhancement
Since Python 3.2, pressing `ctrl-c` (`SIGINT`) interrupts `threading.Lock.acquire()` on POSIX platforms, including Linux and macOS. However, this does not work on Windows.
Now that `threading.Lock` and `threading.RLock` use `PyMutex` and `_PyRecursiveMutex` internally, this should be a lot easier to implement efficiently.
See also https://github.com/python/cpython/issues/125058
cc @pitrou @zooba @gpshead
<!-- gh-linked-prs -->
### Linked PRs
* gh-125546
<!-- /gh-linked-prs -->
| d8c864816121547338efa43c56e3f75ead98a924 | b454662921fd3a1fc27169e91aca03aadea08817 |
python/cpython | python__cpython-125523 | # avoid bare except: in stdlib
Bare excepts are best avoided; this issue is to reduce their use in the stdlib. It is not the intention to blindly replace them with `except BaseException:`, or to religiously purge all of them, but just to review them and avoid them where feasible.
(see also https://github.com/python/cpython/issues/125514).
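As a reminder of why this matters, a small example: a bare `except:` also swallows `SystemExit` (and `KeyboardInterrupt`), which should normally propagate:

```python
import sys

def swallow_everything():
    try:
        sys.exit(1)
    except:            # bare except: catches SystemExit too
        return "swallowed"

def swallow_errors_only():
    try:
        sys.exit(1)
    except Exception:  # SystemExit subclasses BaseException, so it escapes
        return "swallowed"

print(swallow_everything())  # swallow_errors_only() would raise SystemExit(1)
```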
<!-- gh-linked-prs -->
### Linked PRs
* gh-125523
* gh-125544
* gh-125726
* gh-125727
* gh-126321
* gh-126327
* gh-126328
* gh-129018
* gh-129455
* gh-129456
<!-- /gh-linked-prs -->
| e97910cdb76c1f1dadfc4721b828611e4f4b6449 | c9826c11db25e81b1a90c837f84074879f1b1126 |
python/cpython | python__cpython-125700 | # Strange warning and failure of JIT workflow on `aarch64-unknown-linux-gnu/gcc (Debug)`
# Bug report
Link: https://github.com/python/cpython/actions/runs/11333284963/job/31517441366#step:8:9113
Report:
```
Python/generated_cases.c.h: In function ‘_PyEval_EvalFrameDefault’:
Python/generated_cases.c.h:4738:46: warning: this statement may fall through [-Wimplicit-fallthrough=]
4738 | TARGET(INSTRUMENTED_LOAD_SUPER_ATTR) {
| ^
In file included from Python/ceval.c:679:
Python/ceval_macros.h:77:22: note: here
77 | # define TARGET(op) case op: TARGET_##op:
| ^~~~
Python/generated_cases.c.h:4752:9: note: in expansion of macro ‘TARGET’
4752 | TARGET(INSTRUMENTED_POP_JUMP_IF_FALSE) {
| ^~~~~~
Python/generated_cases.c.h:5049:34: warning: this statement may fall through [-Wimplicit-fallthrough=]
5049 | TARGET(INTERPRETER_EXIT) {
| ^
Python/ceval_macros.h:77:22: note: here
77 | # define TARGET(op) case op: TARGET_##op:
| ^~~~
Python/generated_cases.c.h:5066:9: note: in expansion of macro ‘TARGET’
5066 | TARGET(IS_OP) {
| ^~~~~~
Python/generated_cases.c.h:6817:31: warning: this statement may fall through [-Wimplicit-fallthrough=]
6817 | TARGET(RAISE_VARARGS) {
| ^
Python/ceval_macros.h:77:22: note: here
77 | # define TARGET(op) case op: TARGET_##op:
| ^~~~
Python/generated_cases.c.h:6842:9: note: in expansion of macro ‘TARGET’
6842 | TARGET(RERAISE) {
| ^~~~~~
```
Note that other builds do not have this warning.
Then this job gets canceled due to timeout:
<img width="951" alt="Screenshot 2024-10-15 at 16 10 58" src="https://github.com/user-attachments/assets/6e2d5ad3-f6c6-4b7b-87b3-8ac17aa89b33">
There are several other similar failures due to timeout: https://github.com/python/cpython/actions/runs/11333284963/job/31517441919
<img width="1341" alt="Screenshot 2024-10-15 at 16 12 49" src="https://github.com/user-attachments/assets/3f962504-f609-4553-8aaa-46f64a05baac">
<!-- gh-linked-prs -->
### Linked PRs
* gh-125700
<!-- /gh-linked-prs -->
| 57e3c59bb64fc2f8b2845a7e03ab0abb029ccd02 | c1bdbe84c8ab29b68bb109328e02af9464f104b3 |
python/cpython | python__cpython-125520 | # Messy traceback if `importlib.reload()` is called with a `str`
# Bug report
### Bug description:
If you call `importlib.reload()` with a `str`, there's a huge amount of unnecessary detail in the traceback that makes it look like there's an internal error in the function itself:
```pytb
>>> import importlib
>>> import typing
>>> importlib.reload("typing")
Traceback (most recent call last):
File "/Users/alexw/.pyenv/versions/3.13.0/lib/python3.13/importlib/__init__.py", line 101, in reload
name = module.__spec__.name
^^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute '__spec__'. Did you mean: '__doc__'?
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/alexw/.pyenv/versions/3.13.0/lib/python3.13/importlib/__init__.py", line 104, in reload
name = module.__name__
^^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute '__name__'. Did you mean: '__ne__'?
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<python-input-2>", line 1, in <module>
importlib.reload("typing")
~~~~~~~~~~~~~~~~^^^^^^^^^^
File "/Users/alexw/.pyenv/versions/3.13.0/lib/python3.13/importlib/__init__.py", line 106, in reload
raise TypeError("reload() argument must be a module")
TypeError: reload() argument must be a module
```
We should probably suppress the exception context here.
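A hedged sketch of one way to do that (illustrative, not necessarily the actual patch): re-raising with `from None` sets `__suppress_context__`, so the intermediate `AttributeError`s no longer appear in the traceback:

```python
def reload_name_sketch(module):
    # Mirrors the name lookup at the top of importlib.reload(),
    # simplified; "from None" hides the chained AttributeError context.
    try:
        return module.__spec__.name
    except AttributeError:
        try:
            return module.__name__
        except AttributeError:
            raise TypeError("reload() argument must be a module") from None

try:
    reload_name_sketch("typing")
except TypeError as exc:
    print(exc, "| context suppressed:", exc.__suppress_context__)
```

With the context suppressed, only the final `TypeError` is shown to the user.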
### CPython versions tested on:
3.12, 3.13, CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-125520
* gh-125768
* gh-125769
<!-- /gh-linked-prs -->
| c5c21fee7ae1ea689a351caa454c98e716a6e537 | 9256be7ff0ab035cfd262127d893c9bc88b3c84c |
python/cpython | python__cpython-125518 | # Unused code warning in `_testembed.c`
# Bug report
Link: https://github.com/python/cpython/actions/runs/11333284963/job/31517440086#step:6:1315
Report:
```
./Programs/_testembed.c:1904:5: warning: code will never be executed [-Wunreachable-code]
const char *err_msg;
^~~~~~~~~~~~~~~~~~~~
sed -e "s,/usr/bin/env python3,/usr/local/bin/python3.14," < ./Tools/scripts/pydoc3 > build/scripts-3.14/pydoc3.14
./Programs/_testembed.c:2052:5: warning: code will never be executed [-Wunreachable-code]
const char *err_msg;
sed -e "s,@EXENAME@,/usr/local/bin/python3.14," < ./Misc/python-config.in >python-config.py
^~~~~~~~~~~~~~~~~~~~
```
Source of problem:
- We cannot have `goto` targets on variable declarations.
- So, they are declared right above: https://github.com/python/cpython/blob/cc5a225cdc2a5d4e035dd08d59cef39182c10a6c/Programs/_testembed.c#L1903-L1910
- So, this warning marks `const char *err_msg;` as unreachable code
I propose to declare this variable before the first `goto`. This way we can remove this warning. And since this is just a test, we can do that freely.
<!-- gh-linked-prs -->
### Linked PRs
* gh-125518
<!-- /gh-linked-prs -->
| c8a1818fb01937b66b93728c11d68c9f9af688a5 | cc5a225cdc2a5d4e035dd08d59cef39182c10a6c |
python/cpython | python__cpython-125697 | # Multiple unused code warnings in `Python/generated_cases.c.h`
# Bug report
Link: https://github.com/python/cpython/actions/runs/11333284963/job/31517440086#step:6:1025
Report:
```
In file included from Python/ceval.c:870:
Python/generated_cases.c.h:5062:13: warning: code will never be executed [-Wunreachable-code]
stack_pointer += -1;
^~~~~~~~~~~~~
Python/generated_cases.c.h:4748:13: warning: code will never be executed [-Wunreachable-code]
stack_pointer += -1;
^~~~~~~~~~~~~
Python/generated_cases.c.h:803:31: warning: code will never be executed [-Wunreachable-code]
clang -c -fno-strict-overflow -Wsign-compare -Wunreachable-code -DNDEBUG -g -O3 -Wall -D_Py_TIER2=1 -D_Py_JIT -flto=thin -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -fprofile-instr-generate -I./Include/internal -I./Include/internal/mimalloc -I. -I./Include -DPy_BUILD_CORE -o Python/codegen.o Python/codegen.c
for (int _i = oparg; --_i >= 0;) {
^~~~~
Python/generated_cases.c.h:689:31: warning: code will never be executed [-Wunreachable-code]
for (int _i = oparg*2; --_i >= 0;) {
^~~~~
```
This looks like we have two different problems:
1. stack manipulations are added after `goto` and `return`: https://github.com/python/cpython/blob/cc5a225cdc2a5d4e035dd08d59cef39182c10a6c/Python/generated_cases.c.h#L4747-L4749 and https://github.com/python/cpython/blob/cc5a225cdc2a5d4e035dd08d59cef39182c10a6c/Python/generated_cases.c.h#L5061-L5063
2. some logical? problem with the loop definition
CC @Fidget-Spinner
<!-- gh-linked-prs -->
### Linked PRs
* gh-125697
* gh-133178
* gh-133181
<!-- /gh-linked-prs -->
| 25441592db179e9f5e6c896d1a691459a23e3422 | 19e93e2e269889ecb3c4c039091abff489f247c2 |
python/cpython | python__cpython-125516 | # test_traceback's PurePythonExceptionFormattingMixin does not fail correctly
```
class PurePythonExceptionFormattingMixin:
    def get_exception(self, callable, slice_start=0, slice_end=-1):
        try:
            callable()
            self.fail("No exception thrown.")
        except:
            return traceback.format_exc().splitlines()[slice_start:slice_end]
```
`self.fail` simply raises an exception, so putting it in a try..except is not going to work.
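A sketch of the usual fix, shown standalone with `AssertionError` standing in for `self.fail` (not necessarily the actual patch): perform the failure call after the `try`/`except`, so its exception cannot be caught:

```python
import traceback

def get_exception(callable, slice_start=0, slice_end=-1):
    try:
        callable()
    except Exception:
        return traceback.format_exc().splitlines()[slice_start:slice_end]
    # Reached only when no exception was raised; stands in for self.fail().
    raise AssertionError("No exception thrown.")

print(get_exception(lambda: 1 / 0)[0])  # Traceback (most recent call last):
```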
<!-- gh-linked-prs -->
### Linked PRs
* gh-125516
* gh-125524
* gh-125525
<!-- /gh-linked-prs -->
| 55c4f4c30b49734ce35dc88139b8b4fdc94c66fd | c8a1818fb01937b66b93728c11d68c9f9af688a5 |
python/cpython | python__cpython-125513 | # ``test_capi`` leaks references
# Bug report
### Bug description:
```python
admin@Admins-MacBook-Air ~/p/cpython (main)> ./python.exe -m test -R 3:3 test_capi
Using random seed: 2315181180
0:00:00 load avg: 88.93 Run 1 test sequentially in a single process
0:00:00 load avg: 88.93 [1/1] test_capi
beginning 6 repetitions. Showing number of leaks (. for 0 or less, X for 10 or more)
123:456
XXX XXX
test_capi leaked [10838, 10834, 10838] references, sum=32510
test_capi leaked [8036, 8034, 8036] memory blocks, sum=24106
test_capi failed (reference leak) in 43.4 sec
== Tests result: FAILURE ==
1 test failed:
test_capi
Total duration: 43.4 sec
Total tests: run=985 skipped=67
Total test files: run=1/1 failed=1
Result: FAILURE
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-125513
* gh-125532
<!-- /gh-linked-prs -->
| d3c82b9ccedd77fc302f5ab8ab0220b3372f574c | 55c4f4c30b49734ce35dc88139b8b4fdc94c66fd |
python/cpython | python__cpython-125499 | # Replace ghccc with `preserve_none` in JIT builds
Now that we are going to be updating the JIT to use LLVM 19 (see #124093), we can use the `preserve_none` attribute which exposes ghccc to the compiler instead of [manually patching in the calling convention](https://github.com/python/cpython/blob/8d42e2d915c3096e7eac1c649751d1da567bb7c3/Tools/jit/_targets.py#L148) in LLVM IR and then compiling that. Additionally, `preserve_none` supports both [x86-64](https://github.com/llvm/llvm-project/pull/76868) and [AArch64](https://github.com/llvm/llvm-project/issues/87423) targets.
Will need to see what happens with 32-bit Windows during CI.
<!-- gh-linked-prs -->
### Linked PRs
* gh-125499
<!-- /gh-linked-prs -->
| c29bbe21018dc1602ea70f34621de67cce782ed2 | 597d814334742dde386a4d2979b9418aee6fcaba |
python/cpython | python__cpython-125476 | # CI is failing on current main: test_asyncio.test_taskgroups
# Bug report
### Bug description:
Example of failing CI:
* https://github.com/python/cpython/actions/runs/11331846505/job/31512771372?pr=125471
* https://github.com/python/cpython/actions/runs/11331026920/job/31510134859?pr=125469
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-125476
* gh-125478
* gh-125486
<!-- /gh-linked-prs -->
| 0b28ea4a35dc7c68c97127f7aad8f0175d77c520 | 1bffd7a2a738506a4ad50c6c3c2c32926cce6d14 |
python/cpython | python__cpython-125471 | # New warning: unused variable `left_o`[-Wunused-variable] in `Python/generated_cases.c.h:186`
# Bug report
See output in https://github.com/python/cpython/pull/125455
<img width="992" alt="Screenshot 2024-10-14 at 19 37 26" src="https://github.com/user-attachments/assets/de742b55-c06f-418a-8ebf-deea82f6123e">
<!-- gh-linked-prs -->
### Linked PRs
* gh-125471
<!-- /gh-linked-prs -->
| 0c8c665581ede95fe119f902b070e395614b78ed | 843d28f59d2616d052d9d45f31823976da07f0f3 |
python/cpython | python__cpython-125462 | # Remove reference to python 2 in section 2 of language ref
# Documentation
The description of identifier names is based on how those names were handled in Python 2.0. The description should instead be only for Python 3.
<!-- gh-linked-prs -->
### Linked PRs
* gh-125462
* gh-125464
* gh-125465
<!-- /gh-linked-prs -->
| 5dac0dceda9097d46a0b5a6ad7c927e002c6c7a5 | d5dbbf4372cd3dbf3eead1cc70ddc4261c061fd9 |
python/cpython | python__cpython-125492 | # test_concurrent_futures.test_shutdown.test_processes_terminate() hangs randomly when run multiple times
Example on Fedora 40:
```
$ ./python -m test test_concurrent_futures.test_shutdown -m test_processes_terminate -v -R 3:3 --timeout=15
(...)
OK
.test_processes_terminate (test.test_concurrent_futures.test_shutdown.ProcessPoolForkProcessPoolShutdownTest.test_processes_terminate) ... /home/vstinner/python/main/Lib/multiprocessing/popen_fork.py:67: DeprecationWarning: This process (pid=290121) is multi-threaded, use of fork() may lead to deadlocks in the child.
self.pid = os.fork()
0.02s ok
test_processes_terminate (test.test_concurrent_futures.test_shutdown.ProcessPoolForkserverProcessPoolShutdownTest.test_processes_terminate) ... 0.22s ok
test_processes_terminate (test.test_concurrent_futures.test_shutdown.ProcessPoolSpawnProcessPoolShutdownTest.test_processes_terminate) ... Timeout (0:00:15)!
Thread 0x00007f955fe006c0 (most recent call first):
File "/home/vstinner/python/main/Lib/concurrent/futures/process.py", line 182 in _on_queue_feeder_error
File "/home/vstinner/python/main/Lib/multiprocessing/queues.py", line 290 in _feed
File "/home/vstinner/python/main/Lib/threading.py", line 992 in run
File "/home/vstinner/python/main/Lib/threading.py", line 1041 in _bootstrap_inner
File "/home/vstinner/python/main/Lib/threading.py", line 1012 in _bootstrap
Thread 0x00007f9564c006c0 (most recent call first):
File "/home/vstinner/python/main/Lib/threading.py", line 1092 in join
File "/home/vstinner/python/main/Lib/multiprocessing/queues.py", line 217 in _finalize_join
File "/home/vstinner/python/main/Lib/multiprocessing/util.py", line 216 in __call__
File "/home/vstinner/python/main/Lib/multiprocessing/queues.py", line 149 in join_thread
File "/home/vstinner/python/main/Lib/concurrent/futures/process.py", line 566 in _join_executor_internals
File "/home/vstinner/python/main/Lib/concurrent/futures/process.py", line 557 in join_executor_internals
File "/home/vstinner/python/main/Lib/concurrent/futures/process.py", line 380 in run
File "/home/vstinner/python/main/Lib/threading.py", line 1041 in _bootstrap_inner
File "/home/vstinner/python/main/Lib/threading.py", line 1012 in _bootstrap
Thread 0x00007f9573c83740 (most recent call first):
File "/home/vstinner/python/main/Lib/threading.py", line 1092 in join
File "/home/vstinner/python/main/Lib/concurrent/futures/process.py", line 854 in shutdown
File "/home/vstinner/python/main/Lib/test/test_concurrent_futures/test_shutdown.py", line 274 in test_processes_terminate
File "/home/vstinner/python/main/Lib/unittest/case.py", line 606 in _callTestMethod
File "/home/vstinner/python/main/Lib/unittest/case.py", line 660 in run
File "/home/vstinner/python/main/Lib/unittest/case.py", line 716 in __call__
File "/home/vstinner/python/main/Lib/unittest/suite.py", line 122 in run
File "/home/vstinner/python/main/Lib/unittest/suite.py", line 84 in __call__
File "/home/vstinner/python/main/Lib/unittest/suite.py", line 122 in run
File "/home/vstinner/python/main/Lib/unittest/suite.py", line 84 in __call__
File "/home/vstinner/python/main/Lib/unittest/runner.py", line 240 in run
File "/home/vstinner/python/main/Lib/test/libregrtest/single.py", line 57 in _run_suite
File "/home/vstinner/python/main/Lib/test/libregrtest/single.py", line 37 in run_unittest
File "/home/vstinner/python/main/Lib/test/libregrtest/single.py", line 135 in test_func
File "/home/vstinner/python/main/Lib/test/libregrtest/refleak.py", line 132 in runtest_refleak
File "/home/vstinner/python/main/Lib/test/libregrtest/single.py", line 87 in regrtest_runner
File "/home/vstinner/python/main/Lib/test/libregrtest/single.py", line 138 in _load_run_test
File "/home/vstinner/python/main/Lib/test/libregrtest/single.py", line 181 in _runtest_env_changed_exc
File "/home/vstinner/python/main/Lib/test/libregrtest/single.py", line 281 in _runtest
File "/home/vstinner/python/main/Lib/test/libregrtest/single.py", line 310 in run_single_test
File "/home/vstinner/python/main/Lib/test/libregrtest/main.py", line 363 in run_test
File "/home/vstinner/python/main/Lib/test/libregrtest/main.py", line 397 in run_tests_sequentially
File "/home/vstinner/python/main/Lib/test/libregrtest/main.py", line 541 in _run_tests
File "/home/vstinner/python/main/Lib/test/libregrtest/main.py", line 576 in run_tests
File "/home/vstinner/python/main/Lib/test/libregrtest/main.py", line 739 in main
File "/home/vstinner/python/main/Lib/test/libregrtest/main.py", line 747 in main
File "/home/vstinner/python/main/Lib/test/__main__.py", line 2 in <module>
File "/home/vstinner/python/main/Lib/runpy.py", line 88 in _run_code
File "/home/vstinner/python/main/Lib/runpy.py", line 198 in _run_module_as_main
```
Example on AMD64 Fedora Stable Refleaks PR buildbot: https://buildbot.python.org/#/builders/474/builds/1716
<!-- gh-linked-prs -->
### Linked PRs
* gh-125492
* gh-125533
* gh-125598
* gh-125599
<!-- /gh-linked-prs -->
| 760872efecb95017db8e38a8eda614bf23d2a22c | d83fcf8371f2f33c7797bc8f5423a8bca8c46e5c |
python/cpython | python__cpython-125574 | # Python 3.13 can't compile with armv5 target
# Bug report
### Bug description:
If we build CPython 3.13 for Linux/ARMv5 using `-march=armv5te` and then run it inside QEMU with the `versatilepb` machine (which uses an ARM926EJ-S processor), Python crashes with "Illegal Instruction". Specifically, `mrc` is used in a few places to get the thread ID, but this is not possible on cores below ARMv6K.
Users of `mrc`:
- https://github.com/python/cpython/blob/main/Include/object.h#L195
- https://github.com/python/cpython/blob/main/Include/internal/mimalloc/mimalloc/prim.h#L172
- https://github.com/python/cpython/blob/main/Include/internal/mimalloc/mimalloc/prim.h#L199
I suspect the best way of implementing the required check would be:
```
#if defined(__arm__) && __ARM_ARCH >= 6 && !defined(__ARM_ARCH_6__)
```
That is, if Arm and if the architecture level is 6 or greater and if this is not bare v6 (so v6K/v6Z are included).
Or a blunter but simpler hammer on the grounds that v5/v6 is pretty niche these days could be:
```
#if defined(__arm__) && __ARM_ARCH >= 7
```
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-125574
* gh-125595
<!-- /gh-linked-prs -->
| feda9aa73ab95d17a291db22c416146f8e70edeb | 51410d8bdcfe0fd215f94a098dc6cd0919c648a1 |
python/cpython | python__cpython-125437 | # Doc for ConfigParser misses `allow_unnamed_section=False` param
Currently
[doc for configparser.ConfigParser](https://docs.python.org/release/3.13.0/library/configparser.html#configparser.ConfigParser)
misses `allow_unnamed_section=False` param, which is added in https://github.com/python/cpython/pull/117273

<!-- gh-linked-prs -->
### Linked PRs
* gh-125437
* gh-126421
<!-- /gh-linked-prs -->
| d9602265479bcd96dc377d92a34556baf34ac3cd | 78015818c2601db842d101cad6ce2319c921935f |
python/cpython | python__cpython-125427 | # f_lineno has different results on Windows and Linux
# Bug report
### Bug description:
On Windows 10, `f_lineno` is 1. On WSL (Ubuntu 20), it is 611. I don't see any mention of cross-platform differences called out in the [bdb docs](https://docs.python.org/3.12/library/bdb.html), so I'm wondering if this is a bug.
```python
import bdb
f = {}
class areplDebug(bdb.Bdb):
# override
def user_line(self,frame):
global f
f = frame
b = areplDebug()
b.run('x=1+5',{},{})
print('frame lineno is ' + str(f.f_lineno)) # 611 on Linux, 1 on Windows
import linecache
line = linecache.getline(f.f_code.co_filename, f.f_lineno)
print('frame file is: ' + f.f_code.co_filename) # '/home/almenon/.pyenv/versions/3.12.7/lib/python3.12/bdb.py' on Linux, `<string>` on windows
print('frame line is: ' + line) # ' sys.settrace(None)\n' on Linux, None on Windows
```
It's not a WSL-specific issue because I'm getting the same error in Github CI. See https://github.com/Almenon/AREPL-backend/actions/runs/11316437382/job/31468723754?pr=193
**Reproduction**:
Checkout https://github.com/Almenon/AREPL-backend/tree/8aab53e834be9ec4c1a41de08831107446051bc5. Then:
```
cd AREPL-backend/python
python -m pip install -r requirements.txt
pytest
```
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-125427
* gh-125530
* gh-125531
<!-- /gh-linked-prs -->
| 703227dd021491ceb9343f69fa48f4b6a05adbb3 | d3c82b9ccedd77fc302f5ab8ab0220b3372f574c |
python/cpython | python__cpython-125443 | # memoryview is a Sequence but does not implement the full Sequence API
# Bug report
### Bug description:
The `memoryview` builtin is registered as a `Sequence`, but doesn't implement the full API, because it doesn't inherit from `Sequence` at runtime and doesn't implement the mixin methods.
```python
>>> import collections.abc
>>> issubclass(memoryview, collections.abc.Sequence)
True
>>> collections.abc.Sequence.index
<function Sequence.index at 0x10148d250>
>>> memoryview.index
Traceback (most recent call last):
File "<python-input-4>", line 1, in <module>
memoryview.index
AttributeError: type object 'memoryview' has no attribute 'index'
```
Among the methods listed for Sequence in [the documentation](https://docs.python.org/3/library/collections.abc.html), memoryview is missing `__contains__`, `__reversed__`, `index`, and `count`. This is causing problems for typing; in the typeshed stubs we have to either lie one way by claiming that memoryview has methods it doesn't have, or lie another way by claiming it is not a Sequence when `issubclass()` says otherwise at runtime (python/typeshed#12800). To fix this, we should either make memoryview not a Sequence, or add the missing methods. The former has compatibility implications, so I propose to add the methods in 3.14.
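A minimal sketch of the distinction the paragraph above relies on, using hypothetical `Registered` and `Inherited` classes: `Sequence.register()` satisfies `issubclass()` without supplying the mixin methods (memoryview's situation), while actual inheritance supplies them:

```python
from collections.abc import Sequence

class Registered:
    def __len__(self):
        return 3
    def __getitem__(self, i):
        if not 0 <= i < 3:
            raise IndexError(i)
        return i * 10

# register() only affects isinstance()/issubclass() checks; it does
# not add Sequence's mixin methods (index, count, __contains__, ...).
Sequence.register(Registered)

assert issubclass(Registered, Sequence)
assert not hasattr(Registered, "index")

# Actually inheriting from Sequence does provide the mixins.
class Inherited(Registered, Sequence):
    pass

assert Inherited().index(20) == 2
assert Inherited().count(10) == 1
assert 20 in Inherited()
```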
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-125443
* gh-125446
* (under consideration) gh-125441
* (under consideration) gh-125505
<!-- /gh-linked-prs -->
| 4331832db02ff4a7598dcdd99cae31087173dce0 | 050d59bd1765de417bf4ec8b5c3cbdac65695f5e |
python/cpython | python__cpython-125404 | # incorrect formatting in chapter "12. Virtual Environments and Packages" of the tutorial
# Documentation
Chapter 12. Virtual Environments and Packages of the tutorial formats console commands as "bash" code, which results in some strange formatting.
https://docs.python.org/3.14/tutorial/venv.html
for example:

or

the fix is to replace ".. code-block:: bash" with ".. code-block:: console"
<!-- gh-linked-prs -->
### Linked PRs
* gh-125404
<!-- /gh-linked-prs -->
| 6c386b703d19aaec9a34fd1e843a4d0a144ad14b | cd0f9d111a040ad863c680e9f464419640c8c3fd |
python/cpython | python__cpython-125399 | # The virtualenv activate script does not correctly detect the Windows Git Bash shell
# Bug report
### Bug description:
When using the virtualenv activate script in Git Bash for Windows, the environment is not correctly detected and paths are not converted.
This results in `$PATH` being set to something like `D:\a\github-actions-shells\github-actions-shells\venv/Scripts:...`, instead of `/d/a/github-actions-shells/github-actions-shells/venv/Scripts`.
Prior to https://github.com/python/cpython/pull/112508, the detection used `$OSTYPE`, which reports `msys` in Git Bash for Windows, however `uname` returns `MINGW...`.
This is a regression in Python 3.13.0
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-125399
* gh-125733
<!-- /gh-linked-prs -->
| 2a378dba987e125521b678364f0cd44b92dd5d52 | 4b421e8aca7f2dccc5ac8604b78589941dd7974c |
python/cpython | python__cpython-125386 | # 4.8. Defining Functions example is slightly misleading
https://docs.python.org/3/tutorial/controlflow.html#defining-functions
This is the example:
```
def fib(n): # write Fibonacci series up to n
"""Print a Fibonacci series up to n."""
a, b = 0, 1
while a < n:
print(a, end=' ')
a, b = b, a+b
print()
# Now call the function we just defined:
fib(2000)
```
But this example writes Fibonacci series less than n, not up to n.
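To make the off-by-one concrete, a small sketch with hypothetical helper names, picking n = 8 (itself a Fibonacci number) so the two readings differ:

```python
def fib_less_than(n):
    """Fibonacci numbers strictly less than n (the tutorial's loop)."""
    result, a, b = [], 0, 1
    while a < n:
        result.append(a)
        a, b = b, a + b
    return result

def fib_up_to(n):
    """Fibonacci numbers up to and including n."""
    result, a, b = [], 0, 1
    while a <= n:
        result.append(a)
        a, b = b, a + b
    return result

# 8 is a Fibonacci number, so the two versions disagree on it:
assert fib_less_than(8) == [0, 1, 1, 2, 3, 5]
assert fib_up_to(8) == [0, 1, 1, 2, 3, 5, 8]
```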
<!-- gh-linked-prs -->
### Linked PRs
* gh-125386
* gh-125395
* gh-125396
<!-- /gh-linked-prs -->
| 283ea5f3b2b6a18605b8598a979afe263b0f21ce | ce740d46246b28bb675ba9d62214b59be9b8411e |
python/cpython | python__cpython-125717 | # Whitespace-only lines in PDB cause a line continuation when taken from history
# Bug report
### Bug description:
The issue happens *only* when pulling a line from history.
Typing 4 spaces or a Tab and hitting Enter when at the PDB prompt is interpreted the same as hitting Enter without entering whitespace.
However, grabbing a whitespace-only line from history causes a prompt/line continuation.
To reproduce:
1. Type `if True:` and hit Enter
2. Type 4 spaces and hit Enter
3. Type 4 spaces, `print("True")` and hit Enter twice
4. Hit the up arrow 2 times
5. Hit Enter again
After those 5 steps you will see:

Repeatedly hitting Enter at that point shows the same continuation prompt:

The new REPL had a somewhat similar issue (#118911 was part of it).
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-125717
* gh-125736
<!-- /gh-linked-prs -->
| 8f5e39d5c885318e3128a3e84464c098b5f79a79 | 4c53b2577531c77193430cdcd66ad6385fcda81f |
python/cpython | python__cpython-130471 | # Use 4 spaces for indentation in PDB
# Feature or enhancement
### Proposal:
It looks like PDB evolved the ability to handle multi-line Python expressions in Python 3.13. I didn't see this mentioned in [the release notes](https://docs.python.org/3/whatsnew/3.13.html#pdb), but I'm excited to see this feature. Thank you to whoever added this feature! (@gaogaotiantian maybe?)
I did notice that the Tab key works in a somewhat unexpected way.
When I hit Enter at the end of an `if True:` line, I see:

Then if I hit the Tab key I see this:

Note that a tab character is inserted (just as one would be within the old Python REPL) but the tab width looks like 2 spaces.
I expected to see this instead:

In the old Python REPL, the Tab key used to insert a tab character, but it *looked* like 4 spaces were inserted because the tab stop happened to be 4 characters from the end of the `>>> ` prompt (which is 4 characters long):

One way to resolve this would be to shorten the PDB prompt to something like `(P) ` or `PDB `.
I would propose instead that the Tab key should insert 4 spaces in PDB (just as the new REPL does).
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-130471
<!-- /gh-linked-prs -->
| b6769e9404646e38d9c786984ef308c8e9747b91 | c989e74446836ce8a00ae002d4c229c81e1f7905 |
python/cpython | python__cpython-125356 | # Rewrite parse_intermixed_args() in argparse
`parse_intermixed_args()` and `parse_known_intermixed_args()` are implemented by parsing command lines twice -- first only optional arguments, then the remaining is parsed as positional arguments. This approach has some issues.
* The parser is temporarily modified to suppress positional or required optional arguments in these two stages. This is not good, because the parser can no longer be used concurrently. This also smells bad in general, this can hide some bugs.
* Default values are handled twice.
* Unknown options in `parse_known_intermixed_args()` cannot be intermixed with positional arguments. Well, "parsing only known arguments" is a dubious feature, but still...
I tried to rewrite the implementation by moving the code deeper. The parser is no longer patched, defaults are handled only once, and unknown options are excluded from positionals parsing. @hpaulj, @bitdancer, as the authors of the original implementation, could you please review this code?
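For context, a sketch of the behavior these methods exist for, adapted from the argparse documentation's intermixed-parsing example:

```python
import argparse

parser = argparse.ArgumentParser(prog="demo")
parser.add_argument("--foo")
parser.add_argument("cmd")
parser.add_argument("rest", nargs="*", type=int)

# Plain parsing stops collecting 'rest' at the first optional:
ns, extras = parser.parse_known_args("doit 1 --foo bar 2 3".split())
assert (ns.cmd, ns.foo, ns.rest) == ("doit", "bar", [1])
assert extras == ["2", "3"]

# Intermixed parsing gathers all positionals, even around optionals:
ns = parser.parse_intermixed_args("doit 1 --foo bar 2 3".split())
assert ns.rest == [1, 2, 3]
```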
<!-- gh-linked-prs -->
### Linked PRs
* gh-125356
* gh-125834
* gh-125839
<!-- /gh-linked-prs -->
| 759a54d28ffe7eac8c23917f5d3dfad8309856be | 57e3c59bb64fc2f8b2845a7e03ab0abb029ccd02 |
python/cpython | python__cpython-125482 | # `from __future__ import barry_as_FLUFL` doesn't work
# Bug report
### Bug description:
```
% ./python.exe -c 'from __future__ import barry_as_FLUFL; print(1 <> 2)'
File "<string>", line 1
from __future__ import barry_as_FLUFL; print(1 <> 2)
^^
SyntaxError: invalid syntax
```
But with Barry as the FLUFL, `<>` is the correct way to test for inequality.
The future works only if you pass the right flag to `compile()` (as test_flufl.py does), but there's no way to do that in a file.
Obviously this future is a joke so this doesn't matter greatly, but I think it indicates that future imports that affect the parser aren't handled properly. That's something we should fix in case we get other futures in the ... future that also affect the parser. The joke future is also useful as a way to demonstrate how future statements work in general without having to rely on a specific future that will go away in a few releases.
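For reference, a small sketch of the `compile()` route that test_flufl.py uses, showing that the flag both enables `<>` and rejects `!=`:

```python
import __future__

flags = __future__.barry_as_FLUFL.compiler_flag

# With the flag, `<>` parses and evaluates normally...
code = compile("1 <> 2", "<flufl>", "eval", flags=flags)
assert eval(code) is True

# ...while `!=` becomes a syntax error under Barry's rule.
try:
    compile("1 != 2", "<flufl>", "eval", flags=flags)
except SyntaxError:
    pass
else:
    raise AssertionError("expected SyntaxError for != under FLUFL")
```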
### CPython versions tested on:
3.12, CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-125482
* gh-131062
* gh-131063
<!-- /gh-linked-prs -->
| 3bd3e09588bfde7edba78c55794a0e28e2d21ea5 | 05e89c34bd8389f87bd6c9462d5a06ef9e1a65ab |
python/cpython | python__cpython-125324 | # Unsafe DECREFs of borrowed references in the interpreter.
# Bug report
### Bug description:
It is unsafe to borrow a `PyObject *` reference from a `_PyStackRef` and `Py_DECREF` the `PyObject *` reference and not close the `_PyStackRef`. This is quite a common pattern in bytecodes.c, and prevents any optimizations based on reference lifetimes as the inferred lifetime is incorrect.
The fix for this is to change the incorrect pattern:
```C
PyObject *left_o = PyStackRef_AsPyObjectBorrow(left);
use(left_o);
Py_DECREF(left_o);
```
to the correct pattern:
```C
PyObject *left_o = PyStackRef_AsPyObjectBorrow(left);
use(left_o);
PyStackRef_CLOSE(left);
```
This is causing problems, as optimizations to exploit stack refs don't work: https://github.com/python/cpython/compare/main...faster-cpython:cpython:use-stackrefs-opt-experiment?expand=1
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-125324
* gh-125439
<!-- /gh-linked-prs -->
| 4b358ee647809019813f106eb901f466a3846d98 | b52c7306ea4470f9d7548655c2a1b89a07ff5504 |
python/cpython | python__cpython-126141 | # The platform module can cause crashes in Windows due to slow WMI calls
# Crash report
### What happened?
When running on a virtual machine where WMI calls seems to have very variable performance the WMI C module can cause Python to crash.
If you have (or can simulate) slow WMI calls, then simple Python code should (non-deterministically) reproduce the problem. The reason it's non-deterministic is that it's a thread race over a shared resource on the stack: the WMI thread in CPython can end up with a pointer to a now-invalid stack frame.
I can cause the problem by repeatedly calling platform.machine() and platform.win32_ver() in a loop of about 100 iterations on a machine with slow WMI calls.
```
import platform
for i in range(100):
platform.win32_ver()
platform.machine()
```
On the affected machines this will sometimes cause the whole process to die with error that relates to the stack being trashed, such as 0xC0000409 where the stack canary has been overwritten.
From a crash dump (that I cannot share) I debugged this issue by taking the WMI module and running it on its own. I noticed in the code that there is a timeout that seems to have been created because the WMI calls themselves can be quite slow, especially in the case of permission problems where WMI's own timeout is quite long.
https://github.com/python/cpython/blob/main/PC/_wmimodule.cpp#L282
The problem is that this timeout can cause the function that the platform module uses to finish before the thread running the WMI code does. This is a bit of a problem because the thread is using a pointer to a struct allocated on a stack frame that is about to go away.
https://github.com/python/cpython/blob/main/PC/_wmimodule.cpp#L241
That struct has handles to a bunch of things that the WMI thread wants to use or clean up, including references to the ends of a pipe for which WriteFile calls are used.
In some situations Python hangs, sometimes Windows terminates it because it detected a stack overflow, sometimes it works, and sometimes the timeout is fine; it all depends on where the thread doing the WMI work was at the time the calling function terminates.
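A minimal Python analogy of the race described above (safe in Python only because the dict outlives the caller, unlike the stack-allocated struct in `_wmimodule.cpp`):

```python
import threading
import time

def caller():
    state = {"value": None}       # stands in for the stack-allocated struct
    def worker():
        time.sleep(0.5)           # simulate a slow WMI query
        state["value"] = 42       # this write lands after the caller gave up
    t = threading.Thread(target=worker)
    t.start()
    t.join(timeout=0.05)          # like the module's short timeout
    return t, state

t, state = caller()
assert state["value"] is None     # caller already returned without a result
t.join()
assert state["value"] == 42       # the late write; in C++ it hits a dead frame
```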
I can stop this problem by monkey patching the WMI calls in the platform module (it has alternative code paths that work ok). I can also stop it by removing the simulated timeout in the WMI module.
The problem is that lots of tools use the platform module - I first discovered this whilst using poetry when `poetry install` would just terminate, but it can affect anything that uses the platform module to make WMI calls on a machine with slow WMI.
There is no reasonable workaround on the virtual machines I use because they are managed by an organisation (as is the python install on those machines).
### CPython versions tested on:
3.12
### Operating systems tested on:
Windows
### Output from running 'python -VV' on the command line:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-126141
* gh-126202
* gh-126203
<!-- /gh-linked-prs -->
| 60c415bd531392a239c23c754154a7944695ac99 | c29bbe21018dc1602ea70f34621de67cce782ed2 |
python/cpython | python__cpython-125317 | # Deprecation of `Py_GetPrefix`-family APIs should document the `sys.base_*` counterparts
# Documentation
Documentation suggests `sys.prefix` as an alternative to `Py_GetPrefix` in the deprecation note [here](https://docs.python.org/3/c-api/init.html#c.Py_GetPrefix)
> Deprecated since version 3.13, will be removed in version 3.15: Get [sys.prefix](https://docs.python.org/3/library/sys.html#sys.prefix) instead.
and it seems correct:
```python
>>> import poc # CPython module that prints `Py_GetPrefix` on init
Prefix: /home/y5/micromamba/envs/py3.13
>>> import sys; print(sys.prefix); print(sys.base_prefix)
/home/y5/micromamba/envs/py3.13
/home/y5/micromamba/envs/py3.13
```
But strictly speaking, `sys.base_prefix` is the real counterpart rather than `sys.prefix` when a venv is involved:
```python
# inside venv
>>> import poc
Prefix: /home/y5/micromamba/envs/py3.13
>>> import sys; print(sys.prefix); print(sys.base_prefix)
/tmp/foo
/home/y5/micromamba/envs/py3.13
```
Source: #125235
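A small sketch of the check implied above: the two prefixes are equal outside a venv and diverge inside one, with `sys.base_prefix` being the value that matches `Py_GetPrefix`:

```python
import sys

# Outside a venv the two are equal; inside one they diverge, and
# sys.base_prefix is the installation prefix Py_GetPrefix reports.
print("prefix:     ", sys.prefix)
print("base_prefix:", sys.base_prefix)
print("in a venv:  ", sys.prefix != sys.base_prefix)
```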
<!-- gh-linked-prs -->
### Linked PRs
* gh-125317
* gh-125776
<!-- /gh-linked-prs -->
| 7d88140d5299bd086434840db66ede8ccd01a688 | ded105a62b9d78717f8dc64652e3903190b585dd |
python/cpython | python__cpython-125749 | # Behaviour of os.kill() on Windows may be misunderstood
The [latest os.kill() document](https://docs.python.org/3/library/os.html#os.kill) says “The Windows version of kill() additionally takes **process handles** to be killed,” but the `kill()` function actually does not accept process handles in place of process IDs. Running the following code with `Python 3.12.6` on `Windows 10 Education Edition version 22H2 amd64` will result in an error.
Test code:
```
import os, signal
proc_handle = os.spawnl(os.P_NOWAIT, r'C:\Windows\notepad.exe', 'notepad.exe')
os.kill(proc_handle, signal.SIGTERM)
```
Error message:
```
Traceback (most recent call last):
File "C:\Users\John\Documents\Python\os_kill_proc_handle.py", line 3, in <module>
os.kill(proc_handle, signal.SIGTERM)
OSError: [WinError 87] The parameter is incorrect
```
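Until the docs and implementation are reconciled, a sketch of the pid-based route that does work (hypothetical sleeping child shown):

```python
import os
import signal
import subprocess
import sys

# subprocess.Popen exposes a process id, which os.kill does accept
# on both Windows and POSIX (unlike the handle os.spawnl returns).
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
os.kill(proc.pid, signal.SIGTERM)   # a pid, not a spawn handle
proc.wait()
# POSIX reports -SIGTERM; Windows reports TerminateProcess's exit code.
assert proc.returncode != 0
```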
<!-- gh-linked-prs -->
### Linked PRs
* gh-125749
* gh-126587
* gh-126588
<!-- /gh-linked-prs -->
| 75ffac296ef24758b7e5bd9316f32a8170ade37f | 6ec886531f14bdf90bc0c3718ac2ae8e3f8823b8 |
python/cpython | python__cpython-125297 | # Argparse docs have a strange fragment identifier for `name or flags`
# Bug report
### Bug description:
Clicking on the "name or flags" parameter under the `add_argument `method docs links to [#id5](https://docs.python.org/3/library/argparse.html#id5) instead of #name-or-flags. The jump target isn't broken, but it has a slightly worse UX. It looks like this behaviour changed from the 3.11 docs onward.
### CPython versions tested on:
3.12, 3.13, CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-125297
* gh-125299
* gh-125300
<!-- /gh-linked-prs -->
| c1913effeed4e4da4d5310a40ab518945001ffba | 2f8301cbfbdd2976d254a4a772b4879069dd4298 |
python/cpython | python__cpython-125292 | # Possible mis-leading sample code in Doc/library/asyncio-task.rst
In the [Coroutines and Tasks](https://docs.python.org/3.13/library/asyncio-task.html) documentation, there is a possible semantic bug in one of the sample codes.
In the [Awaitables](https://docs.python.org/3.13/library/asyncio-task.html#awaitables) subsection, the sample code for coroutines is:
```python
import asyncio
async def nested():
return 42
async def main():
# Nothing happens if we just call "nested()".
# A coroutine object is created but not awaited,
# so it *won't run at all*.
nested()
# Let's do it differently now and await it:
print(await nested()) # will print "42".
asyncio.run(main())
```
The code and its comments are all correct. But think about these two lines:
- in line 10: `nested()`
- in line 13: `print(await nested())`
Since the `nested` function doesn't print anything and just returns an integer, line 10 would print nothing even if the call had no bug at all (asyncio/async/await-related or otherwise).
So I think it is better to move the `print` into the `nested` function and change the code sample to:
```python
import asyncio
async def nested():
print(42)
async def main():
# Nothing happens if we just call "nested()".
# A coroutine object is created but not awaited,
# so it *won't run at all*.
nested() # will raise RuntimeWarning: coroutine 'nested' was never awaited
# Let's do it differently now and await it:
await nested() # will print "42".
asyncio.run(main())
```
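Either way, the underlying point can be checked directly: calling a coroutine function only creates a coroutine object, and nothing runs until it is awaited. A small sketch:

```python
import asyncio

async def nested():
    return 42

async def main():
    coro = nested()                  # nothing has run yet
    assert asyncio.iscoroutine(coro)
    value = await coro               # now the body actually runs
    assert value == 42
    return value

assert asyncio.run(main()) == 42
```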
<!-- gh-linked-prs -->
### Linked PRs
* gh-125292
* gh-125374
* gh-125375
<!-- /gh-linked-prs -->
| fa52b82c91a8e1a0971bd5fef656473ec93f41e3 | 4a2282b0679bbf7b7fbd36aae1b1565145238961 |
python/cpython | python__cpython-125368 | # Increase minimum supported Sphinx to 7.2.6
# Documentation
Python 3.13 has been released, so I looked to review the minimum Sphinx version.
Using the same survey as https://github.com/python/cpython/issues/109209#issuecomment-1713859051 and #117928:
Distro | CPython| Sphinx
-- | --: | --:
Debian 13 Trixie (testing) | 3.12.6 | 7.4.7
Debian Sid (unstable) | 3.12.6 | 7.4.7
Debian (experimental) | 3.12.6 | *N/A*
Fedora 39 (EOL 12/11/2024) | 3.12.6 | **6.2.1**
Fedora 40 | 3.12.6 | 7.2.6
Fedora 41 | 3.13.0 | 7.3.7
Fedora rawhide | 3.13.0 | 7.3.7
Gentoo | *N/A* | *N/A*
openSUSE Leap | | 8.0.2
openSUSE Tumbleweed | | 8.0.2
RHEL | *N/A* | *N/A*
**_Minimum Sphinx version for 3.12_** | **_3.12_** | **_7.2.6_** †
† The blocker to updating is Fedora 39, which is end-of-life in a month's time. cc @hroncok -- would you prefer we wait until November until the actual EOL date?
Sphinx 7.2 has support for documenting generic classes and [`:no-typesetting:`](https://www.sphinx-doc.org/en/master/usage/domains/index.html#basic-markup). Sadly [`.. py:type::`](https://www.sphinx-doc.org/en/master/usage/domains/python.html#directive-py-type) is new in Sphinx 7.4, so we need to wait for Fedora 40's end-of-life (13 May 2025) to use it.
References:
* https://qa.debian.org/madison.php?package=python3
* https://qa.debian.org/madison.php?package=sphinx
* https://packages.debian.org/trixie/python3
* https://packages.debian.org/trixie/python3-sphinx
* https://fedorapeople.org/groups/schedule/f-39/f-39-key-tasks.html
* https://packages.fedoraproject.org/pkgs/python3.12/python3/
* https://packages.fedoraproject.org/pkgs/python3.13/python3/
* https://packages.fedoraproject.org/pkgs/python-sphinx/python3-sphinx/
* https://software.opensuse.org/package/python-Sphinx
cc:
* @doko42 @mitya57 / Debian
* @hroncok / Fedora / RHEL
* @mgorny / Gentoo
* @danigm @mcepl / openSUSE
* @AA-Turner @hugovk / CPython
<!-- gh-linked-prs -->
### Linked PRs
* gh-125368
* gh-125720
* gh-125721
<!-- /gh-linked-prs -->
| 2bb7ab7ad364ec804eab8ed6867df01ece887240 | 322f14eeff9e3b5853eaac3233f7580ca0214cf8 |
python/cpython | python__cpython-125416 | # cross-compile error: undefined reference to `__atomic_load_8'
# Bug report
### Bug description:
Shouldn't the configure script check whether `-latomic` is needed?
getting build error when cross-compile:
```
arm-linux-gnueabi-gcc -L/opt/zlib-arm-linux-gnueabi/lib -L/opt/xz-arm-linux-gnueabi/lib -L/opt/readline-arm-linux-gnueabi/lib -L/opt/libffi-arm-linux-gnueabi/lib -L/opt/libtirpc-arm-linux-gnueabi/lib -L/opt/libuuid-arm-linux-gnueabi/lib -L/opt/zlib-arm-linux-gnueabi/lib -L/opt/xz-arm-linux-gnueabi/lib -L/opt/readline-arm-linux-gnueabi/lib -L/opt/libffi-arm-linux-gnueabi/lib -L/opt/libtirpc-arm-linux-gnueabi/lib -L/opt/libuuid-arm-linux-gnueabi/lib -Xlinker -export-dynamic -o Programs/_testembed Programs/_testembed.o -L. -lpython3.13 -lpthread -ldl -lpthread -lutil -lm
/root/x-tools/arm-linux-gnueabi/lib/gcc/arm-linux-gnueabi/13.2.0/../../../../arm-linux-gnueabi/bin/ld: ./libpython3.13.so: undefined reference to `__atomic_load_8'
/root/x-tools/arm-linux-gnueabi/lib/gcc/arm-linux-gnueabi/13.2.0/../../../../arm-linux-gnueabi/bin/ld: ./libpython3.13.so: undefined reference to `__atomic_store_8'
/root/x-tools/arm-linux-gnueabi/lib/gcc/arm-linux-gnueabi/13.2.0/../../../../arm-linux-gnueabi/bin/ld: ./libpython3.13.so: undefined reference to `__atomic_fetch_add_8'
/root/x-tools/arm-linux-gnueabi/lib/gcc/arm-linux-gnueabi/13.2.0/../../../../arm-linux-gnueabi/bin/ld: ./libpython3.13.so: undefined reference to `__atomic_compare_exchange_8'
collect2: error: ld returned 1 exit status
```
configure cmd line:
```
./configure --prefix=/opt/python-arm-linux-gnueabi-3.13.0 --host=arm-linux-gnueabi --target=arm-linux-gnueabi --build=x86_64-linux-gnu --disable-ipv6 ac_cv_file__dev_ptmx=no ac_cv_file__dev_ptc=no --enable-shared --with-openssl=/opt/openssl-arm-linux-gnueabi --with-build-python=/root/.pyenv/versions/3.13.0/bin/python
```
## workaround
```bash
export LDFLAGS="-latomic $LDFLAGS"
```
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-125416
* gh-125493
<!-- /gh-linked-prs -->
| 8d42e2d915c3096e7eac1c649751d1da567bb7c3 | 0b28ea4a35dc7c68c97127f7aad8f0175d77c520 |