repo stringclasses 1 value | instance_id stringlengths 20 22 | problem_statement stringlengths 126 60.8k | merge_commit stringlengths 40 40 | base_commit stringlengths 40 40 |
|---|---|---|---|---|
python/cpython | python__cpython-129899 | # Store and use the current task of asyncio on thread state for all threads
`asyncio` currently stores the current task in a global dict `_current_tasks` and on the thread state. Storing it in a dict under free threading will cause significant contention with many threads, so I propose removing the dict and always relying on the task stored on the thread state.
This has the added benefit that external introspection will always see the correct task, guaranteed by the implementation, while also being performant.
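For context, the bookkeeping this proposal targets is what `asyncio.current_task()` reads; a minimal illustration of the public API whose backing storage would move to the thread state:

```python
import asyncio

async def main():
    # current_task() consults asyncio's per-loop bookkeeping; this issue
    # proposes backing it purely by the thread state.
    task = asyncio.current_task()
    return task is not None

print(asyncio.run(main()))  # → True
```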
cc @asvetlov @ambv @pablogsal
<!-- gh-linked-prs -->
### Linked PRs
* gh-129899
<!-- /gh-linked-prs -->
| 660f126f870535b6fa607e6d9cdd3cdbd9ed2cb1 | fb17f41522718013036ce44cbe83a72f5d9a2104 |
python/cpython | python__cpython-129896 | # Documentation: Remove incorrect role directive in `graphlib.py`
# Documentation
Remove the :exc: role directive from ValueError in the TopologicalSorter.done() docstring, since docstrings are not processed as reST content.
<!-- gh-linked-prs -->
### Linked PRs
* gh-129896
* gh-129904
* gh-129905
<!-- /gh-linked-prs -->
| c53730171f18b90202aa0918b3c05412222bb1ec | f7c7decc4c7c10084ab3c1473e1a590666d3ea17 |
python/cpython | python__cpython-129888 | # Support context manager protocol by contextvars.Token
# Feature or enhancement
### Proposal:
Sometimes, mostly in tests, I write something like
```python
from contextvars import ContextVar
var = ContextVar('var')
def test_a():
token = var.set('new val')
do_stuff()
var.reset(token)
```
It looks a little cumbersome, the support for `with var:` would be awesome:
```
def test_b():
with var.set('new val'):
do_stuff()
```
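In the meantime, the pattern can be approximated with a small helper (`set_var` here is an illustration built on the existing API, not the proposed `Token` context-manager support):

```python
from contextvars import ContextVar
from contextlib import contextmanager

@contextmanager
def set_var(var, value):
    # Mimics the proposed `with var.set(...)`: set on entry, reset on exit.
    token = var.set(value)
    try:
        yield token
    finally:
        var.reset(token)

var = ContextVar('var')
with set_var(var, 'new val'):
    assert var.get() == 'new val'
# After the block, var has no value again (reset restored the old state).
```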
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-129888
<!-- /gh-linked-prs -->
| 469d2e416c453b19d7a75fe31ceec732445e9ef2 | e1b38ea82ee20ad8b10578e7244e292b3fac9ae8 |
python/cpython | python__cpython-129875 | # avoid mixing pure python and C implementation of asyncio
Currently writing tests in asyncio is difficult because we have two separate implementations. The C implementation overrides the pure Python implementation at runtime, so testing the Python implementation is tricky. I propose that the pure Python implementation use the pure Python versions of things like `register_task` and `enter_task` so that it gets tested properly and becomes easier to debug. There should not really be a case where we mix both implementations; they should be kept separate.
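The runtime override can be observed directly; a quick check (hedged: `_PyTask` is a private name, but `asyncio.tasks` has long kept it bound to the pure-Python `Task` before the C accelerator from `_asyncio` replaces the public name):

```python
import asyncio.tasks

# When the C accelerator (_asyncio) is available, the public Task is the
# C class and differs from the pure-Python _PyTask kept for testing.
print(asyncio.tasks.Task is asyncio.tasks._PyTask)
```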
<!-- gh-linked-prs -->
### Linked PRs
* gh-129875
* gh-129887
* gh-129890
* gh-129891
* gh-130515
* gh-130516
<!-- /gh-linked-prs -->
| d5796e64e061a5366186561d1a003c1436ad3492 | 6fbf15f98e04f582aeccf5334a94840149ff7cd5 |
python/cpython | python__cpython-129859 | # IDLE: Only copy the text section of idle.html to idlelib/help.html
Parsing help.html to display the IDLE doc requires ignoring unneeded material both before and after the text section. It makes more sense to never copy this material into the file.
1. In help.copystrip(), copy and strip only the IDLE text section of idle.html to idlelib/help.html. This includes the lines `<section id="idle">` to the matching `</section>` (plus a couple of blank lines). The result is reduced from 1194 to 805 lines. This will remove irrelevant lines from help.html diffs, making them easier to check. It may speed up copystrip, though this does not matter.
2. In help.HelpParser.handle_starttag, remove the code to skip the html before and after this section and set self.show on and off again. Edit all other code involving self.show. Processing only the needed html with less code will be faster. The simpler code will be a bit easier to understand and maintain.
3. Include a reduced help.html in the PR.
4. Near the top of Doc/library/idle.rst, add a comment (reminder) about running copystrip after editing.
<!-- gh-linked-prs -->
### Linked PRs
* gh-129859
* gh-129884
* gh-129885
<!-- /gh-linked-prs --> | 6fbf15f98e04f582aeccf5334a94840149ff7cd5 | 0d9c4e260d4ea8fd8bc61c58bdf7db4c670470ee |
python/cpython | python__cpython-129913 | # Test_sqlite3 failing with SQLite 3.49.0
# Bug report
### Bug description:
test_sqlite3/test_dump.py relies on fts4 for `test_dump_virtual_tables`. However, SQLite 3.49.0 apparently no longer builds with `ENABLE_FTS4` by default, presumably because it was superseded by FTS5. This causes the unit test to fail.
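One way a test could detect this at runtime is to query SQLite's compile options via `PRAGMA compile_options` (a sketch of a possible skip condition, not necessarily the fix in the linked PRs):

```python
import sqlite3

con = sqlite3.connect(":memory:")
options = [row[0] for row in con.execute("PRAGMA compile_options")]
con.close()

# ENABLE_FTS4 appears here only when the linked SQLite was built with
# FTS4 support; a test can skip (rather than fail) when it is absent.
has_fts4 = any("FTS4" in opt for opt in options)
print(has_fts4)
```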
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-129913
* gh-129918
<!-- /gh-linked-prs -->
| cda83cade0b684bcb1221a30bfe0b6861abd3b3f | 91d954411272a07a5431326711a8a5bdf4e2c323 |
python/cpython | python__cpython-129902 | # Special syntax error for `elif` after `else`
# Feature or enhancement
### Proposal:
I've implemented a special syntax error in the case that an `elif` follows an `else`. See [here](https://github.com/swfarnsworth/cpython/commit/883ada95e129bec660f55da7b1422deec3871979).
```python
if i % 3 == 0:
print("divisible by 3")
else:
print("not divisible by 3")
elif i % 2 == 0:
print("divisible by 2")
# SyntaxError: elif not allowed after else
```
This is currently the extent of the new behavior. If possible, I would be interested to implement behavior like this:
```python
if isinstance(i, int):
if i % 3 == 0:
print("divisible by 3")
else:
print("not divisible by 3")
elif isinstance(i, str):
print("i is a string")
# SyntaxError: elif not allowed after else. Perhaps the elif is at the wrong indentation level?
```
This would be even more informative for beginners, but I'm not sure if the parser supports that level of state. In particular, we wouldn't want the latter syntax error (the one that suggests that the indentation level is wrong) to be raised if the invalid elif were within a for block.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-129902
<!-- /gh-linked-prs -->
| 99b71efe8e9d59ce04b6d59ed166b57dff3e84d8 | c3a71180656a906d243e4cc0ab974387753b2fe1 |
python/cpython | python__cpython-129848 | # pure-Python warn_explicit() passes wrong arg to WarningMessage
The pure-Python implementation of the `warnings.warn_explicit()` function does this:
```python
msg = WarningMessage(message, category, filename, lineno, source)
```
But the 5th argument of `WarningMessage` is `file` (the file the message is supposed to be printed into), not `source` ("the destroyed object which emitted a `ResourceWarning`").
Here's how to reproduce the bug:
```pycon
>>> import sys, importlib, warnings
>>> warnings.warn_explicit('eggs', UserWarning, 'eggs.py', 42, source=object())
eggs.py:42: UserWarning: eggs
Object allocated at (most recent call last):
File "<stdin>", lineno 1
>>> # so far so good; now let's try the same with pure-Python implementation
>>> sys.modules['_warnings'] = None
>>> importlib.reload(warnings)
<module 'warnings' from '/usr/lib/python3.12/warnings.py'>
>>> warnings.warn_explicit('eggs', UserWarning, 'eggs.py', 42, source=object())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.12/warnings.py", line 413, in warn_explicit
_showwarnmsg(msg)
File "/usr/lib/python3.12/warnings.py", line 115, in _showwarnmsg
_showwarnmsg_impl(msg)
File "/usr/lib/python3.12/warnings.py", line 30, in _showwarnmsg_impl
file.write(text)
^^^^^^^^^^
AttributeError: 'object' object has no attribute 'write'
```
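The fix, presumably, is for the pure-Python code to pass `source` by keyword so it lands in the right `WarningMessage` slot rather than in `file`; a sketch of the corrected call:

```python
import warnings

# Passed positionally, the 5th argument fills the `file` slot; passed by
# keyword, `source` lands where intended.
sentinel = object()
msg = warnings.WarningMessage('eggs', UserWarning, 'eggs.py', 42, source=sentinel)
print(msg.file is None, msg.source is sentinel)  # → True True
```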
<!-- gh-linked-prs -->
### Linked PRs
* gh-129848
* gh-131349
* gh-131350
<!-- /gh-linked-prs -->
| 80e00ecc399db8aeaa9f3a1c87a2cfb34517d7be | ac50ece6cea8745834e4ec0a9617809a51245bfc |
python/cpython | python__cpython-129845 | # warnings.catch_warnings docstring: obsolete compat note
The docstring for `warnings.catch_warnings.__init__` reads:
```
For compatibility with Python 3.0, please consider all arguments to be
keyword-only.
```
This makes little sense these days.
<!-- gh-linked-prs -->
### Linked PRs
* gh-129845
<!-- /gh-linked-prs -->
| 0f128b9435fccb296714f3ea2466c3fdda77d91d | 80b9e79d84e835ecdb5a15c9ba73e44803ca9d32 |
python/cpython | python__cpython-129839 | # _Py_NO_SANITIZE_UNDEFINED is defined twice when compiling with recent GCC
# Bug report
### Bug description:
When compiling with a recent version of GCC and enabling undefined sanitizer, I see the following warning:
```
./Modules/faulthandler.c:49:11: warning: "_Py_NO_SANITIZE_UNDEFINED" redefined
49 | # define _Py_NO_SANITIZE_UNDEFINED __attribute__((no_sanitize_undefined))
| ^~~~~~~~~~~~~~~~~~~~~~~~~
./Modules/faulthandler.c:44:13: note: this is the location of the previous definition
44 | # define _Py_NO_SANITIZE_UNDEFINED __attribute__((no_sanitize("undefined")))
| ^~~~~~~~~~~~~~~~~~~~~~~~~
```
The conditions just need some updating to match recent GCC and Clang versions.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-129839
* gh-130365
* gh-130366
<!-- /gh-linked-prs -->
| 568db400ff07240a5ed6f263af281405ccaec716 | 0f5b82169e12321fd2294bf534496ad42a682ac4 |
python/cpython | python__cpython-129836 | # pathlib ABCs: `ReadablePath.glob('')` doesn't add trailing slash
# Bug report
Minor issue in some still-private code.
In the pathlib ABCs, `ReadablePath.glob('')` should work a bit like `JoinablePath.joinpath('')`, i.e. it should yield the path *with a trailing slash* if it's an existing directory. This isn't presently the case, because the globbing implementation tries to defer adding trailing slashes, for basically no good reason.
Most easily reproduced by running `DummyReadablePathTest.test_glob_empty_pattern()`, which currently asserts the wrong result (!)
<!-- gh-linked-prs -->
### Linked PRs
* gh-129836
<!-- /gh-linked-prs -->
| 707d066193c26ab66c8e5e45e72c3a37f48daf45 | 6c67904e793828d84716a8c83436c9495235f3a1 |
python/cpython | python__cpython-129806 | # `Tools/jit` has several `bytes` and `bytearray` mixups
# Bug report
`Stencil` defines `body` as a `bytearray`: https://github.com/python/cpython/blob/175844713af383c9e4dd60166d1d7407c80a1949/Tools/jit/_stencils.py#L189-L197
But it is passed to functions that expect `bytes`. This currently works for historic reasons, but with mypy 2.0 (or mypy 1.5 with `--strict-bytes` turned on) it won't. See https://github.com/python/mypy/blob/master/CHANGELOG.md#--strict-bytes
Examples:
```diff
diff --git Tools/jit/_stencils.py Tools/jit/_stencils.py
index ee761a73fa8..8b6957f8bdb 100644
--- Tools/jit/_stencils.py
+++ Tools/jit/_stencils.py
@@ -141,7 +141,11 @@ class Hole:
def __post_init__(self) -> None:
self.func = _PATCH_FUNCS[self.kind]
- def fold(self, other: typing.Self, body: bytes) -> typing.Self | None:
+ def fold(
+ self,
+ other: typing.Self,
+ body: bytes | bytearray,
+ ) -> typing.Self | None:
"""Combine two holes into a single hole, if possible."""
instruction_a = int.from_bytes(
body[self.offset : self.offset + 4], byteorder=sys.byteorder
diff --git Tools/jit/_targets.py Tools/jit/_targets.py
index d23ced19842..4015fc564ad 100644
--- Tools/jit/_targets.py
+++ Tools/jit/_targets.py
@@ -97,7 +97,7 @@ def _handle_section(self, section: _S, group: _stencils.StencilGroup) -> None:
raise NotImplementedError(type(self))
def _handle_relocation(
- self, base: int, relocation: _R, raw: bytes
+ self, base: int, relocation: _R, raw: bytes | bytearray
) -> _stencils.Hole:
raise NotImplementedError(type(self))
@@ -257,7 +257,7 @@ def _unwrap_dllimport(self, name: str) -> tuple[_stencils.HoleValue, str | None]
return _stencils.symbol_to_value(name)
def _handle_relocation(
- self, base: int, relocation: _schema.COFFRelocation, raw: bytes
+ self, base: int, relocation: _schema.COFFRelocation, raw: bytes | bytearray
) -> _stencils.Hole:
match relocation:
case {
@@ -348,7 +348,10 @@ def _handle_section(
}, section_type
def _handle_relocation(
- self, base: int, relocation: _schema.ELFRelocation, raw: bytes
+ self,
+ base: int,
+ relocation: _schema.ELFRelocation,
+ raw: bytes | bytearray,
) -> _stencils.Hole:
symbol: str | None
match relocation:
@@ -424,7 +427,10 @@ def _handle_section(
stencil.holes.append(hole)
def _handle_relocation(
- self, base: int, relocation: _schema.MachORelocation, raw: bytes
+ self,
+ base: int,
+ relocation: _schema.MachORelocation,
+ raw: bytes | bytearray,
) -> _stencils.Hole:
symbol: str | None
match relocation:
```
I propose to proactively fix this by using `bytes | bytearray`. Why? Because this union won't allow mutating the `bytes` objects. Why not `collections.abc.Buffer`? `jit` requires python3.11+, and `Buffer` is only available since 3.12; we also don't want to add the `typing_extensions` package as a lone dependency. We also don't want to do some clever `TYPE_CHECKING` hacks when we can just use a union.
<!-- gh-linked-prs -->
### Linked PRs
* gh-129806
* gh-130216
* gh-133540
<!-- /gh-linked-prs -->
| 422f8e9e02e68d45aee3846751a003a70fca13b6 | cfe41037eb5293a051846ddc0b4afdb7a5f60540 |
python/cpython | python__cpython-129771 | # Fatal Python error from `warnings._release_lock()`
# Crash report
### What happened?
Exposing the mutex used by the `_warnings` module in https://github.com/python/cpython/pull/128386 has made it possible to abort the interpreter by calling `warnings._release_lock()`:
```python
import warnings
warnings._release_lock()
```
Error message:
```
Fatal Python error: _PyRecursiveMutex_Unlock: unlocking a recursive mutex that is not owned by the current thread
Python runtime state: initialized
Current thread 0x0000718eb9295740 (most recent call first):
File "<string>", line 1 in <module>
Aborted (core dumped)
```
Found using [fusil](https://github.com/devdanzin/fusil) by @vstinner.
### CPython versions tested on:
CPython main branch, 3.14
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a4+ (heads/main:e1006ce1de, Feb 6 2025, 17:22:01) [GCC 13.3.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-129771
<!-- /gh-linked-prs -->
| ae132edc296d27c6ed04fe4d400c67e3cfb622e8 | e2064d67504bc360c20e03eeea8b360d605cb439 |
python/cpython | python__cpython-129764 | # Clean up "lltrace"
We should get rid of the `LLTRACE` macro and just use `Py_DEBUG` instead (they're both used interchangeably anyway).
Also, we can check `PYTHON_LLTRACE` once at startup, instead of doing it constantly all over the place.
<!-- gh-linked-prs -->
### Linked PRs
* gh-129764
<!-- /gh-linked-prs -->
| fbaa6c8ff06cf885d9b8c8ea6cf25bab3781a2bd | f52a3a51eb711e16445307ff1ce28e94ff4b1535 |
python/cpython | python__cpython-134238 | # Data race on `block->next` in `mi_block_set_nextx`
# Bug report
I've seen this in *non-debug* TSan builds. The TSan report looks like:
```
Write of size 8 at 0x7fffc4043690 by thread T2692:
#0 mi_block_set_nextx /raid/sgross/cpython/./Include/internal/mimalloc/mimalloc/internal.h:652:15 (python+0x2ce71e) (BuildId: 2d15b5a5260b454c4f23bd5e53d32d43bfb806c4)
#1 _mi_free_block_mt /raid/sgross/cpython/Objects/mimalloc/alloc.c:467:9 (python+0x2ce71e)
#2 _mi_free_block /raid/sgross/cpython/Objects/mimalloc/alloc.c:506:5 (python+0x2a8b9a) (BuildId: 2d15b5a5260b454c4f23bd5e53d32d43bfb806c4)
#3 _mi_free_generic /raid/sgross/cpython/Objects/mimalloc/alloc.c:524:3 (python+0x2a8b9a)
#4 mi_free /raid/sgross/cpython/Objects/mimalloc/alloc.c (python+0x2c765b) (BuildId: 2d15b5a5260b454c4f23bd5e53d32d43bfb806c4)
#5 _PyObject_MiFree /raid/sgross/cpython/Objects/obmalloc.c:284:5 (python+0x2c765b)
...
Previous atomic read of size 8 at 0x7fffc4043690 by thread T2690:
#0 _Py_atomic_load_uintptr_relaxed /raid/sgross/cpython/./Include/cpython/pyatomic_gcc.h:375:10 (python+0x4d0341) (BuildId: 2d15b5a5260b454c4f23bd5e53d32d43bfb806c4)
#1 _Py_IsOwnedByCurrentThread /raid/sgross/cpython/./Include/object.h:252:12 (python+0x4d0341)
#2 _Py_TryIncrefFast /raid/sgross/cpython/./Include/internal/pycore_object.h:560:9 (python+0x4d0341)
#3 _Py_TryIncrefCompare /raid/sgross/cpython/./Include/internal/pycore_object.h:599:9 (python+0x4d0341)
#4 PyMember_GetOne /raid/sgross/cpython/Python/structmember.c:99:18 (python+0x4d0054) (BuildId: 2d15b5a5260b454c4f23bd5e53d32d43bfb806c4)
#5 member_get /raid/sgross/cpython/Objects/descrobject.c:179:12 (python+0x2056aa) (BuildId: 2d15b5a5260b454c4f23bd5e53d32d43bfb806c4)
...
SUMMARY: ThreadSanitizer: data race /raid/sgross/cpython/./Include/internal/mimalloc/mimalloc/internal.h:652:15 in mi_block_set_nextx
```
This happens when we call `_Py_TryIncrefCompare()` or `_Py_TryXGetRef` or similar on an object that may be concurrently freed. Perhaps surprisingly, this is a supported operation. See https://peps.python.org/pep-0703/#mimalloc-changes-for-optimistic-list-and-dict-access.
The problem is `mi_block_set_nextx` doesn't use a relaxed store, so this is a data race because the [mimalloc freelist pointer](https://github.com/python/cpython/blob/0d68b14a0d8f493b2f403f64608bcfc055457053/Include/internal/mimalloc/mimalloc/types.h#L237-L239) may overlap the `ob_tid` field. The mimalloc freelist pointer is at the beginning of the freed memory block and `ob_tid` is the first field in `PyObject`.
You won't see this data race if:
* The object uses `Py_TPFLAGS_MANAGED_DICT`. In that case the managed dictionary pointer sits at the beginning of the object, before `ob_tid`. That is fine because, unlike `ob_tid`, the managed dictionary pointer is never accessed concurrently with freeing the object.
* If CPython is built with `--with-pydebug`. The debug allocator sticks two extra words at the beginning of each allocation, so the freelist pointers will overlap with those (this is also fine).
Here are two options:
* Use relaxed stores in mimalloc, such as in `mi_block_set_nextx`. There's about six of these assignments -- not terrible to change -- but I don't love the idea of modifications to mimalloc that don't make sense to upstream, and these only make sense in the context of free threaded CPython.
* Reorder `PyObject` in the free threading build so that `ob_type` is the first field. This avoids any overlap with `ob_tid`. It's annoying to break ABI or change the PyObject header though.
cc @mpage @Yhg1s
<!-- gh-linked-prs -->
### Linked PRs
* gh-134238
* gh-134352
* gh-134353
<!-- /gh-linked-prs -->
| 317c49622397222b7c7fb49837e6b1fd7e82a80d | dd7f1130570d50461b2a0f81ab01c55b9ce93700 |
python/cpython | python__cpython-129754 | # `configure` help for tail call interpreter is wrong
# Bug report
```
--tail-call-interp enable tail-calling interpreter in evaluation loop
and rest of CPython
```
But the flag is `--with-tail-call-interp`
cc @Fidget-Spinner
<!-- gh-linked-prs -->
### Linked PRs
* gh-129754
<!-- /gh-linked-prs -->
| 43e024021392c8c70e5a56cdf7428ced45d73688 | e1e85204edbf8c0c9ba1e50c74ac8708553585d8 |
python/cpython | python__cpython-129738 | # Race between grow_thread_array and _Py_qsbr_reserve under free threading
# Bug report
### Bug description:
I don't have a succinct reproducer for this bug yet, but I saw the following race in JAX CI:
https://github.com/jax-ml/jax/issues/26359
```
WARNING: ThreadSanitizer: data race (pid=208275)
Write of size 8 at 0x555555d43b60 by thread T121:
#0 grow_thread_array /__w/jax/jax/cpython/Python/qsbr.c:101:19 (python3.13+0x4a3905) (BuildId: 8f8869b5f3143bd14dda26aa2bf37336b4902370)
#1 _Py_qsbr_reserve /__w/jax/jax/cpython/Python/qsbr.c:203:13 (python3.13+0x4a3905)
#2 new_threadstate /__w/jax/jax/cpython/Python/pystate.c:1569:27 (python3.13+0x497df2) (BuildId: 8f8869b5f3143bd14dda26aa2bf37336b4902370)
#3 PyGILState_Ensure /__w/jax/jax/cpython/Python/pystate.c:2766:16 (python3.13+0x49af78) (BuildId: 8f8869b5f3143bd14dda26aa2bf37336b4902370)
#4 nanobind::gil_scoped_acquire::gil_scoped_acquire() /proc/self/cwd/external/nanobind/include/nanobind/nb_misc.h:15:43 (xla_extension.so+0xa4fe551) (BuildId: 32eac14928efa68545d22a6013f16aa63a686fef)
#5 xla::CpuCallback::PrepareAndCall(void*, void**) /proc/self/cwd/external/xla/xla/python/callback.cc:67:26 (xla_extension.so+0xa4fe551)
#6 xla::XlaPythonCpuCallback(void*, void**, XlaCustomCallStatus_*) /proc/self/cwd/external/xla/xla/python/callback.cc:177:22 (xla_extension.so+0xa500c9a) (BuildId: 32eac14928efa68545d22a6013f16aa63a686fef)
...
Previous read of size 8 at 0x555555d43b60 by thread T124:
#0 _Py_qsbr_reserve /__w/jax/jax/cpython/Python/qsbr.c:216:47 (python3.13+0x4a3ad7) (BuildId: 8f8869b5f3143bd14dda26aa2bf37336b4902370)
#1 new_threadstate /__w/jax/jax/cpython/Python/pystate.c:1569:27 (python3.13+0x497df2) (BuildId: 8f8869b5f3143bd14dda26aa2bf37336b4902370)
#2 PyGILState_Ensure /__w/jax/jax/cpython/Python/pystate.c:2766:16 (python3.13+0x49af78) (BuildId: 8f8869b5f3143bd14dda26aa2bf37336b4902370)
#3 nanobind::gil_scoped_acquire::gil_scoped_acquire() /proc/self/cwd/external/nanobind/include/nanobind/nb_misc.h:15:43 (xla_extension.so+0xa4fe551) (BuildId: 32eac14928efa68545d22a6013f16aa63a686fef)
#4 xla::CpuCallback::PrepareAndCall(void*, void**) /proc/self/cwd/external/xla/xla/python/callback.cc:67:26 (xla_extension.so+0xa4fe551)
#5 xla::XlaPythonCpuCallback(void*, void**, XlaCustomCallStatus_*) /proc/self/cwd/external/xla/xla/python/callback.cc:177:22 (xla_extension.so+0xa500c9a) (BuildId: 32eac14928efa68545d22a6013f16aa63a686fef)
...
```
I think what's happening here is that two threads that were *not* created by Python are calling `PyGILState_Ensure` concurrently, so they can call into CPython APIs.
This appears to be an unlocked access on `shared->array` and it would probably be sufficient to move that read under the mutex in `_Py_qsbr_reserve`.
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-129738
* gh-129747
<!-- /gh-linked-prs -->
| b4ff8b22b3066b814c3758f87eaddfa923e657ed | 78377c788e02e91bf43d290d69317198a2e563fd |
python/cpython | python__cpython-130055 | # gzip raising exception when closing with buffer backed by BytesIO
# Bug report
### Bug description:
Hello,
the following snippet raises an exception in Python 3.13.2 while it's fine with Python 3.12.
```python
import io
import gzip
def foo():
buffer = gzip.GzipFile(fileobj=io.BytesIO(), mode="w")
foo()
```
Running this with python3.12 is silent, with Python 3.13.2 instead:
```
Exception ignored in: <gzip on 0x7fa4fd99c550>
Traceback (most recent call last):
File "/usr/lib/python3.13/gzip.py", line 359, in close
fileobj.write(self.compress.flush())
```
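Closing the `GzipFile` explicitly (or via a context manager) while the `BytesIO` is still referenced avoids relying on interpreter-shutdown finalization, which is where the ignored exception arises; a sketch of the safe pattern:

```python
import io
import gzip

buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="w") as gz:
    gz.write(b"payload")

# The stream is flushed deterministically: the gzip magic bytes are present
# and the payload round-trips.
print(buf.getvalue()[:2] == b"\x1f\x8b")  # → True
```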
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-130055
* gh-130669
* gh-130670
<!-- /gh-linked-prs -->
| 7f39137662f637518a74228286e7ec675fa4e27d | e41981704f0a6adb58c3e258b4226619521ce03c |
python/cpython | python__cpython-129721 | # socket.CAN_RAW_ERR_FILTER is not defined in Python 3.11 and later
# Bug report
### Bug description:
When trying to use `socket.CAN_RAW_ERR_FILTER` on a Debian bookworm system, running Python 3.11, an `AttributeError` is raised. This is because the value is no longer defined in `socket.py`/`_socket`.
I have also tested in Python 3.13.1, and the issue is still present there.
Typical usage of the value:
```python
import socket
sock = socket.socket(socket.AF_CAN, socket.SOCK_RAW | socket.SOCK_NONBLOCK, socket.CAN_RAW)
sock.bind((interface_name,))
sock.setsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_ERR_FILTER, socket.CAN_ERR_MASK)
```
Minimized reproduction:
```python
import socket
print(socket.CAN_RAW_ERR_FILTER)
```
Actual output:
```
Traceback (most recent call last):
File "<python-input-1>", line 1, in <module>
print(socket.CAN_RAW_ERR_FILTER)
^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'socket' has no attribute 'CAN_RAW_ERR_FILTER'. Did you mean: 'CAN_RAW_FILTER'?
```
Expected output (taken from Python 3.10.12):
```
2
```
Investigating the issue, it seems to have been introduced in Python 3.11 by the following PR: https://github.com/python/cpython/pull/30066
In this PR, adding the `CAN_RAW_ERR_FILTER` value to `socket` was made conditional on `CAN_RAW_ERR_FILTER` being defined at compile time. However, no change was made to actually define the value, so it is never compiled even on compatible systems.
Note that the `CAN_RAW_ERR_FILTER` value comes from `linux/can/raw.h`, where it is defined as an `enum` value. Thus the C/C++ preprocessor will not see it when evaluating the `#ifdef CAN_RAW_ERR_FILTER` in `Modules/socketmodule.c`.
Excerpt of relevant `raw.h` file from Debian Bookworm:
```c
/* for socket options affecting the socket (not the global system) */
enum {
CAN_RAW_FILTER = 1, /* set 0 .. n can_filter(s) */
CAN_RAW_ERR_FILTER, /* set filter for error frames */
CAN_RAW_LOOPBACK, /* local loopback (default:on) */
CAN_RAW_RECV_OWN_MSGS, /* receive my own msgs (default:off) */
CAN_RAW_FD_FRAMES, /* allow CAN FD frames (default:off) */
CAN_RAW_JOIN_FILTERS, /* all filters must match to trigger */
CAN_RAW_XL_FRAMES, /* allow CAN XL frames (default:off) */
};
```
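Until the constant is restored, affected code can fall back to the literal enum value (a workaround sketch; the fallback `2` is taken from the `raw.h` excerpt above and is only meaningful on Linux with SocketCAN):

```python
import socket

# Use the stdlib constant when present; otherwise fall back to the value
# from linux/can/raw.h (CAN_RAW_ERR_FILTER == 2, per the enum quoted above).
CAN_RAW_ERR_FILTER = getattr(socket, "CAN_RAW_ERR_FILTER", 2)
print(CAN_RAW_ERR_FILTER)
```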
### CPython versions tested on:
3.11, 3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-129721
* gh-132702
<!-- /gh-linked-prs -->
| ce31ae5209c976d28d1c21fcbb06c0ae5e50a896 | 741c6386b8615fbfb4f2e6027556751039119950 |
python/cpython | python__cpython-130389 | # macOS combined architecture platform tags are undocumented, and inconsistent
# Bug report
### Bug description:
While formalising the definition of platform tags for iOS and Android in packaging.python.org/pull/1804, I notice that macOS combined architecture tags (e.g., `universal2`) aren't documented.
The canonical definition is here: https://github.com/python/cpython/blob/cdcacec79f7a216c3c988baa4dc31ce4e76c97ac/Lib/_osx_support.py#L546-L562
However, these tags aren't documented as part of the [mac usage guide](https://docs.python.org/3/using/mac.html), beyond a passing reference to the default installers being "universal2".
The [documentation of the `configure` options that enable these builds](https://docs.python.org/3/using/configure.html#cmdoption-with-universal-archs) adds an extra layer of complexity: it describes options that exist, but (a) the names don't match the values returned by `sysconfig.get_platform()`, and (b) the mapping between the two isn't documented. What architectures are built for a "3-way" build? What's the corresponding wheel platform tag? (i386, ppc and x86_64; and fat3, respectively.)
These options are almost entirely anachronistic as Python isn't maintaining support for i386, ppc or ppc64. The "fix" here might be to remove support for all tags other than `universal2`.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-130389
* gh-130449
* gh-130450
<!-- /gh-linked-prs -->
| 474c388740ca5f8060c074f517dd14c54409126f | 30e892473e0dfe5c3fecabcaac420cefe45e2ed4 |
python/cpython | python__cpython-129708 | # New `compute-changes.py` should be checked with mypy
# Feature or enhancement
Right now we have a special `mypy.ini` in `Tools/build`: https://github.com/python/cpython/blob/cdcacec79f7a216c3c988baa4dc31ce4e76c97ac/Tools/build/mypy.ini#L2
But, it only checks one file. Since newly added https://github.com/python/cpython/blob/main/Tools/build/compute-changes.py is fully annotated, it should be included as well.
Refs https://github.com/python/cpython/pull/129627
I will send a PR.
<!-- gh-linked-prs -->
### Linked PRs
* gh-129708
<!-- /gh-linked-prs -->
| 8b2fb629334613fa34a79f0a53d297f77121ed58 | cb640b659e14cb0a05767054f95a9d25787b472d |
python/cpython | python__cpython-130089 | # Data race in `intern_common` when interning str objects in the free threading build
# Bug report
When running `./python -m test test_exceptions --parallel-threads=10` in a TSAN build:
```
WARNING: ThreadSanitizer: data race (pid=763025)
Atomic read of size 4 at 0x7fffbe0718cc by thread T190:
#0 _Py_atomic_load_uint32_relaxed Include/cpython/pyatomic_gcc.h:367 (python+0x1d386e) (BuildId: 5612db6eff0f51c7fd99ee4409b2ceafceea484c)
#1 Py_INCREF Include/refcount.h:261 (python+0x1d386e)
#2 _Py_NewRef Include/refcount.h:518 (python+0x1d386e)
#3 dict_setdefault_ref_lock_held Objects/dictobject.c:4386 (python+0x1e6817) (BuildId: 5612db6eff0f51c7fd99ee4409b2ceafceea484c)
#4 PyDict_SetDefaultRef Objects/dictobject.c:4403 (python+0x1e6a37) (BuildId: 5612db6eff0f51c7fd99ee4409b2ceafceea484c)
#5 intern_common Objects/unicodeobject.c:15820 (python+0x2a7a8d) (BuildId: 5612db6eff0f51c7fd99ee4409b2ceafceea484c)
#6 _PyUnicode_InternImmortal Objects/unicodeobject.c:15874 (python+0x2e92d1) (BuildId: 5612db6eff0f51c7fd99ee4409b2ceafceea484c)
#7 _PyPegen_new_identifier Parser/pegen.c:549 (python+0xad61f) (BuildId: 5612db6eff0f51c7fd99ee4409b2ceafceea484c)
....
Previous write of size 4 at 0x7fffbe0718cc by thread T182:
#0 Py_SET_REFCNT Include/refcount.h:176 (python+0x2a7c9a) (BuildId: 5612db6eff0f51c7fd99ee4409b2ceafceea484c)
#1 Py_SET_REFCNT Include/refcount.h:145 (python+0x2a7c9a)
#2 intern_common Objects/unicodeobject.c:15848 (python+0x2a7c9a)
#3 _PyUnicode_InternImmortal Objects/unicodeobject.c:15874 (python+0x2e92d1) (BuildId: 5612db6eff0f51c7fd99ee4409b2ceafceea484c)
#4 _PyPegen_new_identifier Parser/pegen.c:549 (python+0xad61f) (BuildId: 5612db6eff0f51c7fd99ee4409b2ceafceea484c)
...
```
There are a few thread-safety issues with `intern_common`:
* It can return a string that's not marked as interned because we insert into `interned` before we mark the string as interned. This can be fixed with some additional locking.
* The `Py_SET_REFCNT(s, Py_REFCNT(s) - 2)` modification is not thread-safe with respect to other reference count modifications in the free threading build
* `_Py_SetImmortal` is not thread-safe in some circumstances (see https://github.com/python/cpython/issues/113956)
The `_Py_SetImmortal()` issue is tricky and I think it's unlikely to cause problems in practice, so I think we can defer dealing with that for now.
<!-- gh-linked-prs -->
### Linked PRs
* gh-130089
<!-- /gh-linked-prs -->
| b9d2ee687cfca6365e26e156b1e22824b16dabb8 | fc8c99a8ce483db23fa624592457e350e99193f6 |
python/cpython | python__cpython-129727 | # Add information about IDLE to online contents
Clicking [Complete table of contents](https://docs.python.org/3/contents.html) on docs.python.org shows, in part,

A short line like:
> Python's Integrated Development Environment
Maybe the remainder can be omitted?
@terryjreedy
<!-- gh-linked-prs -->
### Linked PRs
* gh-129727
* gh-129864
* gh-129865
<!-- /gh-linked-prs -->
| 33a7094aa680bca66582fec4dcda7d150eb90cd8 | 1bccd6c34f82f8373c320792323bfd7f7a328bc7 |
python/cpython | python__cpython-129830 | # test_fstring.py:1655: SyntaxWarning: invalid escape sequence
# Bug report
### Bug description:
This is a failure during the set of tests performed during build of 3.13.2:
```
default: 0:00:22 load avg: 1.70 [18/44] test_fstring
default: /root/Python-3.13.2/Lib/test/test_fstring.py:1655: SyntaxWarning: invalid escape sequence '\N'
default: self.assertEqual(f'{b"\N{OX}"=}', 'b"\\N{OX}"=b\'\\\\N{OX}\'')
```
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-129830
* gh-130068
<!-- /gh-linked-prs -->
| 2dd018848ca254047835850b8b95d805cbf7efaf | e09442089eb86d88d4b5a96e56f713cb31173ae9 |
python/cpython | python__cpython-129679 | # ConfigParser: writing a config with an empty unnamed section adds an extra newline to the beginning of the file
# Bug report
### Bug description:
If you read a config with an empty unnamed section using ConfigParser with allow_unnamed_section set to True and write it back, the resulting file will contain an extra newline at the beginning:
```python
from configparser import ConfigParser
from io import StringIO
cfg = ConfigParser(allow_unnamed_section=True)
cfg.read_string('[sect]')
output = StringIO()
cfg.write(output)
print(repr(output.getvalue())) # '\n[sect]\n\n'
```
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-129679
<!-- /gh-linked-prs -->
| ef8eeca9d8d464cff0e7bc5c428e9b8ba4936962 | b9d2ee687cfca6365e26e156b1e22824b16dabb8 |
python/cpython | python__cpython-129704 | # MemoryError freelist is not thread-safe in free threaded build
# Bug report
The `MemoryError` freelist isn't thread-safe if the GIL is disabled:
https://github.com/python/cpython/blob/285c1c4e9543299c8bf69ceb39a424782b8c632e/Objects/exceptions.c#L3850-L3860
Most of the freelists were made thread-safe by making them per-thread in the free threaded build (using `pycore_freelist.h`), but we don't want to do that for `MemoryError` because its freelist serves a different purpose (it's not really for performance). I think we should just use a lock for `MemoryError`'s freelist.
<!-- gh-linked-prs -->
### Linked PRs
* gh-129704
* gh-129742
<!-- /gh-linked-prs -->
| 51b4edb1a4092f60d84f7d14eb41c12085e39c31 | 4d56c40440c9fd4499d61d24977336d8cd8d8d83 |
python/cpython | python__cpython-130686 | # Document that public headers target C11 and C++11
Our public documentation should say that you need C11 or C++11 to `#include <Python.h>`.
Internally, we need to be more lenient & careful (though we won’t promise upfront how much). It's not OK to just break C99 support or slightly out-of-spec compilers.
We should test as much as we can; the devguide should be updated with details.
API WG discussion/vote: https://github.com/capi-workgroup/decisions/issues/30#issuecomment-2610090581
Discourse topic: https://discuss.python.org/t/python-3-14-headers-will-require-c11-and-c-11/79481
<!-- gh-linked-prs -->
### Linked PRs
* gh-130686
* gh-130688
* gh-130692
<!-- /gh-linked-prs -->
### Devguide PR
* https://github.com/python/devguide/pull/1524 | ab11c097052757b79060c75dd4835c2431e752b7 | 003e6d2b9776c07147a9c628eb028fd2ac3f0008 |
python/cpython | python__cpython-129661 | # Drop test_embed from PGO training
# Feature or enhancement
### Proposal:
The number of unit tests run for the PGO profile build was reduced in gh-80225 (bpo-36044). Additionally, recent versions of CPython can skip `test_embed`, which has grown large relative to its contribution to the profile.
The git main branch has already disabled it in b00e1254fc00941bf91e41138940e73fd22e1cbf; this change should be made explicit and backported so that benchmarks can be compared consistently.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
#129377
<!-- gh-linked-prs -->
### Linked PRs
* gh-129661
* gh-129662
* gh-129685
<!-- /gh-linked-prs -->
| 285c1c4e9543299c8bf69ceb39a424782b8c632e | f61afca262d3a0aa6a8a501db0b1936c60858e35 |
python/cpython | python__cpython-129647 | # Update the locale alias mapping
The `locale_alias` mapping in the `locale` module is manually generated from the `locale.aliases` file from the X.org distribution and the list of locales supported by glibc. As these files change, we need to update `locale_alias` from time to time. It was last updated in 3.8. There have not been many changes since then, but there are some.
Another issue is that "univ" and "universal" map to the non-existing locale "en_US.utf" (it should be "en_US.UTF-8"), as was noticed in #122877. `locale.aliases` no longer contains such aliases; it only contains the aliases "univ.utf8" and "universal.utf8@ucs4".
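For reference, an editor's sketch of how the mapping is exercised through `locale.normalize()`:

```python
import locale

# locale_alias backs locale.normalize(), which expands an alias into a
# full "language_territory.codeset" locale name:
assert locale.normalize('en_US') == 'en_US.ISO8859-1'
```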
<!-- gh-linked-prs -->
### Linked PRs
* gh-129647
* gh-129658
<!-- /gh-linked-prs -->
| f61afca262d3a0aa6a8a501db0b1936c60858e35 | 14489c1bb44dc2f4179278463fedd9a940b63f41 |
python/cpython | python__cpython-129644 | # `PyList_SetItem` missing atomic store
# Bug report
`PyList_SetItem` currently uses `Py_XSETREF` to set the item and decref the old one. However, the store is not atomic, so it can race with a concurrent read. The fix is to use an atomic store with release ordering to set the new item and then decref the old object.
cc @colesbury @Yhg1s
<!-- gh-linked-prs -->
### Linked PRs
* gh-129644
* gh-129677
* gh-129680
* gh-129725
<!-- /gh-linked-prs -->
| fb5d1c923677e7982360bad934d70cf9ad3366ca | e41ec8e18b078024b02a742272e675ae39778536 |
python/cpython | python__cpython-129604 | # Segfault if `pysqlite_Row->description == Py_None`
# Crash report
### What happened?
Most of the code in `Modules/_sqlite/row.c` assumes `->description` is a tuple. However, it may be `None`. Since it is possible to craft a `sqlite3.Row` object "by hand", it is easy to provoke segfaults on paths that involve the `PyTuple` API when `description == Py_None`. Real code would never directly instantiate a row object; it would be implicitly created by the cursor (via the `.fetch*()` APIs or by iterating over the cursor). However, I don't think we should let a possible segfault hang around.
```python
import sqlite3
cx = sqlite3.connect(":memory:")
cu = cx.cursor()
row = sqlite3.Row(cu, (1,2))
row.keys() # <= boom
```
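For contrast, an editor's demonstration that rows created the supported way, by the cursor, always carry a usable description:

```python
import sqlite3

cx = sqlite3.connect(":memory:")
cx.row_factory = sqlite3.Row
# Rows produced by the cursor inherit a real (tuple) description,
# so the name-based APIs work as documented:
row = cx.execute("SELECT 1 AS a, 2 AS b").fetchone()
assert row.keys() == ["a", "b"]
assert (row["a"], row["b"]) == (1, 2)
```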
### CPython versions tested on:
3.14, 3.13, 3.12, CPython main branch
### Operating systems tested on:
macOS
### Output from running 'python -VV' on the command line:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-129604
* gh-129923
* gh-129924
<!-- /gh-linked-prs -->
| 7e6ee50b6b8c760bcefb92ab4ddbc3d85d37a834 | d05053a203d922c8056f12ef3c9338229fdce043 |
python/cpython | python__cpython-129620 | # `ast.parse(..., mode='single')` accepts multiple statements, which are then not all unparsed
# Bug report
### Bug description:
```python
$ ./python
Python 3.14.0a4+ (heads/test:d906bde250, Feb 2 2025, 11:08:38) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import ast
>>> a = ast.parse('i = 1; j = 2', mode='single')
>>> print(ast.dump(a, indent=2))
Interactive(
body=[
Assign(
targets=[
Name(id='i', ctx=Store())],
value=Constant(value=1)),
Assign(
targets=[
Name(id='j', ctx=Store())],
value=Constant(value=2))])
>>> ast.unparse(a)
'j = 2'
```
My guess is that `ast.parse()` should probably raise an error here?
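For comparison (an editor's demonstration), the default `'exec'` mode keeps and unparses both statements:

```python
import ast

# mode='exec' (the default) produces a Module whose body keeps every
# statement, and ast.unparse() round-trips all of them:
tree = ast.parse('i = 1; j = 2')
assert len(tree.body) == 2
assert ast.unparse(tree) == 'i = 1\nj = 2'
```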
### CPython versions tested on:
3.14, 3.12, 3.13, 3.11, 3.10
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-129620
<!-- /gh-linked-prs -->
| a8cb5e4a43a0f4699590a746ca02cd688480ba15 | 20c5f969dd12a0b3d5ea7c03fedf3e2ac202c2c0 |
python/cpython | python__cpython-129909 | # Update bundled pip to 25.0.1
# Feature
### Description:
A new version of pip was recently released, but `ensurepip` still [uses](https://github.com/python/cpython/blob/3.13/Lib/ensurepip/__init__.py#L13C1-L13C24) an older version. As a result, when creating a new virtual environment, `ensurepip` installs an outdated version of pip, leading to a warning whenever pip is used.
For example:
```bash
python3 -m venv venv
source venv/bin/activate
pip install python-dotenv
```
Produces the following output:
```
Collecting python-dotenv
Using cached python_dotenv-1.0.1-py3-none-any.whl.metadata (23 kB)
Using cached python_dotenv-1.0.1-py3-none-any.whl (19 kB)
Installing collected packages: python-dotenv
Successfully installed python-dotenv-1.0.1
[notice] A new release of pip is available: 24.3.1 -> 25.0
[notice] To update, run: pip install --upgrade pip
```
This results in an unnecessary warning about an outdated pip version.
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-129909
* gh-129946
* gh-129947
<!-- /gh-linked-prs -->
| b8f7bddd6c6b5d2d13c97882042ce808aceca5a8 | a1a6df282d732b54ba99de93416f8f9d85b3de2f |
python/cpython | python__cpython-129570 | # The function unicodedata.normalize() should always return an instance of the built-in str type.
# Bug report
### Bug description:
The current implementation of unicodedata.normalize() returns a new reference to the input string when the data is already normalized. It is fine for instances of the built-in str type. However, if the function receives an instance of a subclass of str, the return type becomes inconsistent.
```
import unicodedata
class MyStr(str):
pass
s1 = unicodedata.normalize('NFKC', MyStr('Å')) # U+00C5 (already normalized)
s2 = unicodedata.normalize('NFKC', MyStr('Å')) # U+0041 U+030A (not normalized)
print(type(s1), type(s2)) # <class '__main__.MyStr'> <class 'str'>
```
In addition, passing instances of user-defined str subclasses can lead to unexpected sharing of modifiable attributes:
```
import unicodedata
class MyStr(str):
pass
original = MyStr('ascii string')
original.is_original = True
verified = unicodedata.normalize('NFKC', original)
verified.is_original = False
print(original.is_original) # False
```
The solution would be to use the PyUnicode_FromObject() API for early returns in the normalize() function implementation instead of Py_NewRef() to make sure that the function always returns an instance of the built-in str type.
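Until that lands, a workaround sketch on the Python side is to coerce the result explicitly (note that `str(s)` on a str subclass returns a plain `str` copy):

```python
import unicodedata

def normalize_to_str(form, s):
    # Workaround sketch: coerce the result to the built-in str so the
    # return type does not depend on whether the input was normalized.
    return str(unicodedata.normalize(form, s))

class MyStr(str):
    pass

assert type(normalize_to_str('NFKC', MyStr('\u00c5'))) is str    # already normalized
assert type(normalize_to_str('NFKC', MyStr('A\u030a'))) is str   # needs normalization
```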
### CPython versions tested on:
3.11, 3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-129570
* gh-130403
* gh-130404
<!-- /gh-linked-prs -->
| c359fcd2f50d02e4709e9ca3175c1ba1ea6dc7ef | 9bf73c032fbf0ea27ebf6e53223c3bc69ee0dbc5 |
python/cpython | python__cpython-130233 | # Inconsistent name mangling in `TypedDict` in function and class forms
# Bug report
Let's say that you have a dict like `{"__key": 1}` and you want to type it.
You can write:
```python
>>> import typing
>>> class A(typing.TypedDict):
... __key: int
>>> A.__mutable_keys__
frozenset({'_A__key'})
```
and:
```python
>>> B = typing.TypedDict("B", [("__key", int)])
>>> B.__mutable_keys__
frozenset({'__key'})
```
Note that `A` mangles `__key` like a regular private name, while `B` does not.
I guess that this is expected, but!
The docs (https://docs.python.org/3/library/typing.html#typing.TypedDict) do not say anything about this behavior. We only mention that the functional form should be used for invalid identifiers. But `__key` is a valid identifier.
We don't have explicit tests for this either.
And Typing Spec does not mention this as well: https://typing.readthedocs.io/en/latest/spec/typeddict.html
So, what we can do:
- Do not mangle names in this case (hard and problematic: it can break existing stuff)
- Document and test current behavior (my vote)
Please share your thoughts on this. I am willing to send a PR with the fix.
<!-- gh-linked-prs -->
### Linked PRs
* gh-130233
* gh-130841
* gh-130842
<!-- /gh-linked-prs -->
| 63ffb406bb000a42b0dbddcfc01cb98a12f8f76a | f67ff9e82071b21c1960401aed4844b00b5bfb53 |
python/cpython | python__cpython-129560 | # Add `bytearray.resize()` method
# Feature or enhancement
### Proposal:
Add `bytearray.resize()` which wraps [`PyByteArray_Resize`](https://docs.python.org/3/c-api/bytearray.html#c.PyByteArray_Resize)
`PyByteArray_Resize` is part of the C Stable API and allows efficiently expanding a bytearray object's buffer in place (when possible / most efficient) without needing another object to "hold" the data temporarily or copying the data from one storage to another (e.g. `bytearray.extend(range(0, 20))`, `a = bytearray(); a += b'temp'`).
This can be somewhat emulated currently by appending a `range` or another iterator that provides `__length_hint__`, but that still requires copying byte data out of the iterator. `PyByteArray_Resize` doesn't require setting or clearing the newly allocated space; it just always ensures the data ends with a null byte `\0`.
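An editor's sketch of today's emulation options (each still writes or copies the new bytes, unlike an in-place `resize()`):

```python
buf = bytearray(b"data")

# Grow: append zero bytes (allocates and copies a temporary bytes object).
buf.extend(bytes(4))
assert len(buf) == 8 and buf == b"data\x00\x00\x00\x00"

# Shrink: delete the tail slice in place.
del buf[4:]
assert buf == b"data"
```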
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-129560
<!-- /gh-linked-prs -->
| 5fb019fc29a90e722aff20a9522bf588351358cd | 7d9a22f50923309955a2caf7d57013f224071e6e |
python/cpython | python__cpython-129563 | # Race on GC enabled state under free threading
# Bug report
### Bug description:
The following code, when run under Python 3.13 commit 0468ea12305ef5a0a3d1dc4af8a82fb94d202cd6 built with TSAN, produces a data race error:
```python
import concurrent.futures
import functools
import gc
import threading
num_threads = 10
def closure(b):
b.wait()
for _ in range(100):
gc.disable()
gc.enable()
with concurrent.futures.ThreadPoolExecutor(max_workers=num_threads) as executor:
for i in range(100):
b = threading.Barrier(num_threads)
for _ in range(num_threads):
executor.submit(functools.partial(closure, b))
```
TSAN report:
```
WARNING: ThreadSanitizer: data race (pid=3441855)
Write of size 4 at 0x5576617e4e34 by thread T10:
#0 PyGC_Disable /usr/local/google/home/phawkins/p/cpython/Python/gc_free_threading.c:1508:22 (python3.13+0x444e39) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#1 gc_disable_impl /usr/local/google/home/phawkins/p/cpython/Modules/gcmodule.c:51:5 (python3.13+0x4d8cbf) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#2 gc_disable /usr/local/google/home/phawkins/p/cpython/Modules/clinic/gcmodule.c.h:45:12 (python3.13+0x4d8cbf)
#3 cfunction_vectorcall_NOARGS /usr/local/google/home/phawkins/p/cpython/Objects/methodobject.c:484:24 (python3.13+0x289ed9) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#4 _PyObject_VectorcallTstate /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_call.h:168:11 (python3.13+0x1ead4a) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#5 PyObject_Vectorcall /usr/local/google/home/phawkins/p/cpython/Objects/call.c:327:12 (python3.13+0x1ead4a)
#6 _PyEval_EvalFrameDefault /usr/local/google/home/phawkins/p/cpython/Python/generated_cases.c.h:813:23 (python3.13+0x3e271b) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#7 _PyEval_EvalFrame /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_ceval.h:119:16 (python3.13+0x3de84a) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#8 _PyEval_Vector /usr/local/google/home/phawkins/p/cpython/Python/ceval.c:1812:12 (python3.13+0x3de84a)
#9 _PyFunction_Vectorcall /usr/local/google/home/phawkins/p/cpython/Objects/call.c (python3.13+0x1eb3bf) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#10 _PyObject_VectorcallTstate /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_call.h:168:11 (python3.13+0x5723a2) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#11 partial_vectorcall /usr/local/google/home/phawkins/p/cpython/./Modules/_functoolsmodule.c:252:16 (python3.13+0x5723a2)
#12 _PyVectorcall_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:273:16 (python3.13+0x1eb033) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#13 _PyObject_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:348:16 (python3.13+0x1eb033)
#14 PyObject_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:373:12 (python3.13+0x1eb0b5) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#15 _PyEval_EvalFrameDefault /usr/local/google/home/phawkins/p/cpython/Python/generated_cases.c.h:1355:26 (python3.13+0x3e4902) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#16 _PyEval_EvalFrame /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_ceval.h:119:16 (python3.13+0x3de84a) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#17 _PyEval_Vector /usr/local/google/home/phawkins/p/cpython/Python/ceval.c:1812:12 (python3.13+0x3de84a)
#18 _PyFunction_Vectorcall /usr/local/google/home/phawkins/p/cpython/Objects/call.c (python3.13+0x1eb3bf) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#19 _PyObject_VectorcallTstate /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_call.h:168:11 (python3.13+0x1ef38f) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#20 method_vectorcall /usr/local/google/home/phawkins/p/cpython/Objects/classobject.c:70:20 (python3.13+0x1ef38f)
#21 _PyVectorcall_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:273:16 (python3.13+0x1eb033) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#22 _PyObject_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:348:16 (python3.13+0x1eb033)
#23 PyObject_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:373:12 (python3.13+0x1eb0b5) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#24 thread_run /usr/local/google/home/phawkins/p/cpython/./Modules/_threadmodule.c:337:21 (python3.13+0x564a82) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#25 pythread_wrapper /usr/local/google/home/phawkins/p/cpython/Python/thread_pthread.h:243:5 (python3.13+0x4bdd87) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
Previous write of size 4 at 0x5576617e4e34 by thread T9:
#0 PyGC_Enable /usr/local/google/home/phawkins/p/cpython/Python/gc_free_threading.c:1499:22 (python3.13+0x444dc9) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#1 gc_enable_impl /usr/local/google/home/phawkins/p/cpython/Modules/gcmodule.c:37:5 (python3.13+0x4d8c9f) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#2 gc_enable /usr/local/google/home/phawkins/p/cpython/Modules/clinic/gcmodule.c.h:27:12 (python3.13+0x4d8c9f)
#3 cfunction_vectorcall_NOARGS /usr/local/google/home/phawkins/p/cpython/Objects/methodobject.c:484:24 (python3.13+0x289ed9) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#4 _PyObject_VectorcallTstate /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_call.h:168:11 (python3.13+0x1ead4a) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#5 PyObject_Vectorcall /usr/local/google/home/phawkins/p/cpython/Objects/call.c:327:12 (python3.13+0x1ead4a)
#6 _PyEval_EvalFrameDefault /usr/local/google/home/phawkins/p/cpython/Python/generated_cases.c.h:813:23 (python3.13+0x3e271b) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#7 _PyEval_EvalFrame /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_ceval.h:119:16 (python3.13+0x3de84a) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#8 _PyEval_Vector /usr/local/google/home/phawkins/p/cpython/Python/ceval.c:1812:12 (python3.13+0x3de84a)
#9 _PyFunction_Vectorcall /usr/local/google/home/phawkins/p/cpython/Objects/call.c (python3.13+0x1eb3bf) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#10 _PyObject_VectorcallTstate /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_call.h:168:11 (python3.13+0x5723a2) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#11 partial_vectorcall /usr/local/google/home/phawkins/p/cpython/./Modules/_functoolsmodule.c:252:16 (python3.13+0x5723a2)
#12 _PyVectorcall_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:273:16 (python3.13+0x1eb033) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#13 _PyObject_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:348:16 (python3.13+0x1eb033)
#14 PyObject_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:373:12 (python3.13+0x1eb0b5) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#15 _PyEval_EvalFrameDefault /usr/local/google/home/phawkins/p/cpython/Python/generated_cases.c.h:1355:26 (python3.13+0x3e4902) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#16 _PyEval_EvalFrame /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_ceval.h:119:16 (python3.13+0x3de84a) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#17 _PyEval_Vector /usr/local/google/home/phawkins/p/cpython/Python/ceval.c:1812:12 (python3.13+0x3de84a)
#18 _PyFunction_Vectorcall /usr/local/google/home/phawkins/p/cpython/Objects/call.c (python3.13+0x1eb3bf) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#19 _PyObject_VectorcallTstate /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_call.h:168:11 (python3.13+0x1ef38f) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#20 method_vectorcall /usr/local/google/home/phawkins/p/cpython/Objects/classobject.c:70:20 (python3.13+0x1ef38f)
#21 _PyVectorcall_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:273:16 (python3.13+0x1eb033) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
#22 _PyObject_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:348:16 (python3.13+0x1eb033)
#23 PyObject_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:373:12 (python3.13+0x1eb0b5) (BuildId: 64364346a930c8e649018fed7b43c1cc0e8920e5)
```
This is my attempt to reproduce a race I saw in the JAX tsan CI, which is similar but not identical:
```
WARNING: ThreadSanitizer: data race (pid=145998)
Read of size 4 at 0x55cb2fa81e34 by thread T64 (mutexes: read M0):
#0 gc_should_collect /__w/jax/jax/cpython/Python/gc_free_threading.c:1062:59 (python3.13+0x44732b) (BuildId: c9937216e103905f871b62bf50b66fc5a8e96f80)
#1 record_allocation /__w/jax/jax/cpython/Python/gc_free_threading.c:1086:13 (python3.13+0x44732b)
#2 gc_alloc /__w/jax/jax/cpython/Python/gc_free_threading.c:1708:5 (python3.13+0x44732b)
#3 _PyObject_GC_New /__w/jax/jax/cpython/Python/gc_free_threading.c:1720:20 (python3.13+0x447101) (BuildId: c9937216e103905f871b62bf50b66fc5a8e96f80)
#4 tuple_iter /__w/jax/jax/cpython/Objects/tupleobject.c:1110:10 (python3.13+0x2e0700) (BuildId: c9937216e103905f871b62bf50b66fc5a8e96f80)
#5 PyObject_GetIter /__w/jax/jax/cpython/Objects/abstract.c:2860:25 (python3.13+0x1c1f12) (BuildId: c9937216e103905f871b62bf50b66fc5a8e96f80)
...
Previous write of size 4 at 0x55cb2fa81e34 by thread T70 (mutexes: read M0):
#0 PyGC_Disable /__w/jax/jax/cpython/Python/gc_free_threading.c:1508:22 (python3.13+0x444e39) (BuildId: c9937216e103905f871b62bf50b66fc5a8e96f80)
#1 __Pyx_PyType_Ready _traversal.c (_traversal.cpython-313t-x86_64-linux-gnu.so+0x7b16) (BuildId: 0e187ad70a2faa86eeb3f1292897a0491466b409)
#2 exec_builtin_or_dynamic /__w/jax/jax/cpython/Python/import.c:815:12 (python3.13+0x4658d0) (BuildId: c9937216e103905f871b62bf50b66fc5a8e96f80)
#3 _imp_exec_dynamic_impl /__w/jax/jax/cpython/Python/import.c:4756:12 (python3.13+0x4658d0)
#4 _imp_exec_dynamic /__w/jax/jax/cpython/Python/clinic/import.c.h:513:21 (python3.13+0x4658d0)
#5 cfunction_vectorcall_O /__w/jax/jax/cpython/Objects/methodobject.c:512:24 (python3.13+0x28a135) (BuildId: c9937216e103905f871b62bf50b66fc5a8e96f80)
#6 _PyVectorcall_Call /__w/jax/jax/cpython/Objects/call.c:273:16 (python3.13+0x1eb033) (BuildId: c9937216e103905f871b62bf50b66fc5a8e96f80)
...
```
i.e., it looks like there might be a race between `gc_should_collect` and `PyGC_Disable` as well.
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-129563
* gh-129756
<!-- /gh-linked-prs -->
| b184abf074c0e1f379a238f07da5616460f36b93 | 4e3330f054b91049c7260eb02b1e2c3808958e11 |
python/cpython | python__cpython-129880 | # Syntax error on '{z} if z is not None else pass'
# Bug report
### Bug description:
My question is in the title.
In the tutorial, I read: Use 'pass' in places where code is required syntactically, but none is needed.
In order to save indentation, I used a conditional assignment in some segment of code. But I only want to include z in the set x if it is not None, because later the None disturbs.
So my thesis is: the code in the title is syntactically correct, but still, I get a syntax error.
Thank you for your kind consideration
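For reference, an editor's sketch of the working alternatives: `pass` is a statement, while a conditional *expression* must produce a value on both branches, so the title's code cannot parse:

```python
z = None
x = {1, 2}

# Statement form: only add z when it is not None.
if z is not None:
    x.add(z)
assert x == {1, 2}

# Expression form: both branches produce a value (here, a set).
extra = {z} if z is not None else set()
assert extra == set()
```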
### CPython versions tested on:
3.10
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-129880
<!-- /gh-linked-prs -->
| bcc9a5dddb777c8195b56645a15298cc0949ed9b | 51d4bf1e0e5349090da72721c865b6c2b28277f3 |
python/cpython | python__cpython-129504 | # Handling errors in ctypes callbacks
If an error happens in the callback or in converting the result of the callback, it is handled by calling `PyErr_FormatUnraisable()`. If an error happens in preparing arguments for the callback, it is handled in a different way: a message is printed to stderr and then `PyErr_Print()` is called. In one case the error is not properly handled.
It is better to handle all errors in a uniform way. `PyErr_Print()` is not suitable for this because it treats SystemExit specially and has the side effect of setting `sys.last_exc` and other variables.
<!-- gh-linked-prs -->
### Linked PRs
* gh-129504
* gh-129517
* gh-129639
<!-- /gh-linked-prs -->
| 9d63ae5fe52d95059ab1bcd4cbb1f9e17033c897 | 3447f4a56a71a4017e55d8f46160a63f111ec373 |
python/cpython | python__cpython-116346 | # New warning on `main`: `writing 1 byte into a region of size 0 [-Wstringop-overflow=]`
# Bug report
### Bug description:
This happens in Modules/_decimal/libmpdec/io.c
Here: https://github.com/python/cpython/blob/a4722449caccc42ad644611d02fbdb5005f601eb/Modules/_decimal/libmpdec/io.c#L348-L349
<img width="635" alt="Image" src="https://github.com/user-attachments/assets/b66e774d-cc81-4a27-b435-6fd12fc911ac" />
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-116346
<!-- /gh-linked-prs -->
| 6c63afc3be9dd612f587fe7869c1f317d5ea37cc | 652f66ac386dad5992c6f7c494871907272503f8 |
python/cpython | python__cpython-129465 | # ForwardRef: remove `__hash__` and `__eq__`
# Feature or enhancement
Currently, `annotationlib.ForwardRef` has a `__hash__` and `__eq__` method, which look at a few attributes: https://github.com/python/cpython/blob/8b4a0d641cde667f94ce49f5e64da6bd9d6fbd9c/Lib/typing.py#L1096 (3.13). These look unsound already; it's possible for two ForwardRefs to compare equal but hash differently (if they have the same evaluated value but not the same `__forward_module__`). This gets worse on 3.14, where ForwardRef has gained a few more fields (https://github.com/python/cpython/blob/a4722449caccc42ad644611d02fbdb5005f601eb/Lib/annotationlib.py#L232).
I think it's better to remove the `__eq__` and `__hash__` methods from `ForwardRef` objects, and make it so two ForwardRefs are equal only if they're identical. I don't see a good use case for comparing two ForwardRefs for equality; if you want to know that they refer to the same thing, you should evaluate them and compare the type object that comes out.
This came up in agronholm/typeguard#492 where the current implementation of equality caused some grief. cc people involved with some runtime type checking tools: @agronholm @Viicos @leycec.
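An editor's sketch of the recommended pattern under the proposal: evaluate the references and compare the resulting objects instead of comparing the `ForwardRef` instances themselves:

```python
from typing import ForwardRef

a = ForwardRef("int")
b = ForwardRef("int")

# Under the proposal, distinct ForwardRef instances would only be equal if
# identical; to ask "do these refer to the same thing?", evaluate them and
# compare the results.  eval() is a crude stand-in here; real code should
# use the proper evaluation APIs.
resolved_a = eval(a.__forward_arg__)
resolved_b = eval(b.__forward_arg__)
assert resolved_a is resolved_b is int
```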
<!-- gh-linked-prs -->
### Linked PRs
* gh-129465
* gh-132283
<!-- /gh-linked-prs -->
| ac14d4a23f58c1f3e1909753aa17c1c302ea771d | 231a50fa9a5444836c94cda50a56ecf080c7d3b0 |
python/cpython | python__cpython-129450 | # Python 3.13.1-jit : issue when compiling from source
Hi all,
I am trying to build Python 3.13.1 from source (grabbed here: https://www.python.org/ftp/python/3.13.1/Python-3.13.1.tgz) on an Ubuntu 22.04 instance, configured to disable the GIL and enable the JIT:
`./configure --disable-gil --enable-experimental-jit --enable-optimizations --with-lto`
Requirements were installed according to https://devguide.python.org/getting-started/setup-building/index.html#install-dependencies
The make step fails with the error:
```
gcc -c -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O3 -Wall -D_Py_TIER2=1 -D_Py_JIT -fno-semantic-interposition -flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none -g -std=c11 -Wextra -Wno-unused-parameter
-Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -fprofile-generate -I./Include/internal -I./Include/internal/mimalloc -I. -I./Include -DPy_BUILD_CORE -o Python
/mystrtoul.o Python/mystrtoul.c
Traceback (most recent call last):
File "/tmp/Python-3.13.1/./Tools/jit/build.py", line 8, in <module>
import _targets
File "/tmp/Python-3.13.1/Tools/jit/_targets.py", line 15, in <module>
import _schema
File "/tmp/Python-3.13.1/Tools/jit/_schema.py", line 60, in <module>
class MachORelocation(typing.TypedDict):
File "/tmp/Python-3.13.1/Tools/jit/_schema.py", line 64, in MachORelocation
Section: typing.NotRequired[dict[typing.Literal["Value"], str]]
AttributeError: module 'typing' has no attribute 'NotRequired'
```
I've been looking around and was not able to find any reference to this kind of issue. What am I doing wrong?
Thank you so much for any help.
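One data point that may explain the traceback (an editor's observation, not a confirmed diagnosis): `typing.NotRequired` only exists since Python 3.11, so the JIT build scripts fail when run by an older host interpreter:

```python
import sys
import typing

# typing.NotRequired was added in Python 3.11 (PEP 655); the AttributeError
# above is what an older interpreter produces when the attribute is missing.
if sys.version_info >= (3, 11):
    assert hasattr(typing, "NotRequired")
else:
    assert not hasattr(typing, "NotRequired")
```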
<!-- gh-linked-prs -->
### Linked PRs
* gh-129450
* gh-129472
<!-- /gh-linked-prs -->
| 652f66ac386dad5992c6f7c494871907272503f8 | 9bc8c5fc0c8557b831bf47e6fe6ff85678cc8c0c |
python/cpython | python__cpython-129413 | # CSV write has SEGV when trying to write data 2GB or larger
# Crash report
### What happened?
```python
import csv
bad_size = 2 * 1024 * 1024 * 1024 + 1
val = 'x' * bad_size
print("Total size of data {}".format(len(val)))
for size in [2147483647, 2147483648, 2147483649]:
data = val[0:size]
print("Trying to write data of size {}".format(len(data)))
with open('dump.csv', 'w', newline='') as csvfile:
spamwriter = csv.writer(csvfile, delimiter=',',
quotechar='|', quoting=csv.QUOTE_MINIMAL)
spamwriter.writerow([data])
```
```
python dump2.py
Total size of data 2147483649
Trying to write data of size 2147483647
Trying to write data of size 2147483648
Segmentation fault (core dumped)
```
This happens with both 3.10 and 3.12
When I reproduce this with python-dbg inside of gdb I see the following:
```
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140737352495104) at ./nptl/pthread_kill.c:44
#1 __pthread_kill_internal (signo=6, threadid=140737352495104) at ./nptl/pthread_kill.c:78
#2 __GI___pthread_kill (threadid=140737352495104, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3 0x00007ffff7c42476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4 0x00007ffff7c287f3 in __GI_abort () at ./stdlib/abort.c:79
#5 0x00007ffff7c2871b in __assert_fail_base (fmt=0x7ffff7ddd130 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
assertion=0x814bed "index >= 0", file=0x7ee008 "../Include/cpython/unicodeobject.h", line=318,
function=<optimized out>) at ./assert/assert.c:92
#6 0x00007ffff7c39e96 in __GI___assert_fail (assertion=assertion@entry=0x814bed "index >= 0",
file=file@entry=0x7ee008 "../Include/cpython/unicodeobject.h", line=line@entry=318,
function=function@entry=0x97ae28 <__PRETTY_FUNCTION__.4.lto_priv.56> "PyUnicode_READ") at ./assert/assert.c:101
#7 0x00000000006d46c3 in PyUnicode_READ (index=-2147483648, data=0x7ffe772fd058, kind=1)
at ../Include/cpython/unicodeobject.h:318
#8 join_append_data (self=self@entry=0x7ffff74a0050, field_kind=field_kind@entry=1,
field_data=field_data@entry=0x7ffe772fd058, field_len=field_len@entry=2147483648, quoted=quoted@entry=0x7fffffffd0ec,
copy_phase=copy_phase@entry=0) at ../Modules/_csv.c:1108
#9 0x00000000006d49ea in join_append (self=self@entry=0x7ffff74a0050,
field=field@entry='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', quoted=<optimized out>, quoted@entry=0) at ../Modules/_csv.c:1213
#10 0x00000000006d4c9a in csv_writerow (self=self@entry=0x7ffff74a0050,
seq=seq@entry=['xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx']) at ../Modules/_csv.c:1303
#11 0x000000000062b002 in _PyEval_EvalFrameDefault (tstate=0xcd8d80 <_PyRuntime+475008>, frame=0x7ffff7fb0020, throwflag=0)
at Python/bytecodes.c:3094
```
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.12.8 (main, Dec 4 2024, 08:54:12) [GCC 11.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-129413
* gh-129436
* gh-129437
<!-- /gh-linked-prs -->
| 97b0ef05d987ebef354512b516a246feb411e815 | 4815131910cec72805ad2966e7af1e2eba49fe51 |
python/cpython | python__cpython-129411 | # `Lib/http/__init__.py` mentions RFF instead of RFC
# Documentation
Just a typo in 3.13, where http mentions `RFF` instead of `RFC`
<!-- gh-linked-prs -->
### Linked PRs
* gh-129411
* gh-129414
<!-- /gh-linked-prs -->
| 7dd0a7e52ee832559b89d5ccba732c8e91260df8 | 180ee43bde99b8ce4c4f1d5237ab191e26118061 |
python/cpython | python__cpython-129410 | # The `SystemError` description is misleading
# Documentation
The docs for [SystemError](https://docs.python.org/3/library/exceptions.html#SystemError) currently say:
> Raised when the interpreter finds an internal error, but the situation does not look so serious to cause it to abandon all hope. The associated value is a string indicating what went wrong (in low-level terms).
>
> You should report this to the author or maintainer of your Python interpreter. Be sure to report the version of the Python interpreter (sys.version; it is also printed at the start of an interactive Python session), the exact error message (the exception’s associated value) and if possible the source of the program that triggered the error.
A `SystemError` isn't always an interpreter problem, at least in CPython's case. A `SystemError` can be caused relatively easily through misuse of the C API, so I don't think it's a good idea to point people to the issue tracker if they ever see a `SystemError`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-129410
* gh-129610
* gh-129611
<!-- /gh-linked-prs -->
| 39b754a35976924f6df46cd475e889bcf8598ca1 | 632ca568219f86679661bc288f46fa5838102ede |
python/cpython | python__cpython-129406 | # Documented `Py_mod_multiple_interpreters` default is incorrect
Docs added in #107306 say the default for `Py_mod_multiple_interpreters` is `Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED`, but the actual default is `Py_MOD_MULTIPLE_INTERPRETERS_SUPPORTED` (preserving pre-3.12 behaviour):
https://github.com/python/cpython/blob/180ee43bde99b8ce4c4f1d5237ab191e26118061/Objects/moduleobject.c#L351-L354
cc @ericsnowcurrently
<!-- gh-linked-prs -->
### Linked PRs
* gh-129406
* gh-130507
* gh-130510
<!-- /gh-linked-prs -->
| fc8d2cba541f378df0a439412665f3dbe0b9ae3c | 805839021ba7074423811ba07995ae57984a46d3 |
python/cpython | python__cpython-129419 | # Incorrect exception message in `Barrier.__init__`
# Bug report
### Bug description:
```python
>>> import threading
>>> x = threading.Barrier(parties=0.5)
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
x=Barrier(parties=0.5)
ValueError: parties must be > 0
```
As you can see, 0.5 is greater than 0, yet the error still claims "parties must be > 0": the actual check is `parties < 1`, so the message is misleading.
One way to address this is a change along these lines:
```python
# Lib/threading.py
class Barrier:
    """Implements a Barrier.

    Useful for synchronizing a fixed number of threads at known synchronization
    points. Threads block on 'wait()' and are simultaneously awoken once they
    have all made that call.
    """

    def __init__(self, parties, action=None, timeout=None):
        """Create a barrier, initialised to 'parties' threads.

        'action' is a callable which, when supplied, will be called by one of
        the threads after they have all entered the barrier and just prior to
        releasing them all. If a 'timeout' is provided, it is used as the
        default for all subsequent 'wait()' calls.
        """
        #if not isinstance(parties, int):
        #    raise TypeError("parties must be an integer")
        if parties < 1:
            raise ValueError("parties must be > 0")
        self._cond = Condition(Lock())
        self._action = action
        self._timeout = timeout
        self._parties = parties
        self._state = 0  # 0 filling, 1 draining, -1 resetting, -2 broken
        self._count = 0
```
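For illustration, a runnable sketch of the suggested stricter validation (the helper name `check_parties` and the exact messages are hypothetical; the fix adopted in the linked PRs may differ):

```python
def check_parties(parties):
    # Hypothetical validation: reject non-integers explicitly, and
    # make the message match the actual `parties < 1` check.
    if not isinstance(parties, int):
        raise TypeError("parties must be an integer")
    if parties < 1:
        raise ValueError("parties must be >= 1")
    return parties

for bad in (0.5, 0):
    try:
        check_parties(bad)
    except (TypeError, ValueError) as e:
        print(type(e).__name__, e)
# TypeError parties must be an integer
# ValueError parties must be >= 1
```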
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-129419
* gh-129468
* gh-129469
<!-- /gh-linked-prs -->
| bcb25d60b1baf9348e73cbd2359342cea6009c36 | a4722449caccc42ad644611d02fbdb5005f601eb |
python/cpython | python__cpython-129959 | # `test_repr_rlock` failing randomly on main branch CI
```console
test_repr_rlock (test.test_multiprocessing_spawn.test_processes.WithProcessesTestLock.test_repr_rlock) ... FAIL
======================================================================
FAIL: test_repr_rlock (test.test_multiprocessing_spawn.test_processes.WithProcessesTestLock.test_repr_rlock)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/runner/work/cpython/cpython/Lib/test/_test_multiprocessing.py", line 1530, in test_repr_rlock
self.assertEqual('<RLock(SomeOtherThread, nonzero)>', repr(lock))
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: '<RLock(SomeOtherThread, nonzero)>' != '<RLock(None, 0)>'
- <RLock(SomeOtherThread, nonzero)>
+ <RLock(None, 0)>
```
https://github.com/python/cpython/actions/runs/13011159029/job/36289021024?pr=129399
<!-- gh-linked-prs -->
### Linked PRs
* gh-129959
<!-- /gh-linked-prs -->
| a98a6bd1128663fbe58c0c73d468710245a57ad6 | 5326c27fc674713879747e61a8deca7a66e69754 |
python/cpython | python__cpython-129394 | # Make 'sys.platform' return "freebsd" only on FreeBSD without major version
# Feature or enhancement
### Proposal:
When using `sys.platform`, and especially in `requirements.txt`/`pyproject.toml` environment markers, one cannot constrain dependencies to FreeBSD as a whole, but has to deal with the suffixed major version (e.g. `freebsd14`). The change should follow suit with all other cases and contain only the platform name, without the major version.
Locally patched:
```
osipovmi@deblndw011x:~/var/Projekte/cpython (main *+>)
$ /tmp/python-3.13/bin/python3
Python 3.14.0a4+ (heads/main-dirty:8e57877e3f4, Jan 28 2025, 10:01:48) [Clang 19.1.5 (https://github.com/llvm/llvm-project.git llvmorg-19.1.5-0-gab4b5a on freebsd
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.platform
'freebsd'
>>>
```
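Until then, a sketch of the usual portable check, which works both with today's versioned value (e.g. `freebsd14`) and with the proposed bare `freebsd` (`on_freebsd` is a hypothetical helper name):

```python
import sys

def on_freebsd(platform: str = sys.platform) -> bool:
    # startswith() matches both "freebsd" and "freebsd14".
    return platform.startswith("freebsd")

print(on_freebsd("freebsd14"))  # True
print(on_freebsd("freebsd"))    # True
print(on_freebsd("linux"))      # False
```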
A PR will follow soon.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-129394
<!-- /gh-linked-prs -->
| e3eba8ce266f90d9f8faeb5b2b4b64e56110bd2a | 8df5193d37f70a1478642c4b456dcc7d6df6c117 |
python/cpython | python__cpython-129844 | # bytes.fromhex() should parse a bytes
# Feature or enhancement
### Proposal:
`bytes.fromhex()` should accept a `bytes`:
```python
>>> bytes.fromhex(b'8a3218def90a84cb4373beed87d9ba1ccc7d90d1')
b'\x8a2\x18\xde\xf9\n\x84\xcbCs\xbe\xed\x87\xd9\xba\x1c\xcc}\x90\xd1'
```
### Background:
`bytes.fromhex()` accepts a `str`:
```python
>>> bytes.fromhex('8a3218def90a84cb4373beed87d9ba1ccc7d90d1')
b'\x8a2\x18\xde\xf9\n\x84\xcbCs\xbe\xed\x87\xd9\xba\x1c\xcc}\x90\xd1'
```
However, it refuses to parse a byte string:
```python
>>> bytes.fromhex(b'8a3218def90a84cb4373beed87d9ba1ccc7d90d1')
Traceback (most recent call last):
File "<python-input-0>", line 1, in <module>
bytes.fromhex(b'8a3218def90a84cb4373beed87d9ba1ccc7d90d1')
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: fromhex() argument must be str, not bytes
```
This requires an extra `.decode()`, which is rather wasteful given that the `str` is not of any real use.
This came up for me in parsing the output of `git cat-file --batch`, which must be a binary stream because it contains bytes, but includes header lines like
```
8a3218def90a84cb4373beed87d9ba1ccc7d90d1 100644 1394
```
The integers are parseable directly from bytes:
```python
>>> int(b'100644', 8)
33188
```
so it seems like an omission that the SHAs are not.
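Until `bytes.fromhex()` accepts bytes, the workaround is the extra `.decode()` mentioned above; a self-contained sketch parsing a header line of the shape shown (sample values taken from the example above):

```python
line = b"8a3218def90a84cb4373beed87d9ba1ccc7d90d1 100644 1394"
sha_hex, mode, size = line.split()

# The extra decode() that this proposal would make unnecessary:
sha = bytes.fromhex(sha_hex.decode("ascii"))
print(len(sha))       # 20
print(int(mode, 8))   # 33188
print(int(size))      # 1394
```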
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-129844
<!-- /gh-linked-prs -->
| e0637cebe5bf863897f2e89dfcb76be0015c1877 | 405a2d74cbdef5a899c900b6897ec85fe465abd2 |
python/cpython | python__cpython-129347 | # No NULL check in sqlite `connection.c`
# Bug report
### Bug description:
https://github.com/python/cpython/blob/7ec17429d462aee071c067e3b84c8a7e4fcf7263/Modules/_sqlite/connection.c#L960
In this case we don'y check it against NULL, and can dereference in next line:
https://github.com/python/cpython/blob/7ec17429d462aee071c067e3b84c8a7e4fcf7263/Modules/_sqlite/connection.c#L961
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-129347
* gh-129372
* gh-129373
<!-- /gh-linked-prs -->
| 379ab856f59423c570333403a7d5d72f3ea82d52 | a6a8c6f86e811f9fcdb577bc1d9b85fbf86c8267 |
python/cpython | python__cpython-129348 | # syslogmodule.c lack of NULL check
### Bug description:
https://github.com/python/cpython/blob/7ec17429d462aee071c067e3b84c8a7e4fcf7263/Modules/syslogmodule.c#L178
Afterwards, inside the `if` body, `Py_DECREF(indent)` is called, which can cause a NULL pointer dereference.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
### Additional information
Found by Linux Verification Center (linuxtesting.org) with SVACE.
Reporter: Burkov Egor ([eburkov@rvision.ru](mailto:eburkov@rvision.ru)).
Organization: R-Vision ([support@rvision.ru](mailto:support@rvision.ru)).
<!-- gh-linked-prs -->
### Linked PRs
* gh-129348
* gh-129442
* gh-129443
<!-- /gh-linked-prs -->
| 25cf79a0829422bd8479ca0c13c72b769422077b | c67afb581eccb3ce20a4965c8f407fd2662b6bdf |
python/cpython | python__cpython-129361 | # docs of deprecated C function should propose replacement in C, not in Python
# Documentation
In the 3.13 Python/C API Reference Manual, page "Initialization, Finalization, and Threads", `Py_GetProgramName()` and several other C functions are declared deprecated. To replace them, it is suggested "Get sys.executable instead".
As the maintainer of a C program, I do not find this helpful. I would need a drop-in replacement in C, not a reference to a Python variable.
<!-- gh-linked-prs -->
### Linked PRs
* gh-129361
<!-- /gh-linked-prs -->
| 632ca568219f86679661bc288f46fa5838102ede | a33dcb9e431c463c20ecdc02a206ddf0b7388687 |
python/cpython | python__cpython-129320 | # pyatomic include statement shouldn't use "cpython/*" but just "*"
# Bug report
### Bug description:
I'm looking at (and using) a build generated from:
```sh
git clone --depth 1 https://github.com/python/cpython.git --branch v3.13.1
pushd cpython
./configure --enable-optimizations --prefix=$(pwd)/install
make -j4
make install
popd
```
in the file: cpython/install/include/python3.13/cpython/pyatomic.h
there is the statement
```c
#if _Py_USE_GCC_BUILTIN_ATOMICS
# define Py_ATOMIC_GCC_H
# include "cpython/pyatomic_gcc.h"
# undef Py_ATOMIC_GCC_H
#elif __STDC_VERSION__ >= 201112L && !defined(__STDC_NO_ATOMICS__)
# define Py_ATOMIC_STD_H
# include "cpython/pyatomic_std.h"
# undef Py_ATOMIC_STD_H
#elif defined(_MSC_VER)
# define Py_ATOMIC_MSC_H
# include "cpython/pyatomic_msc.h"
# undef Py_ATOMIC_MSC_H
#else
# error "no available pyatomic implementation for this platform/compiler"
#endif
```
These includes use quotation marks, which mean "include the named file relative to the including file".
But there is no file "cpython/pyatomic_gcc.h" relative to cpython/pyatomic.h.
I can of course compensate by running `ln -s . cpython` in that cpython directory.
The real fix is to change "cpython/pyatomic_gcc.h" to "pyatomic_gcc.h" (and likewise for the other two includes), since quoted includes are resolved relative to the including file.
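A sketch of the corrected include block (the same code as above, with only the `cpython/` prefixes dropped):

```c
#if _Py_USE_GCC_BUILTIN_ATOMICS
#  define Py_ATOMIC_GCC_H
#  include "pyatomic_gcc.h"
#  undef Py_ATOMIC_GCC_H
#elif __STDC_VERSION__ >= 201112L && !defined(__STDC_NO_ATOMICS__)
#  define Py_ATOMIC_STD_H
#  include "pyatomic_std.h"
#  undef Py_ATOMIC_STD_H
#elif defined(_MSC_VER)
#  define Py_ATOMIC_MSC_H
#  include "pyatomic_msc.h"
#  undef Py_ATOMIC_MSC_H
#else
#  error "no available pyatomic implementation for this platform/compiler"
#endif
```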
### CPython versions tested on:
3.13
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-129320
* gh-130667
* gh-130668
<!-- /gh-linked-prs -->
| 3a974e39d54902699f360bc4db2fd351a6baf3ef | 25cf79a0829422bd8479ca0c13c72b769422077b |
python/cpython | python__cpython-129293 | # Support for Bluetooth LE L2CAP Connection-oriented channels through AF_BLUETOOTH/BTPROTO_L2CAP socket
# Feature or enhancement
### Proposal:
As has been discussed previously (69242, 70145), "new" fields have been added in `sockaddr_l2`, but are still not supported in CPython's `socket` implementation.
* The lack of `l2_bdaddr_type` prevents opening L2CAP connection-oriented channels to Bluetooth LE devices.
* Likewise, the missing `l2_cid` prevents raw access to e.g ATT.
My suggestion is to add support for a third and fourth optional element in the address tuple for `l2_cid` and `l2_bdaddr_type` respectively.
* Only one of `psm` and `cid` can be non-zero. This is enforced by the kernel.
* `l2_bdaddr_type` can take the value of `BDADDR_LE_PUBLIC` or `BDADDR_LE_RANDOM`. `BDADDR_BREDR` is implied if the element is missing, preserving compatibility with existing code.
I.e for LE CoC connection to PSM `0x80`
```python
sock = socket(AF_BLUETOOTH, SOCK_SEQPACKET, BTPROTO_L2CAP)
sock.connect((bdaddr, 0x80, 0, BDADDR_LE_RANDOM))
```
For a raw LE ATT connection:
```python
CID_ATT = 0x04
sock = socket(AF_BLUETOOTH, SOCK_SEQPACKET, BTPROTO_L2CAP)
sock.bind((BDADDR_ANY, 0, CID_ATT, BDADDR_LE_RANDOM))
sock.connect((bdaddr, 0, CID_ATT, BDADDR_LE_RANDOM))
```
while keeping the existing format for classic BR/EDR L2CAP.
```python
sock.connect((bdaddr, 0x80))
```
I have a working implementation of this locally (tested on Linux 6.11) and will file a PR for review as soon as I've got this issue number assigned.
This is my first attempt at contributing to CPython, so I apologize in advance for any mistakes in the process.
Edit 2025-01-26: Added CID field to proposal. ~PR to be updated.~
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
https://github.com/python/cpython/issues/69242
https://github.com/python/cpython/issues/70145
<!-- gh-linked-prs -->
### Linked PRs
* gh-129293
<!-- /gh-linked-prs -->
| 45a24f54af4a65c14cc15fc13d3258726e2fe73b | a083633fa046386b8cdaae0c87fef25289dde9a1 |
python/cpython | python__cpython-129272 | # Refleaks in multiple buildbots for test_capi, test_embed, test_import, test_interpreters, test_json and more
# Bug report
### Bug description:
See for example:
https://buildbot.python.org/all/#/builders/259/builds/2043
```
10 slowest tests:
- test.test_multiprocessing_spawn.test_processes: 5 min 50 sec
- test.test_concurrent_futures.test_wait: 5 min 3 sec
- test_fstring: 4 min 53 sec
- test.test_multiprocessing_forkserver.test_processes: 4 min 46 sec
- test_socket: 4 min 40 sec
- test.test_multiprocessing_spawn.test_misc: 3 min 34 sec
- test_io: 3 min 33 sec
- test_regrtest: 3 min 26 sec
- test_signal: 3 min 5 sec
- test_subprocess: 3 min 1 sec
16 tests skipped:
test.test_asyncio.test_windows_events
test.test_asyncio.test_windows_utils test_android test_apple
test_devpoll test_free_threading test_ioctl test_kqueue
test_launcher test_msvcrt test_startfile test_winapi
test_winconsoleio test_winreg test_winsound test_wmi
4 tests skipped (resource denied):
test_peg_generator test_tkinter test_ttk test_zipfile64
8 re-run tests:
test_capi test_embed test_import test_interpreters test_json
test_logging test_sysconfig test_zoneinfo
8 tests failed:
test_capi test_embed test_import test_interpreters test_json
test_logging test_sysconfig test_zoneinfo
456 tests OK.
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-129272
<!-- /gh-linked-prs -->
| 7a54a653b718a70c96755f6fc39f01f5c582558a | e119526edface001ad7d7f70249a123c8a122d71 |
python/cpython | python__cpython-130133 | # `test_coverage_ignore` in `test_trace` fails if setuptools is installed
# Bug report
### Bug description:
The `test_coverage_ignore` test fails if setuptools is installed:
```pytb
======================================================================
FAIL: test_coverage_ignore (test.test_trace.TestCoverage.test_coverage_ignore)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/raid/sgross/cpython/Lib/test/test_trace.py", line 398, in test_coverage_ignore
self.assertEqual(files, ['_importlib.cover']) # Ignore __import__
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Lists differ: ['_distutils_hack.__init__.cover', 're.__ini[36 chars]ver'] != ['_importlib.cover']
First differing element 0:
'_distutils_hack.__init__.cover'
'_importlib.cover'
First list contains 2 additional elements.
First extra element 1:
're.__init__.cover'
+ ['_importlib.cover']
- ['_distutils_hack.__init__.cover',
- 're.__init__.cover',
- 'collections.__init__.cover']
----------------------------------------------------------------------
```
You can see this with an installed Python:
```
python3 -m test test_trace -v
```
Or from the cpython build without installing CPython:
```
...
make -j
./python -m ensurepip
./python -m pip install setuptools
./python -m test test_trace -v
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-130133
* gh-130357
* gh-130358
<!-- /gh-linked-prs -->
| 35925e952911aba97bfdaee85b09d734ceac4fea | 8cbcf51d614815df3ab7ea557f04e6b4b386968e |
python/cpython | python__cpython-129252 | # iOS buildbot failure comments don't contain failure details anymore
Specifically, they don't contain:
* The names of the failed tests
* The tracebacks
It looks like this changed in December 2024, probably because the test script was updated to add a prefix to every log line. The relevant regexes are [here](https://github.com/python/buildmaster-config/blob/main/master/custom/testsuite_utils.py).
Before:
* https://github.com/python/cpython/pull/122992#issuecomment-2287532747
* https://github.com/python/cpython/pull/124646#issuecomment-2378299107
After:
* https://github.com/python/cpython/pull/128925#issuecomment-2598703448
* https://github.com/python/cpython/pull/127412#issuecomment-2546494964
@freakboy3742: FYI
<!-- gh-linked-prs -->
### Linked PRs
* gh-129252
* gh-129283
<!-- /gh-linked-prs -->
| a58083811a51764c47fbb7cbd67e87e1124d53e8 | d40692db06cdae89447c26b6c1b5d2a682c0974f |
python/cpython | python__cpython-129263 | # Remove #pragma that disables optimization on _PyEval_EvalFrameDefault on MSVC
# Feature or enhancement
### Proposal:
The bug that caused a compiler crash has now been fixed upstream. We should remove our workaround to disable optimization on PGO builds on MSVC (and presumably get a modest speedup).
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-129263
* gh-130011
<!-- /gh-linked-prs -->
| 9e52e553f4a906c120f807e940891f7325011b67 | 9682a88683367f79c5b626b2ad809d569e37f602 |
python/cpython | python__cpython-129240 | # Use `_PyInterpreterFrame.stackpointer` in free threaded GC when traversing threads' stacks
# Feature or enhancement
The free threading GC traverses the active frames of threads in two places:
* When `GC_MARK_ALIVE_STACKS` during the marking phase (see https://github.com/python/cpython/issues/128807)
* During `deduce_unreachable_heap` to ensure that deferred reference counted objects are kept alive.
Previously, the frame's stackpointer was frequently not set during GC. To avoid missing some deferred references, we looked at the frame up to the maximum size (`co->co_stacksize`). That behavior isn't great:
* We can only look at deferred references, because non-deferred references may be dead objects reclaimed by normal reference counting operations.
* We need to [zero-initialize](https://github.com/python/cpython/blob/c05a851ac59e6fb7bd433677b9c116fb8336a8b1/Include/internal/pycore_frame.h#L163-L168) the entire stack when pushing a frame
Now that the frame's stackpointer is nearly always set during GC, we should only look at the frame up to `f->stackpointer`. There are still some cases where the stackpointer isn't set currently; we'll (temporarily) need some fallback for those cases. Once the stackpointer is *always* valid during GC (ideally before the 3.14 release), we can remove the fallback and assert that `f->stackpointer` is not `NULL`.
cc @nascheme
<!-- gh-linked-prs -->
### Linked PRs
* gh-129240
<!-- /gh-linked-prs -->
| 5ff2fbc026b14eadd41b8c14d83bb1eb832292dd | a29221675e7367608961c3484701ab2671ec6f3c |
python/cpython | python__cpython-129232 | # JIT: lay out memory to group executable code together
When allocating memory to host JIT traces, we have the following layout:
* code
* data
* trampolines
* padding
We should group executable code together to improve icache locality. The new layout should be
* code
* trampolines
* data
* padding
<!-- gh-linked-prs -->
### Linked PRs
* gh-129232
<!-- /gh-linked-prs -->
| a29a9c0f3890fec843b7151f6a1defa25f570504 | 567394517a10c9a9f3af25a31009589ae2c50f1b |
python/cpython | python__cpython-129225 | # The compiler may optimise away globals with debug offsets
# Bug report
### Bug description:
When compiling with LTO and other aggressive optimization levels, it is possible that the compiler will optimise away globals containing debug offsets. This is currently happening in https://buildbot.python.org/#/builders/125/builds/6895 for the new AsyncioDebug section, making the test fail because the information is not burned into the binary.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-129225
* gh-129262
<!-- /gh-linked-prs -->
| 3a3a6b86f4069a5a3561c65692937eb798053ae5 | e635bf2e49797ecb976ce45a67fce2201a25ca68 |
python/cpython | python__cpython-129211 | # Add os.readinto API for reading data into a caller provided buffer
# Feature or enhancement
### Proposal:
Code reading data in pure Python tends to make a buffer variable, call `os.read()` (which returns a separate, newly allocated buffer of data), then copy/append that data onto the pre-allocated buffer [0]. That creates unnecessary extra buffer objects, as well as unnecessary copies. Provide `os.readinto` for directly filling a [Buffer Protocol](https://docs.python.org/3/c-api/buffer.html) object.
`os.readinto` should closely mirror [`_Py_read`](https://github.com/python/cpython/blob/298dda57709c45cbcb44831e0d682dc071af5293/Python/fileutils.c#L1857-L1927) which underlies os.read in order to get the same behaviors around retries as well as well-tested cross-platform support.
Move simple cases that use os.read (ex. [0]) to use the new API when it makes code simpler and more efficient. Potentially adding `readinto` to more readable/writeable file-like proxy objects or objects which transform the data (ex. [`Lib/_compression`](https://github.com/python/cpython/blob/298dda57709c45cbcb44831e0d682dc071af5293/Lib/_compression.py#L66-L70)) is out of scope for this issue.
[0]
https://github.com/python/cpython/blob/298dda57709c45cbcb44831e0d682dc071af5293/Lib/subprocess.py#L1914-L1921
https://github.com/python/cpython/blob/298dda57709c45cbcb44831e0d682dc071af5293/Lib/multiprocessing/forkserver.py#L384-L392
https://github.com/python/cpython/blob/298dda57709c45cbcb44831e0d682dc071af5293/Lib/_pyio.py#L1695-L1701
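For illustration, a minimal runnable version of the allocate-and-copy idiom those call sites share, which a `readinto()`-style API would let them avoid (`read_exact` is a hypothetical name):

```python
import os

def read_exact(fd, length):
    # Each os.read() call allocates a fresh bytes object, which is
    # then copied onto the accumulating buffer -- the overhead that
    # os.readinto() is proposed to eliminate.
    data = b""
    while len(data) < length:
        chunk = os.read(fd, length - len(data))
        if not chunk:
            raise EOFError("unexpected EOF")
        data += chunk
    return data

r, w = os.pipe()
os.write(w, b"hello world")
os.close(w)
print(read_exact(r, 11))  # b'hello world'
os.close(r)
```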
# `os.read` loops to migrate
## Well contained `os.read` loops
- [x] [`multiprocessing.forkserver read_signed`](https://github.com/python/cpython/blob/9abbb58e3f023555473d9e8b82738ef44077cfa8/Lib/multiprocessing/forkserver.py#L384-L392) - @cmaloney - https://github.com/python/cpython/pull/129425
- ~~[x] [`subprocess Popen._execute_child`](https://github.com/python/cpython/blob/9abbb58e3f023555473d9e8b82738ef44077cfa8/Lib/subprocess.py#L1916-L1921) - @cmaloney - https://github.com/python/cpython/pull/129498~~
## `os.read` loop interleaved with other code
- [ ] [`_pyio FileIO.read FileIO.readall FileIO.readinto`](https://github.com/python/cpython/blob/9abbb58e3f023555473d9e8b82738ef44077cfa8/Lib/_pyio.py#L1636-L1701) see, gh-129005 -- @cmaloney
- [ ] [`_pyrepl.unix_console UnixConsole.input_buffer`](https://github.com/python/cpython/blob/9abbb58e3f023555473d9e8b82738ef44077cfa8/Lib/_pyrepl/unix_console.py#L202-L217) -- fixed length underlying buffer with "pos" / window on top.
- [ ] [`pty _copy`](https://github.com/python/cpython/blob/9abbb58e3f023555473d9e8b82738ef44077cfa8/Lib/pty.py#L93-L156). Operates around a "high waterlevel" / attempt to have a fixed-ish size buffer. Wraps `os.read` with a `_read` function.
- [ ] [`subprocess Popen.communicate`](https://github.com/python/cpython/blob/9abbb58e3f023555473d9e8b82738ef44077cfa8/Lib/subprocess.py#L2093-L2153). Note, this feels like something non-contiguous Py_buffer would be really good for, particularly in `self.text_mode` where currently all the bytes are "copied" into a contiguous `bytes` to turn then turn into text...
- [ ] [`tarfile _Stream._read and _Stream.__read`](https://github.com/python/cpython/blob/9abbb58e3f023555473d9e8b82738ef44077cfa8/Lib/tarfile.py#L530-L571). Note, builds _LowLevelFile around `os.read`, but other read methods also available.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
https://github.com/python/cpython/issues/129005#issuecomment-2608196581
<!-- gh-linked-prs -->
### Linked PRs
* gh-129211
* gh-129316
* gh-129425
* gh-129498
* gh-130098
<!-- /gh-linked-prs -->
| 1ed44879686c4b893d92a89d9259da3cbed6e166 | 0ef8d470b79889de065e94cecd0ee01e45037d3a |
python/cpython | python__cpython-132184 | # Doc: mention a minimal version of QEMU user emulation necessary for 3.13+?
# Documentation
I ran into an issue with running Python 3.13 under QEMU on Ubuntu 22.04.
```
>>> subprocess.run([sys.executable], check=True)
Traceback (most recent call last):
File "<python-input-10>", line 1, in <module>
subprocess.run([sys.executable], check=True)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/subprocess.py", line 577, in run
raise CalledProcessError(retcode, process.args,
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/usr/local/bin/python3']' returned non-zero exit status 127.
>>> subprocess._USE_POSIX_SPAWN = False
>>> subprocess.run([sys.executable], check=True)
Python 3.13.1 (main, Jan 21 2025, 16:37:53) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
```
Note that rc 127 is:
```c
#define SPAWN_ERROR 127
```
I traced this down to https://github.com/python/cpython/commit/2b93f5224216d10f8119373e72b5c2b3984e0af6 from https://github.com/python/cpython/pull/113118 which added support for using `posix_spawn` if `posix_spawn_file_actions_addclosefrom_np` exists.
The problem seems to be that while the libc may support `posix_spawn_file_actions_addclosefrom_np`, there needs to be equivalent support within the QEMU user emulator to support any syscalls made.
In this case, glibc uses the `close_range` syscall.
```
186898 Unknown syscall 436
```
On arm64, syscall 436 is `close_range`. (https://gpages.juszkiewicz.com.pl/syscalls-table/syscalls.html)
We'd expect:
```
27184 close_range(3,4294967295,0) = 0
```
glibc does try to fall back to `closefrom`, however that call can fail if the foreign chroot doesn't have `/proc/` mounted, which is a situation often encountered when building foreign chroots without privilege escalation.
```
279077 openat(AT_FDCWD,"/proc/self/fd/",O_RDONLY|O_DIRECTORY) = -1 errno=2 (No such file or directory)
279077 exit_group(127)
= 279077
```
`close_range` wasn't stubbed out in QEMU until 7.2 https://github.com/qemu/qemu/commit/af804f39cc, meaning anyone running Ubuntu 22.04, Debian Bullseye, or potentially other non-EOL distributions are likely to run into problems unless they:
* deploy a hand built QEMU
* pull a newer QEMU package from a newer version of their distro
* upgrade their distribution
I don't think there's a great way to determine if the interpreter is running under user emulation, so Python likely can't handle this itself and avoid the `posix_spawn` call. The knob to disable `posix_spawn`, `_USE_POSIX_SPAWN`, requires manually flipping the flag, which may not be possible when invoking scripts maintained by others (in my case, trying to install Poetry calls `ensurepip`, which uses `subprocess`). There doesn't appear to be any environment variable knob for this either, so it's not as simple as exporting a variable when you know you're running a script in an emulated environment (though I'm not sure environment variable hand-off is always guaranteed).
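For completeness, the manual workaround described above looks like this as a standalone script (note that `_USE_POSIX_SPAWN` is a private, undocumented knob and may change without notice):

```python
import subprocess
import sys

# Force subprocess back onto the fork+exec path instead of
# posix_spawn(), sidestepping the close_range syscall that QEMU
# user emulators older than 7.2 don't implement.
subprocess._USE_POSIX_SPAWN = False

result = subprocess.run(
    [sys.executable, "-c", "print('ok')"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # ok
```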
I'm wondering if there needs to be some documentation somewhere, either in `subprocess` or `os.posix_spawn` that calls out that users running Python under QEMU user emulation needs to have an emulator capable of handling this syscall, hopefully just to save them time debugging the issue.
I realize that making support guarantees under user emulation would be difficult, it's definitely not listed in the support tiers https://peps.python.org/pep-0011/ so understand if this gets closed without further discussion.
Tangentially related, `os.closerange` falls back to alternative solutions if `close_range` returns an error. The way Python has it wrapped, most cases will not use `closefrom` unless the caller specifies a rather large number for the high FD. This could cause problems for environments that don't have a mounted `/proc` since the libc implementation of `closefrom` raises an error if it can't query this directory.
@gpshead sorry to ping you but saw you looked over the original PR so figured I'd ask if you had thoughts on this?
<!-- gh-linked-prs -->
### Linked PRs
* gh-132184
* gh-132191
<!-- /gh-linked-prs -->
| 4c5dcc6d8292d5142aff8401cb9b9d18b49c6c89 | 6eaa4aeef25f77a31768d8ba5a03f614766aba95 |
python/cpython | python__cpython-129203 | # Use "prefetch" CPU instructions during the marking phase of the GC
# Feature or enhancement
### Proposal:
This change is partially inspired by a similar change made to the [OCaml GC](https://github.com/ocaml/ocaml/pull/10195). Sam wrote a [prototype implementation](https://github.com/colesbury/cpython/commits/gc-experiment/) of the idea and that seemed to show promise. Now that we have a "mark alive" phase in the free-threaded GC, it is easier to add the prefetch buffer. Doing it only for the marking phase would seem to provide most of the benefit for the minimal amount of code complexity.
It is expected that using "prefetch" will only provide a benefit when the working set of objects exceeds the size of the CPU cache. If that's not the case, the prefetch logic should not (much) hurt performance. There would be a small increase in the code complexity for traversing the object graph (to selectively use the prefetch buffer or use the stack). However, on small object graphs, the time spent in the GC is also small.
Note this change is proposed for the free-threaded version of the cyclic GC. It might be possible to use prefetching in the default build GC but the design would need to be fairly different due to the next/prev GC linked lists. A separate issue should be created if someone wants to try to implement that optimization.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-129203
<!-- /gh-linked-prs -->
| cdcacec79f7a216c3c988baa4dc31ce4e76c97ac | 5fb019fc29a90e722aff20a9522bf588351358cd |
python/cpython | python__cpython-130564 | # iOS testbed runner prone to failure when two instances start near the same time
# Bug report
### Bug description:
The iOS testbed runner starts an iOS simulator; however, the mechanism that starts the simulator doesn't return the ID of the simulator that was started. The runner needs to know the ID of the simulator that was started so that future commands can be directed at that simulator.
The current implementation of the testbed finds this ID by generating a list of known simulators, and then waiting until another entry appears. This works fine until two test runs start at nearly the same time and the runner is unable to identify which simulator *this* test run started. This results in test failures like the following:
```
note: Run script build phase 'Prepare Python Binary Modules' will be run during every build because the option to run the script phase "Based on dependency analysis" is unchecked. (in target 'iOSTestbed' from project 'iOSTestbed')
note: Run script build phase 'Install Target Specific Python Standard Library' will be run during every build because the option to run the script phase "Based on dependency analysis" is unchecked. (in target 'iOSTestbed' from project 'iOSTestbed')
Found more than one new device: {'5CAA0336-9CE1-4222-BFE3-ADA405F766DE', 'DD108383-685A-4400-BF30-013AA82C4A61'}
make: *** [testios] Error 1
program finished with exit code 2
```
A re-run of the same test job will usually fix the problem, but it shouldn't happen in the first place.
This almost never happens locally, because users don't normally start two test runs in parallel. However, it's a set of conditions that is reasonably likely on the CI machine, as it's very easy for a PR and a main CI job to start at the same time (most commonly, when you merge a PR and immediately backport to a previous release).
### CPython versions tested on:
3.13
### Operating systems tested on:
macOS, Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-130564
* gh-130657
<!-- /gh-linked-prs -->
| 9211b3dabeacb92713ac3f5f0fa43bc7cf69afd8 | 6140b0896e95ca96aa15472e14d0502966391485 |
python/cpython | python__cpython-129253 | # asyncio staggered race missing calls to asyncio.future_add_to_awaited_by() and asyncio.future_discard_from_awaited_by()
# Bug report
### Bug description:
asyncio staggered race missing calls to asyncio.future_add_to_awaited_by() and asyncio.future_discard_from_awaited_by()
A possible oversight in https://github.com/python/cpython/pull/124640/files
### CPython versions tested on:
3.14
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-129253
<!-- /gh-linked-prs -->
| fccbfc40b546630fa7ee404c0949d52ab2921a90 | a156b1942476809d9aca9899ee6465f10671c5c6 |
python/cpython | python__cpython-129193 | # `test_zipfile` fails (env changed)
# Bug report
### Bug description:
```python
eclips4@nixos ~/p/p/cpython (main)> ./python -m test -q test_zipfile
Using random seed: 315532800
0:00:00 load avg: 2.32 Run 1 test sequentially in a single process
test_zipfile failed (env changed)
== Tests result: SUCCESS ==
1 test altered the execution environment (env changed):
test_zipfile
Total duration: 19.2 sec
Total tests: run=338 skipped=5
Total test files: run=1/1
Result: SUCCESS
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-129193
<!-- /gh-linked-prs -->
| fc6d4b71eb6bb4dcef3def286302e6bec37aec9f | 67d804b494d5f9f13fff088b50ff488b3701979d |
python/cpython | python__cpython-129174 | # Refactor codecs error handlers to use `_PyUnicodeError_GetParams` and extract complex logic into separate functions
# Feature or enhancement
### Proposal:
I want to refactor the different codecs handlers in `Python/codecs.c` to use `_PyUnicodeError_GetParams`. Some codecs handlers will be refactored as part of #126004 but some others are not subject to issues (namely, the `ignore`, `namereplace`, `surrogateescape`, and `surrogatepass` handlers do not suffer from crashes, or at least I wasn't able to make them crash easily).
In addition, I also plan to split the handlers into functions instead of 2 or 3 big blocks of code handling a specific exception. For that reason, I will introduce the following helper macros:
```c
#define _PyIsUnicodeEncodeError(EXC) \
PyObject_TypeCheck(EXC, (PyTypeObject *)PyExc_UnicodeEncodeError)
#define _PyIsUnicodeDecodeError(EXC) \
PyObject_TypeCheck(EXC, (PyTypeObject *)PyExc_UnicodeDecodeError)
#define _PyIsUnicodeTranslateError(EXC) \
PyObject_TypeCheck(EXC, (PyTypeObject *)PyExc_UnicodeTranslateError)
```
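For context, the handlers named above are all reachable from Python through the normal error-handler registry; a quick demonstration of `surrogateescape` (one of the handlers reported as not crash-prone) shows its round-trip behavior:

```python
import codecs

# Each error handler under refactoring is registered by name:
handler = codecs.lookup_error("surrogateescape")

# surrogateescape maps undecodable bytes to lone surrogates and back,
# so arbitrary byte sequences round-trip losslessly:
data = b"abc\xff"
text = data.decode("ascii", "surrogateescape")
print(ascii(text))                                      # 'abc\udcff'
print(text.encode("ascii", "surrogateescape") == data)  # True
```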
For handlers that need to be fixed, I will first fix them in-place (no refactorization). Afterwards, I will refactor them and extract the relevant part of the code into functions. That way, the diff will be easier to follow (~~I've observed that it's much harder to read the diff where I did both so I will revert that part in the existing PRs~~; EDIT: actually there is no PR doing both fixes and split...).
I'm creating this issue to track the progression of the refactorization if no issue occurs.
cc @vstinner @encukou
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-129174
* gh-129135
* gh-129134
* gh-129175
* gh-129893
* gh-129894
* gh-129895
<!-- /gh-linked-prs -->
| 36bb22993307b25593237a1022790d329b7c75e0 | 732670d93b9b0c0ff8adde07418fd6f8397893ef |
python/cpython | python__cpython-129160 | # test_asyncio leaks references after commit 9d2e1ea3862e5950d48b45ac57995a206e33f38b
# Bug report
### Bug description:
Executing
```
./python -m test test_asyncio.test_subprocess test_asyncio.test_waitfor -R 3:
```
leaks references on `main`. Bisecting points to
```
9d2e1ea3862e5950d48b45ac57995a206e33f38b is the first bad commit
commit 9d2e1ea3862e5950d48b45ac57995a206e33f38b
Author: Kumar Aditya <kumaraditya@python.org>
Date: Sun Jun 23 18:38:50 2024 +0530
GH-120804: Remove `PidfdChildWatcher`, `ThreadedChildWatcher` and `AbstractChildWatcher` from asyncio APIs (#120893)
Lib/asyncio/unix_events.py | 199 ++++--------------------------
Lib/test/test_asyncio/test_events.py | 26 +---
Lib/test/test_asyncio/test_subprocess.py | 30 ++---
Lib/test/test_asyncio/test_unix_events.py | 26 ----
Lib/test/test_asyncio/utils.py | 17 ---
5 files changed, 39 insertions(+), 259 deletions(-)
bisect found first bad commit
```
Unclear why the buildbots don't catch this, but I am catching it every time I check test_asyncio for refleaks.
CC: @kumaraditya303
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-129160
* gh-129180
* gh-129181
<!-- /gh-linked-prs -->
| a9f5edbf5fb141ad172978b25483342125184ed2 | 9012fa741d55419dc77c5c191794eb93e71ae9a4 |
python/cpython | python__cpython-129321 | # ./android.py configure-host HOST fails because export CXXFLAGS is unquoted
I encountered an issue during ./android.py configure-host in a Docker container that failed the build:
**error**:
```
/bin/sh: 90: export: -I/cpython/cross-build/aarch64-linux-android/prefix/include: bad variable name
```
**workaround**: patch ./android-env.sh to fix the export
```
RUN sed -i 's/export CXXFLAGS=\$CFLAGS/export CXXFLAGS="\$CFLAGS"/' ./android-env.sh
```
Reference: [Android/android-env.sh#L90](https://github.com/python/cpython/blob/d147e5e52cdb90496ae5fe85b3263cdfa9407a28/Android/android-env.sh#L90)
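The failure is plain POSIX word-splitting: `dash` (Ubuntu's `/bin/sh`) splits the unquoted `$CFLAGS` inside `export`, so the `-I...` include flag is parsed as a second, invalid variable name. A small demonstration via `subprocess` (the paths are illustrative):

```python
import subprocess

quoted = 'CFLAGS="-O2 -I/tmp/prefix/include"; export CXXFLAGS="$CFLAGS"; echo "$CXXFLAGS"'

# The quoted form keeps the value as one word under any POSIX shell:
r = subprocess.run(["sh", "-c", quoted], capture_output=True, text=True)
print(r.stdout.strip())  # -O2 -I/tmp/prefix/include

# The unquoted form (export CXXFLAGS=$CFLAGS) fails under dash with
# "bad variable name", but happens to work under bash -- which is why
# the bug only surfaces on systems where /bin/sh is dash.
```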
<!-- gh-linked-prs -->
### Linked PRs
* gh-129321
* gh-129332
<!-- /gh-linked-prs -->
| a49225cc66680129f129d1fcf6e20afb37a1a877 | a8dc6d6d44a141a8f839deb248a02148dcfb509e |
python/cpython | python__cpython-129168 | # Missing fast path in PyLong_From*() functions for compact integers
# Feature or enhancement
### Proposal:
See e.g. [the fast path using `_PyLong_FromMedium()`](https://github.com/python/cpython/blob/5aaf41685834901e4ed0a40f4c055b92991a0bb5/Objects/longobject.c#L329-L331) in `PyLong_FromLong()`. `PyLong_FromLongLong()` is almost identical.
Maybe then implement `PyLong_FromSsize_t()`, `PyLong_FromLong()` and `PyLong_FromLongLong()` using a macro similar to [PYLONG_FROM_UINT](https://github.com/python/cpython/blob/5aaf41685834901e4ed0a40f4c055b92991a0bb5/Objects/longobject.c#L357-L379) to get rid of the repetitive code?
[PYLONG_FROM_UINT](https://github.com/python/cpython/blob/5aaf41685834901e4ed0a40f4c055b92991a0bb5/Objects/longobject.c#L357-L379) is missing the fast path for medium values, too.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
Per encouragement from @iritkatriel in https://github.com/python/cpython/pull/128927#discussion_r1924265225
<!-- gh-linked-prs -->
### Linked PRs
* gh-129168
* gh-129215
* gh-129301
* gh-131211
<!-- /gh-linked-prs -->
| ab353b31bb300d55e845476a8e265366455d93fc | 75f59bb62938fc9acac7a4ede19404e889417c6b |
python/cpython | python__cpython-129950 | # Logging Handler.close does not remove instance from shutdown list.
# Bug report
### Bug description:
The close method on my custom handler was being called during shutdown despite me having already closed it myself. The close method (according to the documentation) is supposed to remove itself from the list of handlers to be closed during shutdown. Took a look at it though, and unfortunately it doesn't actually do this as it's written.
The only thing it does is remove itself from the weak value dictionary `_handlers` which is just used to find handlers by name.
The collection that dictates what is used at shutdown (`_handlerList`) only gets the items removed if the weak reference is broken so close will be called as long as the handler hasn't been disposed of.
To make the documentation accurate, some new code needs to loop through the list and check each weak reference against the current instance, which would have the side benefit of tidying up any broken references. Alternatively, we could leave the code as-is and update the documentation to describe how it actually works; it seems to have been this way for a while and no one else seems to have issues. If it had been documented correctly, I could have worked around it from the beginning.
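A minimal sketch of the reported behavior (the handler class name is invented for illustration): a handler that counts its `close()` calls is still listed in the internal `logging._handlerList` after being closed, because `close()` only cleans up the `_handlers` name mapping.

```python
import logging

class CountingHandler(logging.Handler):
    """Toy handler that records how many times close() runs."""
    def __init__(self):
        super().__init__()
        self.close_calls = 0

    def close(self):
        self.close_calls += 1
        super().close()

h = CountingHandler()
h.close()
print(h.close_calls)  # 1

# On affected versions, the weak reference added at __init__ time is
# still present, so logging.shutdown() would call close() a second time:
still_listed = any(ref() is h for ref in logging._handlerList)
print(still_listed)
```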
### CPython versions tested on:
3.9
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-129950
* gh-129951
* gh-129952
<!-- /gh-linked-prs -->
| 7c156a63d3d5aadff0d40af73c0f622f7a0fcea5 | b8f7bddd6c6b5d2d13c97882042ce808aceca5a8 |
python/cpython | python__cpython-129142 | # Build fails with gcc 9.4.0 after d3b1bb2
# Bug report
### Bug description:
gcc-9.4.0 (from Ubuntu 20.04) fails to build latest CPython main:
```
In file included from ./Modules/_ctypes/callproc.c:87:
./Modules/_ctypes/ctypes.h:12:72: error: missing binary operator before token "("
   12 | # if USING_APPLE_OS_LIBFFI && defined(__has_builtin) && __has_builtin(__builtin_available)
      |
```
The first bad commit seems to be d3b1bb228c95. Fix incoming.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-129142
<!-- /gh-linked-prs -->
| 3d7c0e5366c941f29350897ee97469f3aa9d6eab | 05d12eecbde1ace39826320cadf8e673d709b229 |
python/cpython | python__cpython-129108 | # bytearray is not free-thread safe
# Crash report
### What happened?
The `bytearray` object is not free-thread safe. Below is a script which will elicit free-thread associated errors (segfault and incorrect values) from the various `bytearray` methods:
```python
import threading
def clear(b, a, *args): # MODIFIES!
b.wait()
try: a.clear()
except BufferError: pass
def clear2(b, a, c): # MODIFIES c!
b.wait()
try: c.clear()
except BufferError: pass
def pop1(b, a): # MODIFIES!
b.wait()
try: a.pop()
except IndexError: pass
def append1(b, a): # MODIFIES!
b.wait()
a.append(0)
def insert1(b, a): # MODIFIES!
b.wait()
a.insert(0, 0)
def extend(b, a): # MODIFIES!
c = bytearray(b'0' * 0x400000)
b.wait()
a.extend(c)
def remove(b, a): # MODIFIES!
c = ord('0')
b.wait()
try: a.remove(c)
except ValueError: pass
def reverse(b, a): # modifies inplace
b.wait()
a.reverse()
def reduce(b, a):
b.wait()
a.__reduce__()
def reduceex2(b, a):
b.wait()
a.__reduce_ex__(2)
def reduceex3(b, a):
b.wait()
c = a.__reduce_ex__(3)
# print(c)
assert not c[1] or b'\xdd' not in c[1][0]
def count0(b, a):
b.wait()
a.count(0)
def decode(b, a):
b.wait()
a.decode()
def find(b, a):
c = bytearray(b'0' * 0x40000)
b.wait()
a.find(c)
def hex(b, a):
b.wait()
a.hex('_')
def join(b, a):
b.wait()
a.join([b'1', b'2', b'3'])
def replace(b, a):
b.wait()
a.replace(b'0', b'')
def maketrans(b, a, c):
b.wait()
try: a.maketrans(a, c)
except ValueError: pass
def translate(b, a, c):
b.wait()
a.translate(c)
def copy(b, a):
b.wait()
c = a.copy()
if c: assert c[0] == 48 # '0'
def endswith(b, a):
b.wait()
assert not a.endswith(b'\xdd')
def index(b, a):
b.wait()
try: a.index(0xdd)
except ValueError: return
assert False
def lstrip(b, a):
b.wait()
assert not a.lstrip(b'0')
def partition(b, a):
b.wait()
assert not a.partition(b'\xdd')[2]
def removeprefix(b, a):
b.wait()
assert not a.removeprefix(b'0')
def removesuffix(b, a):
b.wait()
assert not a.removesuffix(b'0')
def rfind(b, a):
b.wait()
assert a.rfind(b'\xdd') == -1
def rindex(b, a):
b.wait()
try: a.rindex(b'\xdd')
except ValueError: return
assert False
def rpartition(b, a):
b.wait()
assert not a.rpartition(b'\xdd')[0]
def rsplit(b, a):
b.wait()
assert len(a.rsplit(b'\xdd')) == 1
def rstrip(b, a):
b.wait()
assert not a.rstrip(b'0')
def split(b, a):
b.wait()
assert len(a.split(b'\xdd')) == 1
def splitlines(b, a):
b.wait()
l = len(a.splitlines())
assert l > 1 or l == 0
def startswith(b, a):
b.wait()
assert not a.startswith(b'\xdd')
def strip(b, a):
b.wait()
assert not a.strip(b'0')
def repeat(b, a):
b.wait()
a * 2
def contains(b, a):
b.wait()
assert 0xdd not in a
def iconcat(b, a): # MODIFIES!
c = bytearray(b'0' * 0x400000)
b.wait()
a += c
def irepeat(b, a): # MODIFIES!
b.wait()
a *= 2
def subscript(b, a):
b.wait()
try: assert a[0] != 0xdd
except IndexError: pass
def ass_subscript(b, a): # MODIFIES!
c = bytearray(b'0' * 0x400000)
b.wait()
a[:] = c
def mod(b, a):
c = tuple(range(4096))
b.wait()
try: a % c
except TypeError: pass
def repr_(b, a):
b.wait()
repr(a)
def capitalize(b, a):
b.wait()
c = a.capitalize()
assert not c or c[0] not in (0xdd, 0xcd)
def center(b, a):
b.wait()
c = a.center(0x60000)
assert not c or c[0x20000] not in (0xdd, 0xcd)
def expandtabs(b, a):
b.wait()
c = a.expandtabs()
assert not c or c[0] not in (0xdd, 0xcd)
def ljust(b, a):
b.wait()
c = a.ljust(0x600000)
assert not c or c[0] not in (0xdd, 0xcd)
def lower(b, a):
b.wait()
c = a.lower()
assert not c or c[0] not in (0xdd, 0xcd)
def rjust(b, a):
b.wait()
c = a.rjust(0x600000)
assert not c or c[-1] not in (0xdd, 0xcd)
def swapcase(b, a):
b.wait()
c = a.swapcase()
assert not c or c[-1] not in (0xdd, 0xcd)
def title(b, a):
b.wait()
c = a.title()
assert not c or c[-1] not in (0xdd, 0xcd)
def upper(b, a):
b.wait()
c = a.upper()
assert not c or c[-1] not in (0xdd, 0xcd)
def zfill(b, a):
b.wait()
c = a.zfill(0x400000)
assert not c or c[-1] not in (0xdd, 0xcd)
def iter_next(b, a, it):
b.wait()
list(it)
def iter_reduce(b, a, it):
b.wait()
c = it.__reduce__()
assert not c[1] or b'\xdd' not in c[1][0]
def check(funcs, a=None, *args):
if a is None:
a = bytearray(b'0' * 0x400000)
barrier = threading.Barrier(len(funcs))
thrds = []
for func in funcs:
thrd = threading.Thread(target=func, args=(barrier, a, *args))
thrds.append(thrd)
thrd.start()
for thrd in thrds:
thrd.join()
if __name__ == "__main__":
while True:
# hard errors
check([clear] + [reduce] * 10)
check([clear] + [reduceex2] * 10)
check([clear] + [append1] * 10)
check([clear] * 10)
check([clear] + [count0] * 10)
check([clear] + [decode] * 10)
check([clear] + [extend] * 10)
check([clear] + [find] * 10)
check([clear] + [hex] * 10)
check([clear] + [insert1] * 10)
check([clear] + [join] * 10)
check([clear] + [pop1] * 10)
check([clear] + [remove] * 10)
check([clear] + [replace] * 10)
check([clear] + [reverse] * 10)
check([clear, clear2] + [maketrans] * 10, bytearray(range(128)), bytearray(range(128)))
check([clear] + [translate] * 10, None, bytearray.maketrans(bytearray(range(128)), bytearray(range(128))))
check([clear] + [repeat] * 10)
check([clear] + [iconcat] * 10)
check([clear] + [irepeat] * 10)
check([clear] + [ass_subscript] * 10)
check([clear] + [repr_] * 10)
check([clear] + [iter_next] * 10, b := bytearray(b'0' * 0x400), iter(b))
# value errors
check([clear] + [reduceex3] * 10, bytearray(b'a' * 0x40000))
check([clear] + [copy] * 10)
check([clear] + [endswith] * 10)
check([clear] + [index] * 10)
check([clear] + [lstrip] * 10)
check([clear] + [partition] * 10)
check([clear] + [removeprefix] * 10, bytearray(b'0'))
check([clear] + [removesuffix] * 10, bytearray(b'0'))
check([clear] + [rfind] * 10)
check([clear] + [rindex] * 10)
check([clear] + [rpartition] * 10)
check([clear] + [rsplit] * 10, bytearray(b'0' * 0x4000))
check([clear] + [rstrip] * 10)
check([clear] + [split] * 10, bytearray(b'0' * 0x4000))
check([clear] + [splitlines] * 10, bytearray(b'\n' * 0x400))
check([clear] + [startswith] * 10)
check([clear] + [strip] * 10)
check([clear] + [contains] * 10)
check([clear] + [subscript] * 10)
check([clear] + [mod] * 10, bytearray(b'%d' * 4096))
check([clear] + [capitalize] * 10, bytearray(b'a' * 0x40000))
check([clear] + [center] * 10, bytearray(b'a' * 0x40000))
check([clear] + [expandtabs] * 10, bytearray(b'0\t' * 4096))
check([clear] + [ljust] * 10, bytearray(b'0' * 0x400000))
check([clear] + [lower] * 10, bytearray(b'A' * 0x400000))
check([clear] + [rjust] * 10, bytearray(b'0' * 0x400000))
check([clear] + [swapcase] * 10, bytearray(b'aA' * 0x200000))
check([clear] + [title] * 10, bytearray(b'aA' * 0x200000))
check([clear] + [upper] * 10, bytearray(b'a' * 0x400000))
check([clear] + [zfill] * 10, bytearray(b'1' * 0x200000))
check([clear] + [iter_reduce] * 10, b := bytearray(b'0' * 0x400), iter(b))
# Not tested:
# getitem (masked by subscript), setitem (masked by ass_subscript)
# str (just calls repr), iter_setstate and iter_length_hint (can't really fail)
```
### CPython versions tested on:
3.14
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a4+ experimental free-threading build (heads/main:f3980af38b, Jan 20 2025, 18:31:15) [GCC 11.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-129108
* gh-130096
* gh-130227
<!-- /gh-linked-prs -->
| a05433f24a6e1c6d2fbcc82b7a372c376a230897 | d2e60d8e598b622573b68b80b4fbf98c021dd087 |
python/cpython | python__cpython-129159 | # f-string evaluation of conditional expressions with != operator seems to fail
# Bug report
### Bug description:
Hi,
I really feel puzzled by this behaviour:
```
>>> print(f'{True == True=}')
True == True=True
>>> print(f'{True != True=}')
True False
```
I expected that the output of `print(f'{True != True=}')` would be `True != True=False`.
According to [docs](https://docs.python.org/3/reference/lexical_analysis.html#f-strings), "[t]o display **both the expression text and its value** after evaluation, (useful in debugging), an equal sign '=' may be added after the expression". So `print(f'{True != True=}')` fails because the expression text is not fully displayed -- even "=" is omitted.
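For comparison, the documented behavior with other comparison operators echoes the expression text verbatim before the value; the `!=` case is the one that misbehaves on affected versions (output comments below are for an interpreter without the bug):

```python
x, y = 1, 2

# The '=' specifier echoes the expression text, then the repr of the value:
print(f'{x == y=}')  # x == y=False
print(f'{x < y=}')   # x < y=True

# On affected versions this loses part of the expression text instead of
# printing "x != y=True":
print(f'{x != y=}')
```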
Sorry if this is a duplicate -- I was not able to find any other report here and in Cython issue tracker.
Thank you.
### CPython versions tested on:
3.12, 3.13
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-129159
* gh-129163
* gh-129164
<!-- /gh-linked-prs -->
| 767cf708449fbf13826d379ecef64af97d779510 | 01bcf13a1c5bfca5124cf2e0679c9d1b25b04708 |
python/cpython | python__cpython-129082 | # Deprecate `sysconfig.expand_makefile_vars`
# Feature or enhancement
### Proposal:
Per https://github.com/python/cpython/issues/128978, the function was not working (since 3.13 after 4a53a397c311567f05553bc25a28aebaba4f6f65) because of a `NameError`. There are very few occurrences of that function: https://github.com/search?q=expand_makefile_vars+language%3APython&type=code&l=Python and most of them are actually *bundled* (and working) versions of that function using `distutils.sysconfig.expand_makefile_vars` instead of `sysconfig.expand_makefile_vars`.
Since `distutils` was removed in Python 3.12, users will need to access `sysconfig.expand_makefile_vars` directly if they use it (and if the function is called!). Now, AFAIK, there are some usages in the wild (e.g., https://github.com/ncbi/ncbi-cxx-toolkit-public/blob/52b6b3c2cea0af8553b026f84cb697afb16ecb64/src/dbapi/lang_bind/python/setup.py#L66) that could hit it at runtime, so if they decide to drop `distutils` support for whatever reason, and use our `sysconfig`, they would be a bit annoyed =/
So we can either decide to deprecate it (since it's not documented nor tested actually), or we can document it and properly test it. It's easy to deprecate it but it would add some burden on downstream users, so maybe we can just test it and document it?
cc @vstinner
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-129082
* gh-129131
<!-- /gh-linked-prs -->
| 0a6412f9cc9e694e76299cfbd73ec969b7d47af6 | 38a99568763604ccec5d5027f0658100ad76876f |
python/cpython | python__cpython-129140 | # Empty values of FORCE_COLOR and NO_COLOR are not ignored
# Bug report
### Bug description:
https://no-color.org/
> Command-line software which adds ANSI color to its output by default should check for a NO_COLOR environment variable that, when present **and not an empty string** (regardless of its value), prevents the addition of ANSI color.
https://force-color.org/
> Command-line software which outputs colored text should check for a FORCE_COLOR environment variable. When this variable is present and **not an empty string** (regardless of its value), it should force the addition of ANSI color.
Both "standards" state that these environment variables have an effect only if their value is a non-empty string. But Python currently only tests that the environment variables are defined.
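A spec-compliant check can be sketched as follows (the helper name is invented for illustration):

```python
import os

def env_flag_active(name: str, environ=os.environ) -> bool:
    """True only if the variable is present AND non-empty, per the
    no-color.org / force-color.org informal specifications."""
    value = environ.get(name)
    return value is not None and value != ""

print(env_flag_active("NO_COLOR", {"NO_COLOR": ""}))   # False: empty is ignored
print(env_flag_active("NO_COLOR", {"NO_COLOR": "1"}))  # True
print(env_flag_active("NO_COLOR", {}))                 # False: absent
```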
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-129140
* gh-129360
<!-- /gh-linked-prs -->
| 9546fe2ef2db921709f3cb355b33bba977658672 | d6f010dead1a0d4b8a9e51f0187617b0394c9c2a |
python/cpython | python__cpython-129073 | # Glossary entry for "loader" references a long-deprecated method to explain it
# Documentation
The glossary entry for [loader](https://docs.python.org/3.14/glossary.html#term-loader) explains it as
> An object that loads a module. It must define a method named `load_module()`.
However, the [Loader API](https://docs.python.org/3.14/library/importlib.html#importlib.abc.Loader) marks `load_module()` as
> Deprecated since version 3.4
in favor of `exec_module()` and `create_module()`.
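For reference, a minimal modern loader defines `exec_module()` (and optionally `create_module()`) rather than the deprecated `load_module()` — a sketch with an illustrative class and module name:

```python
import importlib.abc
import importlib.util

class StringLoader(importlib.abc.Loader):
    """Loads a module from a source string via the modern protocol."""
    def __init__(self, source: str):
        self.source = source

    def create_module(self, spec):
        return None  # fall back to the default module creation

    def exec_module(self, module):
        exec(self.source, module.__dict__)

spec = importlib.util.spec_from_loader("demo_mod", StringLoader("ANSWER = 42"))
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)
print(mod.ANSWER)  # 42
```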
<!-- gh-linked-prs -->
### Linked PRs
* gh-129073
* gh-129077
* gh-129130
<!-- /gh-linked-prs -->
| e1fa2fcc7c1bf5291a7f71300b7828b49be9ab72 | 8ceb6cb117c8eda8c6913547f3a7de032ed25880 |
python/cpython | python__cpython-129034 | # Remove _Py_InitializeMain() private function
[PEP 587](https://peps.python.org/pep-0587/) "Python Initialization Configuration" added a "Multi-Phase Initialization Private Provisional API":
* `PyConfig._init_main`
* [_Py_InitializeMain()](https://docs.python.org/dev/c-api/init_config.html#c._Py_InitializeMain)
The plan was to make the "core phase" more usable. But since Python 3.8, Python doesn't use this "Multi-Phase Initialization" API and I failed to find any user in 3rd party code.
I propose to remove this API in Python 3.14 without any deprecation since the feature was [provisional](https://docs.python.org/dev/glossary.html#term-provisional-API) and it's a private API.
Provisional:
> A provisional API is one which has been **deliberately excluded** from the standard library’s **backwards compatibility guarantees**.
<!-- gh-linked-prs -->
### Linked PRs
* gh-129034
* gh-129048
* gh-137379
* gh-137382
<!-- /gh-linked-prs -->
| 07c3518ffb27875b14a0f1637aa85f773ff2f9ff | c463270c73a61ef8106ee7bd0571c7c6143e2c20 |
python/cpython | python__cpython-129043 | # Raise `DeprecationWarning` for `sys._clear_type_cache`
# Feature or enhancement
### Proposal:
`sys._clear_type_cache` was [deprecated in 3.13](https://docs.python.org/3.14/library/sys.html#sys._clear_type_cache) in favour of `sys._clear_internal_caches`, however there is no runtime `DeprecationWarning` raised. Should we add one? AFAICT, it is not soft-deprecated, so the intent is to eventually remove it.
cc @brandtbucher since you deprecated `sys._clear_type_cache` in https://github.com/python/cpython/pull/115152.
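A forward-compatible call site, sketched under the assumption that the warning lands, would prefer the replacement and only fall back to the deprecated name on older interpreters (the helper name is invented for illustration):

```python
import sys

def clear_interpreter_caches():
    # sys._clear_internal_caches is the documented replacement (3.13+);
    # fall back to the deprecated name on older interpreters.
    if hasattr(sys, "_clear_internal_caches"):
        sys._clear_internal_caches()
    else:
        sys._clear_type_cache()

clear_interpreter_caches()
print("ok")
```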
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-129043
<!-- /gh-linked-prs -->
| 8783cec9b67b3860bda9496611044b6f310c6761 | b402a4889b690876c488a1d1ccc6d33add3f69c6 |
python/cpython | python__cpython-129021 | # Confusing return type in `tokenize.untokenize` docs
# Documentation
The docs for [tokenize.untokenize](https://docs.python.org/3.14/library/tokenize.html#tokenize.untokenize) are somewhat confusing
regarding the return type of the function. The second paragraph says:
> The reconstructed script is returned as a single string.
while the third paragraph clarifies that it can be both str and bytes:
https://github.com/python/cpython/blob/6c52ada5512ccd1a0891c00cc84c7d3170d3328c/Doc/library/tokenize.rst?plain=1#L100-L102
I think we should remove the first sentence (`The reconstructed script is returned as a single string.`)
because it's wrong and if someone is just skimming the docs they might not realize that the full
explanation is only in the next paragraph.
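The str/bytes split follows from which tokenizer produced the tokens: `generate_tokens()` (str input, no `ENCODING` token) round-trips to `str`, while `tokenize()` (bytes input, emits an `ENCODING` token) round-trips to `bytes`:

```python
import io
import tokenize

src = "x = 1\n"

# Tokens from str input: untokenize returns str.
str_tokens = list(tokenize.generate_tokens(io.StringIO(src).readline))
print(type(tokenize.untokenize(str_tokens)))   # <class 'str'>

# Tokens from bytes input (first token is ENCODING): untokenize returns bytes.
byte_tokens = list(tokenize.tokenize(io.BytesIO(src.encode()).readline))
print(type(tokenize.untokenize(byte_tokens)))  # <class 'bytes'>
```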
<!-- gh-linked-prs -->
### Linked PRs
* gh-129021
* gh-129035
* gh-129036
<!-- /gh-linked-prs -->
| bca35f0e782848ae2acdcfbfb000cd4a2af49fbd | 3b6a27c5607d2b199f127c2a5ef5316bbc30ae42 |
python/cpython | python__cpython-129562 | # Notes comparing `NotImplementedError` and `NotImplemented` have inconsistent linking
# Documentation
The link to the `OtherItem` is in different locations in the otherwise similar notes. One with the *See `OtherItem`*, the other in the sentence calling out they are different. They should probably be consistent.
I personally think they should be like in `constants`, with the *See `OtherItem`* sentence.
`constants.html` reads:
> **Note:** `NotImplementedError` and `NotImplemented` are not interchangeable, even though they have similar names and purposes. See [`NotImplementedError`](https://docs.python.org/3.14/library/exceptions.html#NotImplementedError) for details on when to use it.
`exceptions.html` reads:
> **Note:** `NotImplementedError` and [`NotImplemented`](https://docs.python.org/3.14/library/constants.html#NotImplemented) are not interchangeable, even though they have similar names and purposes. See `NotImplemented` for details on when to use it.
https://github.com/python/cpython/blob/5aaf41685834901e4ed0a40f4c055b92991a0bb5/Doc/library/constants.rst?plain=1#L51-L53
https://github.com/python/cpython/blob/5aaf41685834901e4ed0a40f4c055b92991a0bb5/Doc/library/exceptions.rst?plain=1#L338-L340
I don't know RST. Is it simply moving the `!` in `exceptions` from line 340 to line 338?
<!-- gh-linked-prs -->
### Linked PRs
* gh-129562
* gh-130776
* gh-130777
<!-- /gh-linked-prs -->
| a85eeb97710617404ba7a0fac3b264f586caf70c | 097846502b7f33cb327d512e2a396acf4f4de46e |
python/cpython | python__cpython-128999 | # Problematic numbered list in Programming FAQ
# Documentation
A problematic numbered list in the Programming FAQ breaks the translated Python documentation (the list shows in English) and emits a Sphinx warning even without any translation:
```
faq/programming.rst:1912:<translated>:3: WARNING: Literal block expected; none found. [docutils]
```
This happens in these lines (permalink for latest commit from 3.13 so far):
https://github.com/python/cpython/blob/b7ddcc3efb4e3610471880e1e904f42f39bbb2f4/Doc/faq/programming.rst?plain=1#L1908-L1930
<!-- gh-linked-prs -->
### Linked PRs
* gh-128999
* gh-129000
* gh-129001
<!-- /gh-linked-prs -->
| e8092e5cdcd6707ac0b16d8fb37fa080a88bcc97 | fba475ae6f932d0aaee6832b4102b2d4c50df70f |
python/cpython | python__cpython-128992 | # bdb holds the reference to the enter frame unnecessarily long
# Feature or enhancement
### Proposal:
bdb records the frame that enters the debugger as a utility. However, that could potentially cause a temporary refleak. It will probably clean up eventually, but a temp leak on a frame object is expensive - it links to all the caller frames and their local variables.
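How expensive a lingering frame reference is can be seen directly in a small experiment: a saved frame keeps its entire caller chain, locals included, alive even after the functions have returned.

```python
import sys

def leaf():
    return sys._getframe()

def caller():
    big_local = list(range(1000))  # stays alive as long as the frame does
    return leaf()

frame = caller()
# Both functions have returned, yet the saved frame still links to the
# caller's frame and its local variables:
print(frame.f_back.f_locals["big_local"][:3])  # [0, 1, 2]
```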
I did an attempt in #128190, but it's not perfect. There are two major issues:
1. The reference is only released when the debugger is detached (when the user does a continue and there's no breakpoint). In many cases, the reference could be held for a long time, even though it is only needed during the particular callback.
2. Because of 1., in 3.12, where there are multiple pdb instances created for different inline-breakpoints, the reference could still be leaked (because we did continue in a different pdb instance).
I will refactor the code a bit so the reference to enter frame will always be released before the debugger callback is finished.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-128992
* gh-129002
* gh-129003
<!-- /gh-linked-prs -->
| 61b35f74aa4a6ac606635e245147ff3658628d99 | e8092e5cdcd6707ac0b16d8fb37fa080a88bcc97 |
python/cpython | python__cpython-128979 | # Leftover code in `sysconfig.expand_makefile_vars`
# Bug report
### Bug description:
I think there is some leftover code in `sysconfig.expand_makefile_vars` (4a53a397c311567f05553bc25a28aebaba4f6f65):
https://github.com/python/cpython/blob/4dade055f4e18a7e91bc70293abb4db745ad16ca/Lib/sysconfig/__init__.py#L727-L734
The `_findvar1_rx` and `_findvar2_rx` variables are not declared at all (they were moved to `sysconfig/__main__.py`). Since the function is publicly named (but not exported nor documented), I prefer backporting the changes of #110785, namely re-use the patterns as is.
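For reference, the intended behavior is repeated substitution of `$(VAR)` / `${VAR}` references; a self-contained sketch, with the patterns written out here for illustration (mirroring what the missing `_findvar1_rx` / `_findvar2_rx` match), not copied from the stdlib:

```python
import re

_findvar1_rx = re.compile(r"\$\(([A-Za-z][A-Za-z0-9_]*)\)")  # $(VAR)
_findvar2_rx = re.compile(r"\$\{([A-Za-z][A-Za-z0-9_]*)\}")  # ${VAR}

def expand_makefile_vars(s, variables):
    """Substitute Makefile-style variable references until none remain."""
    while True:
        m = _findvar1_rx.search(s) or _findvar2_rx.search(s)
        if m is None:
            return s
        begin, end = m.span()
        s = s[:begin] + str(variables.get(m.group(1), "")) + s[end:]

print(expand_makefile_vars("$(prefix)/lib/${name}",
                           {"prefix": "/usr", "name": "python3"}))
# /usr/lib/python3
```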
cc @FFY00
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-128979
* gh-129065
* gh-129066 (not needed)
<!-- /gh-linked-prs -->
| df66ff14b49f4388625212f6bc86b754cb51d4eb | 59fcae793f94be977c56c1f3c2988bd93d6b1564 |
python/cpython | python__cpython-128975 | # Crash in `UnicodeError.__str__` with attributes have custom `__str__`
# Crash report
### What happened?
```py
class Evil(str):
def __str__(self):
del exc.object
return 'evil'
exc = UnicodeEncodeError(Evil(), "object", 0, 0, Evil())
str(exc)
```
results in `Segmentation fault (core dumped)`. Another possibility for a crash:
```py
class Evil(str):
def __str__(self):
del exc.object
return 'evil'
exc = UnicodeEncodeError(Evil(), "object", 0, 0, Evil())
str(exc)
```
results in
```text
python: ./Include/cpython/unicodeobject.h:286: PyUnicode_GET_LENGTH: Assertion `PyUnicode_Check(op)' failed.
Aborted (core dumped)
```
The segmentation fault is quite easy to fix:
```C
reason_str = PyObject_Str(exc->reason);
if (reason_str == NULL) {
goto done;
}
encoding_str = PyObject_Str(exc->encoding);
if (encoding_str == NULL) {
goto done;
}
Py_ssize_t len = PyUnicode_GET_LENGTH(exc->object);
```
It occurs in `PyUnicode_GET_LENGTH(exc->object);`. And the reason is that `PyObject_Str(...)` may call arbitrary code.
I have a PR ready that I will post soon.
See https://github.com/python/cpython/pull/128975#issuecomment-2692145314 for the rationale of not backporting it.
### CPython versions tested on:
CPython main branch
<!-- gh-linked-prs -->
### Linked PRs
* gh-128975
<!-- /gh-linked-prs -->
| ddc27f9c385f57db1c227b655ec84dcf097a8976 | 75f38af7810af1c3ca567d6224a975f85aef970f |
python/cpython | python__cpython-133085 | # Incompatible change in internal string representation (encountered with Rust bindings) (?)
Not sure if it's the right place to ask, but here it is.
I'm attempting to introduce 3.14 support in `pyo3` (PyO3/pyo3#4811). I've encountered test failures involving UCS and refcounts targeting Windows and Linux. Invoking `cargo test --no-default-features --lib --tests`, one example of failure is here: https://github.com/clin1234/pyo3/actions/runs/12834317749/job/35791414368. The Python version used for testing 3.14 is 3.14.0a4.
However, running the same command with Python 3.14.0a3 running on Fedora Rawhide locally, I have encountered no test failures whatsoever.
And yes, I have pinged @davidhewitt on this matter.
<!-- gh-linked-prs -->
### Linked PRs
* gh-133085
<!-- /gh-linked-prs -->
| 987e45e6326c6174fb7a300f44b9d8e4e26370c9 | d78768e3d69fec730760f6e59c91fd12f2a00d11 |
python/cpython | python__cpython-128962 | # array iterator segfault on __setstate__() when exhausted
# Crash report
### What happened?
```python
>>> import array
... a = array.array('i')
... it = iter(a)
... list(it)
... it.__setstate__(0)
...
Segmentation fault (core dumped)
```
This goes back at least to py 3.8 (checked on that one), PR fix incoming.
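The usual fix for this class of bug is to clamp the restored index against the container length instead of trusting it. A pure-Python sketch of that defensive pattern (`ClampedIter` is illustrative only, not the actual C fix):

```python
class ClampedIter:
    def __init__(self, seq):
        self.seq = seq
        self.index = 0

    def __next__(self):
        if self.index >= len(self.seq):
            raise StopIteration
        value = self.seq[self.index]
        self.index += 1
        return value

    def __setstate__(self, state):
        # Clamp to [0, len(seq)] so a stale or hostile state can
        # never point past the end of the underlying sequence.
        self.index = min(max(state, 0), len(self.seq))

it = ClampedIter([])
it.__setstate__(0)   # safe even though the sequence is empty
```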
### CPython versions tested on:
3.14
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a4+ (heads/arrayiter:7d97bc8eda, Jan 17 2025, 16:24:27) [GCC 11.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-128962
* gh-128976
* gh-128977
<!-- /gh-linked-prs -->
| 4dade055f4e18a7e91bc70293abb4db745ad16ca | 9ed7bf2cd783b78b28f18abc090f43467a28f4aa |
python/cpython | python__cpython-128957 | # Uninitialized variable `next_instr` in error code path
# Bug report
https://github.com/python/cpython/blob/d95ba9fa1110534b7247fa2ff12b90e930c93256/Python/ceval.c#L843-L855
Clang warns:
```
Python/ceval.c:848:17: warning: variable 'next_instr' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
848 | if (bytecode == NULL) {
| ^~~~~~~~~~~~~~~~
Python/ceval.c:957:45: note: uninitialized use occurs here
957 | _PyEval_MonitorRaise(tstate, frame, next_instr-1);
| ^~~~~~~~~~
Python/ceval.c:848:13: note: remove the 'if' if its condition is always false
848 | if (bytecode == NULL) {
| ^~~~~~~~~~~~~~~~~~~~~~~
849 | goto error;
| ~~~~~~~~~~~
850 | }
| ~
```
The warning looks legitimate to me. Nearby code mostly uses `goto exit_unwind`. Maybe we should use that instead of `goto error`?
cc @mpage
<!-- gh-linked-prs -->
### Linked PRs
* gh-128957
<!-- /gh-linked-prs -->
| 13c4def692228f09df0b30c5f93bc515e89fc77f | d95ba9fa1110534b7247fa2ff12b90e930c93256 |
python/cpython | python__cpython-128925 | # Reference counting bug with manually allocated heap types
# Bug report
Found by @vfdev-5.
This is specific to the free threading build and 3.14.
XLA/Jax uses the following code to create a heap type:
```c++
// We need to use heap-allocated type objects because we want to add
// additional methods dynamically.
...
nb::str name = nb::str("PmapFunction");
nb::str qualname = nb::str("PmapFunction");
PyHeapTypeObject* heap_type = reinterpret_cast<PyHeapTypeObject*>(
PyType_Type.tp_alloc(&PyType_Type, 0));
// Caution: we must not call any functions that might invoke the GC until
// PyType_Ready() is called. Otherwise the GC might see a half-constructed
// type object.
CHECK(heap_type) << "Unable to create heap type object";
heap_type->ht_name = name.release().ptr();
heap_type->ht_qualname = qualname.release().ptr();
...
```
https://github.com/openxla/xla/blob/19a8e8e05fb34c5c4b8c38c9a8225e89f008c8c1/xla/python/pmap_lib.cc#L1027-L1058
In other words, the heap type is created by calling `PyType_Type.tp_alloc` and filling in the fields, instead of the more common use of `PyType_FromSpec`. This leaves [`unique_id`](https://github.com/python/cpython/blob/211f41316b7f205d18eb65c1ececd7f7fb30b02d/Include/cpython/object.h#L284-L286) zero initialized. The problem is that `unique_id=0` currently looks like a valid unique id for per-thread reference counting, which leads to reference counting errors and use-after-frees.
I think we should change the per-thread reference counting so that `unique_id=0` is the sentinel value indicating that it's not assigned instead of the current `unique_id=-1` convention.
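The proposed convention can be sketched in Python (a loose analogy for the C scheme, with hypothetical names; real IDs start at 1 so that zero-initialized memory naturally reads as "not assigned"):

```python
import itertools

_next_id = itertools.count(1)   # valid IDs start at 1; 0 means "unassigned"

class HeapTypeLike:
    def __init__(self):
        # Mimics zero-initialized C memory: a fresh object reads as id 0.
        self.unique_id = 0

def assign_unique_id(obj):
    if obj.unique_id == 0:      # 0 is the sentinel, never a valid ID
        obj.unique_id = next(_next_id)
    return obj.unique_id
```

With `-1` as the sentinel, a manually allocated, zero-filled object would look like it already has ID 0; with `0` as the sentinel, the same object is correctly treated as unassigned.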
### Full repro
* https://gist.github.com/vfdev-5/a2f8f0716611afe2e0721c4332bedcd5
<!-- gh-linked-prs -->
### Linked PRs
* gh-128925
* gh-128951
<!-- /gh-linked-prs -->
| d66c08aa757f221c0e8893300edac105dfcde7e8 | 767c89ba7c5a70626df6e75eb56b546bf911b997 |
python/cpython | python__cpython-128933 | # `socketserver.Unix*Server` may attempt to set `SO_REUSEPORT` on Unix sockets, causing `EOPNOTSUPP`, e.g. in `test_asyncio`
# Bug report
### Bug description:
When `socketserver.Unix*Server.server_bind()` is called with `allow_reuse_port` attribute set to `True`, it attempts to call `self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)` on a Unix socket, and it fails with: `OSError: [Errno 95] Operation not supported`. In particular, this happens in `test_asyncio` tests.
This seems to be a recent change on my system (probably in the Linux kernel), as the relevant tests used to pass when I tested CPython 3.14.0a3, and now the same version fails. Now I'm at 6.12.9, back then it was 6.12.5 (but I don't think there's a point in actually verifying that).
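The straightforward fix is for the Unix server classes to opt out of the port-reuse override, since `SO_REUSEPORT` only makes sense for INET sockets. A minimal sketch of that override (class name is illustrative; `UnixStreamServer` is POSIX-only, hence the guard):

```python
import socketserver

if hasattr(socketserver, "UnixStreamServer"):
    class SafeUnixStreamServer(socketserver.UnixStreamServer):
        # SO_REUSEPORT is meaningless for AF_UNIX sockets, and newer
        # kernels reject it with EOPNOTSUPP, so never request it here.
        allow_reuse_port = False
```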
<details>
<summary>Output from `test_asyncio.test_sock_lowlevel`</summary>
```pytb
$ ./python -E -m test -v test_asyncio.test_sock_lowlevel
== CPython 3.14.0a4+ (heads/main:3193cb5ef87, Jan 16 2025, 13:54:51) [GCC 14.2.1 20241221]
== Linux-6.12.9-gentoo-dist-x86_64-AMD_Ryzen_5_3600_6-Core_Processor-with-glibc2.40 little-endian
== Python build: release
== cwd: /home/mgorny/git/cpython/build/test_python_worker_532605æ
== CPU count: 12
== encodings: locale=UTF-8 FS=utf-8
== resources: all test resources are disabled, use -u option to unskip tests
Using random seed: 3831794787
0:00:00 load avg: 0.54 Run 1 test sequentially in a single process
0:00:00 load avg: 0.54 [1/1] test_asyncio.test_sock_lowlevel
test_cancel_sock_accept (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests.test_cancel_sock_accept) ... ok
test_create_connection_sock (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests.test_create_connection_sock) ... ok
test_huge_content (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests.test_huge_content) ... ok
test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests.test_huge_content_recvinto) ... ok
test_recvfrom (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests.test_recvfrom) ... ok
test_recvfrom_into (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests.test_recvfrom_into) ... ok
test_sendto_blocking (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests.test_sendto_blocking) ... ok
test_sock_accept (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests.test_sock_accept) ... ok
test_sock_client_connect_racing (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests.test_sock_client_connect_racing) ... ok
test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests.test_sock_client_fail) ... ok
test_sock_client_ops (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests.test_sock_client_ops) ... ok
test_sock_client_racing (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests.test_sock_client_racing) ... ok
test_unix_sock_client_ops (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests.test_unix_sock_client_ops) ... ERROR
test_cancel_sock_accept (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests.test_cancel_sock_accept) ... ok
test_create_connection_sock (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests.test_create_connection_sock) ... ok
test_huge_content (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests.test_huge_content) ... ok
test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests.test_huge_content_recvinto) ... ok
test_recvfrom (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests.test_recvfrom) ... ok
test_recvfrom_into (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests.test_recvfrom_into) ... ok
test_sendto_blocking (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests.test_sendto_blocking) ... ok
test_sock_accept (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests.test_sock_accept) ... ok
test_sock_client_connect_racing (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests.test_sock_client_connect_racing) ... ok
test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests.test_sock_client_fail) ... ok
test_sock_client_ops (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests.test_sock_client_ops) ... ok
test_sock_client_racing (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests.test_sock_client_racing) ... ok
test_unix_sock_client_ops (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests.test_unix_sock_client_ops) ... ERROR
test_cancel_sock_accept (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests.test_cancel_sock_accept) ... ok
test_create_connection_sock (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests.test_create_connection_sock) ... ok
test_huge_content (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests.test_huge_content) ... ok
test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests.test_huge_content_recvinto) ... ok
test_recvfrom (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests.test_recvfrom) ... ok
test_recvfrom_into (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests.test_recvfrom_into) ... ok
test_sendto_blocking (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests.test_sendto_blocking) ... ok
test_sock_accept (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests.test_sock_accept) ... ok
test_sock_client_connect_racing (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests.test_sock_client_connect_racing) ... ok
test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests.test_sock_client_fail) ... ok
test_sock_client_ops (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests.test_sock_client_ops) ... ok
test_sock_client_racing (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests.test_sock_client_racing) ... ok
test_unix_sock_client_ops (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests.test_unix_sock_client_ops) ... ERROR
======================================================================
ERROR: test_unix_sock_client_ops (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests.test_unix_sock_client_ops)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/mgorny/git/cpython/Lib/test/test_asyncio/test_sock_lowlevel.py", line 462, in test_unix_sock_client_ops
with test_utils.run_test_unix_server() as httpd:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/home/mgorny/git/cpython/Lib/contextlib.py", line 141, in __enter__
return next(self.gen)
File "/home/mgorny/git/cpython/Lib/test/test_asyncio/utils.py", line 261, in run_test_unix_server
yield from _run_test_server(address=path, use_ssl=use_ssl,
server_cls=SilentUnixWSGIServer,
server_ssl_cls=UnixSSLWSGIServer)
File "/home/mgorny/git/cpython/Lib/test/test_asyncio/utils.py", line 188, in _run_test_server
httpd = server_class(address, SilentWSGIRequestHandler)
File "/home/mgorny/git/cpython/Lib/socketserver.py", line 457, in __init__
self.server_bind()
~~~~~~~~~~~~~~~~^^
File "/home/mgorny/git/cpython/Lib/test/test_asyncio/utils.py", line 217, in server_bind
UnixHTTPServer.server_bind(self)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "/home/mgorny/git/cpython/Lib/test/test_asyncio/utils.py", line 207, in server_bind
socketserver.UnixStreamServer.server_bind(self)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "/home/mgorny/git/cpython/Lib/socketserver.py", line 472, in server_bind
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 95] Operation not supported
======================================================================
ERROR: test_unix_sock_client_ops (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests.test_unix_sock_client_ops)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/mgorny/git/cpython/Lib/test/test_asyncio/test_sock_lowlevel.py", line 462, in test_unix_sock_client_ops
with test_utils.run_test_unix_server() as httpd:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/home/mgorny/git/cpython/Lib/contextlib.py", line 141, in __enter__
return next(self.gen)
File "/home/mgorny/git/cpython/Lib/test/test_asyncio/utils.py", line 261, in run_test_unix_server
yield from _run_test_server(address=path, use_ssl=use_ssl,
server_cls=SilentUnixWSGIServer,
server_ssl_cls=UnixSSLWSGIServer)
File "/home/mgorny/git/cpython/Lib/test/test_asyncio/utils.py", line 188, in _run_test_server
httpd = server_class(address, SilentWSGIRequestHandler)
File "/home/mgorny/git/cpython/Lib/socketserver.py", line 457, in __init__
self.server_bind()
~~~~~~~~~~~~~~~~^^
File "/home/mgorny/git/cpython/Lib/test/test_asyncio/utils.py", line 217, in server_bind
UnixHTTPServer.server_bind(self)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "/home/mgorny/git/cpython/Lib/test/test_asyncio/utils.py", line 207, in server_bind
socketserver.UnixStreamServer.server_bind(self)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "/home/mgorny/git/cpython/Lib/socketserver.py", line 472, in server_bind
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 95] Operation not supported
======================================================================
ERROR: test_unix_sock_client_ops (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests.test_unix_sock_client_ops)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/mgorny/git/cpython/Lib/test/test_asyncio/test_sock_lowlevel.py", line 462, in test_unix_sock_client_ops
with test_utils.run_test_unix_server() as httpd:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/home/mgorny/git/cpython/Lib/contextlib.py", line 141, in __enter__
return next(self.gen)
File "/home/mgorny/git/cpython/Lib/test/test_asyncio/utils.py", line 261, in run_test_unix_server
yield from _run_test_server(address=path, use_ssl=use_ssl,
server_cls=SilentUnixWSGIServer,
server_ssl_cls=UnixSSLWSGIServer)
File "/home/mgorny/git/cpython/Lib/test/test_asyncio/utils.py", line 188, in _run_test_server
httpd = server_class(address, SilentWSGIRequestHandler)
File "/home/mgorny/git/cpython/Lib/socketserver.py", line 457, in __init__
self.server_bind()
~~~~~~~~~~~~~~~~^^
File "/home/mgorny/git/cpython/Lib/test/test_asyncio/utils.py", line 217, in server_bind
UnixHTTPServer.server_bind(self)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "/home/mgorny/git/cpython/Lib/test/test_asyncio/utils.py", line 207, in server_bind
socketserver.UnixStreamServer.server_bind(self)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "/home/mgorny/git/cpython/Lib/socketserver.py", line 472, in server_bind
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 95] Operation not supported
----------------------------------------------------------------------
Ran 39 tests in 1.296s
FAILED (errors=3)
test test_asyncio.test_sock_lowlevel failed
test_asyncio.test_sock_lowlevel failed (3 errors)
== Tests result: FAILURE ==
1 test failed:
test_asyncio.test_sock_lowlevel
Total duration: 1.4 sec
Total tests: run=39
Total test files: run=1/1 failed=1
Result: FAILURE
```
</details>
Bisect points to 192d17c3fd9945104bc0303cf248bb0d074d260e:
```
commit 192d17c3fd9945104bc0303cf248bb0d074d260e
Author: Idan Kapustian <71190257+idankap@users.noreply.github.com>
AuthorDate: 2024-06-16 14:15:03 +0200
Commit: GitHub <noreply@github.com>
CommitDate: 2024-06-16 14:15:03 +0200
gh-120485: Add an override of `allow_reuse_port` on classes subclassing `socketserver.TCPServer` (GH-120488)
Co-authored-by: Vinay Sajip <vinay_sajip@yahoo.co.uk>
```
### CPython versions tested on:
3.14, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-128933
* gh-128969
* gh-128970
<!-- /gh-linked-prs -->
| 3829104ab412a47bf3f36b8c133c886d2cc9a6d4 | 8174770d311ba09c07a47cc3ae90a1db2e7d7708 |
python/cpython | python__cpython-128918 | # Get rid of conditional inputs and outputs for instructions in bytecodes.c
We should remove the conditional stack effects in instruction definitions in bytecodes.c
Conditional stack effects already complicate code generation and that is only going to get worse with top-of-stack caching and other interpreter/JIT optimizations.
There were two reasons for having conditional stack effects:
1. Ease of porting the old ceval code to bytecodes.c
2. Performance
Reason 1 no longer applies. Instructions are much more regular now and it isn't that much work to remove the remaining conditional stack effects.
That leaves performance. I experimentally removed the conditional stack effects for `LOAD_GLOBAL` and `LOAD_ATTR`, which is the worst possible case for performance as it makes no attempt to mitigate the extra dispatch costs and possibly worse specialization.
The results are [here](https://github.com/faster-cpython/benchmarking-public/tree/main/results/bm-20250115-3.14.0a4%2B-d85c001)
Overall we see a 0.8% slowdown. It seems that specialization is not significantly worse, but there is a large increase in `PUSH_NULL` following `LOAD_GLOBAL` that appears to be responsible for the slowdown. An extra specialization should fix that.
[Prior discussion](https://github.com/faster-cpython/ideas/issues/716)
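The instructions in question are easy to inspect from Python with `dis`. The sketch below only asserts that a call through a global produces a `LOAD_GLOBAL` lookup; whether the following NULL is pushed by `LOAD_GLOBAL` itself (the conditional effect) or by a separate `PUSH_NULL` is version-dependent, so it is deliberately not checked:

```python
import dis

def calls_a_global():
    return len([])   # 'len' is looked up with LOAD_GLOBAL

ops = [ins.opname for ins in dis.get_instructions(calls_a_global)]
```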
<!-- gh-linked-prs -->
### Linked PRs
* gh-128918
* gh-129202
* gh-129226
<!-- /gh-linked-prs -->
| a10f99375e7912df863cf101a38e9703cfcd72f1 | d7d066c3ab6842117f9e0fb1c9dde4bce00fa1e3 |
python/cpython | python__cpython-128912 | # [C API] Add PyImport_ImportModuleAttr(mod_name, attr_name) helper function
# Feature or enhancement
### Proposal:
Python has an internal `_PyImport_GetModuleAttrString(mod_name, attr_name)` helper function to import a module and get a module attribute. I propose to make this function public to be able to use it outside Python.
**UPDATE:** Function renamed to `PyImport_ImportModuleAttrString()`.
The function is convenient to use and is used by the following files in Python:
* Modules/arraymodule.c
* Modules/cjkcodecs/cjkcodecs.h
* Modules/_ctypes/callbacks.c
* Modules/_datetimemodule.c
* Modules/_decimal/_decimal.c
* Modules/_elementtree.c
* Modules/faulthandler.c
* Modules/_lsprof.c
* Modules/_operator.c
* Modules/_pickle.c
* Modules/posixmodule.c
* Modules/selectmodule.c
* Modules/_sqlite/connection.c
* Modules/_sqlite/module.c
* Modules/_sre/sre.c
* Modules/timemodule.c
* Modules/_zoneinfo.c
* Objects/abstract.c
* Objects/fileobject.c
* Objects/memoryobject.c
* Parser/pegen.c
* Parser/tokenizer/file_tokenizer.c
* Python/import.c
* Python/pylifecycle.c
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-128912
* gh-128915
* gh-128960
* gh-128989
* gh-129657
<!-- /gh-linked-prs -->
| 3bebe46d3413195ee18c5c9ada83a35d4fd1d0e7 | f927204f64b3f8dbecec784e05bc8e25d2a78b2e |
python/cpython | python__cpython-128919 | # `_PyTrash_begin` and `_PyTrash_end` do not have implementation
We only have this left for `_PyTrash_begin` and `_PyTrash_end`:
```
~/Desktop/cpython2 main ✗
» ag _PyTrash_begin .
Misc/NEWS.d/3.9.0a5.rst
1217:PyThreadState attributes, but call new private _PyTrash_begin() and
Include/cpython/object.h
479:PyAPI_FUNC(int) _PyTrash_begin(PyThreadState *tstate, PyObject *op);
```
```
» ag _PyTrash_end .
Misc/NEWS.d/3.9.0a5.rst
1218:_PyTrash_end() functions which hide implementation details.
Include/cpython/object.h
480:PyAPI_FUNC(void) _PyTrash_end(PyThreadState *tstate);
```
Source: https://github.com/python/cpython/blob/313b96eb8b8d0ad3bac58d633822a0a3705ce60b/Include/cpython/object.h#L478-L481
They don't even have implementations. It looks like they used to be called from `Py_TRASHCAN_BEGIN` and `Py_TRASHCAN_END` in 3.9: https://github.com/python/cpython/blob/340a82d9cff7127bb5a777d8b9a30b861bb4beee/Include/cpython/object.h#L520-L545
But, they are unused since https://github.com/python/cpython/pull/117763 in ~3.11
What should we do?
1. Nothing
2. Remove them
3. Raise deprecation warning when they are called (but they return `void`, it would be hard to do)
CC @markshannon @vstinner
<!-- gh-linked-prs -->
### Linked PRs
* gh-128919
<!-- /gh-linked-prs -->
| f48702dade921beed3e227d2a5ac82a9ae2533d0 | 3893a92d956363fa2443bc5e47d4bae3deddacef |
python/cpython | python__cpython-128903 | # GCC style fallthrough attribute is used with clang versions that don't support it
# Bug report
### Bug description:
`Include/pyport.h` currently only guards use of `__attribute__((fallthrough))` with `_Py__has_attribute(fallthrough)`. Clang versions prior to 10 do not support this syntax, but do claim to support the fallthrough attribute (because they support the C++11 style `[[fallthrough]]`).
### CPython versions tested on:
3.14
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-128903
<!-- /gh-linked-prs -->
| edf803345a5c57c38fca3850386530e30b397eca | 86c1a60d5a28cfb51f8843b307f8969c40e3bbec |
python/cpython | python__cpython-128946 | # `raise SyntaxError('hello', 'abcdef')` crashes the new Python 3.13 REPL
# Bug report
### Bug description:
If we enter this in the new interactive interpreter in Python 3.13
```python
raise SyntaxError('hello', 'abcdef')
```
the interpreter window closes without any error message.
This happens because the second argument of `SyntaxError` is interpreted as a sequence that provides (`filename, lineno, offset, text, end_lineno, end_offset`). The constructor doesn't check that `offset` is an integer; in the example above it is set to the character 'c'.
When an exception is caught by the REPL it is handled by the traceback module. This line
https://github.com/python/cpython/blob/36c5e3bcc25700645d19eba65dcdf22acd99a7a8/Lib/traceback.py#L1302
compares the offset and an integer. This raises an exception which is not caught and makes the program crash.
There are 2 solutions to this issue:
- add more controls on the second argument to `SyntaxError` to make sure that the types are those expected (`str` for filename, `int` for lineno, etc.)
- check the types of the arguments of `SyntaxError` instances in traceback.py
This issue doesn't happen with previous versions of the REPL
```
Python 3.12.0 (tags/v3.12.0:0fb18b0, Oct 2 2023, 13:03:39) [MSC v.1935 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> raise SyntaxError('hello', 'abcdef')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
SyntaxError: hello (a)
>>>
```
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-128946
* gh-129178
<!-- /gh-linked-prs -->
| a16ded10ad3952406280be5ab9ff86a476867b08 | a9f5edbf5fb141ad172978b25483342125184ed2 |
python/cpython | python__cpython-128892 | # opcode.opname does not contain specialised bytecodes
```
>>> import opcode
>>> opcode._specialized_opmap['BINARY_OP_ADD_INT']
151
>>> opcode.opname[151]
'<151>'
>>>
```
Expected:
```
>>> opcode.opname[151]
'BINARY_OP_ADD_INT'
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-128892
<!-- /gh-linked-prs -->
| 5eee2fe2b05c1a2836184e047fbd4176945cbf10 | 256d6d2131541b3ff8f06f42c8157f808fde464c |
python/cpython | python__cpython-128944 | # test_ctypes.test_generated_structs.GeneratedTest.test_generated_data 'ManyTypesU' fails on Python 3.14 built with gcc 15
# Bug report
### Bug description:
When running test suite on Python 3.14.0a4 built with gcc 15.0.3 on Fedora Linux 42 (Rawhide), test_generated_data fails for many of its types.
I suspect this is due to the new gcc version. This doesn't happen when Python is built with gcc 14.
```shell
Check that a ctypes struct/union matches its C equivalent. ...
test_generated_data (test.test_ctypes.test_generated_structs.GeneratedTest.test_generated_data) (field='i8', value=-1, name='ManyTypesU')
Check that a ctypes struct/union matches its C equivalent. ... FAIL
test_generated_data (test.test_ctypes.test_generated_structs.GeneratedTest.test_generated_data) (field='i8', value=1, name='ManyTypesU')
Check that a ctypes struct/union matches its C equivalent. ... FAIL
test_generated_data (test.test_ctypes.test_generated_structs.GeneratedTest.test_generated_data) (field='i8', value=0, name='ManyTypesU')
Check that a ctypes struct/union matches its C equivalent. ... FAIL
test_generated_data (test.test_ctypes.test_generated_structs.GeneratedTest.test_generated_data) (field='u8', value=-1, name='ManyTypesU')
Check that a ctypes struct/union matches its C equivalent. ... FAIL
test_generated_data (test.test_ctypes.test_generated_structs.GeneratedTest.test_generated_data) (field='u8', value=1, name='ManyTypesU')
Check that a ctypes struct/union matches its C equivalent. ... FAIL
test_generated_data (test.test_ctypes.test_generated_structs.GeneratedTest.test_generated_data) (field='u8', value=0, name='ManyTypesU')
Check that a ctypes struct/union matches its C equivalent. ... FAIL
test_generated_data (test.test_ctypes.test_generated_structs.GeneratedTest.test_generated_data) (field='i16', value=-1, name='ManyTypesU')
Check that a ctypes struct/union matches its C equivalent. ... FAIL
test_generated_data (test.test_ctypes.test_generated_structs.GeneratedTest.test_generated_data) (field='i16', value=1, name='ManyTypesU')
Check that a ctypes struct/union matches its C equivalent. ... FAIL
test_generated_data (test.test_ctypes.test_generated_structs.GeneratedTest.test_generated_data) (field='i16', value=0, name='ManyTypesU')
Check that a ctypes struct/union matches its C equivalent. ... FAIL
test_generated_data (test.test_ctypes.test_generated_structs.GeneratedTest.test_generated_data) (field='u16', value=-1, name='ManyTypesU')
Check that a ctypes struct/union matches its C equivalent. ... FAIL
test_generated_data (test.test_ctypes.test_generated_structs.GeneratedTest.test_generated_data) (field='u16', value=1, name='ManyTypesU')
Check that a ctypes struct/union matches its C equivalent. ... FAIL
test_generated_data (test.test_ctypes.test_generated_structs.GeneratedTest.test_generated_data) (field='u16', value=0, name='ManyTypesU')
Check that a ctypes struct/union matches its C equivalent. ... FAIL
test_generated_data (test.test_ctypes.test_generated_structs.GeneratedTest.test_generated_data) (field='i32', value=-1, name='ManyTypesU')
Check that a ctypes struct/union matches its C equivalent. ... FAIL
test_generated_data (test.test_ctypes.test_generated_structs.GeneratedTest.test_generated_data) (field='i32', value=1, name='ManyTypesU')
Check that a ctypes struct/union matches its C equivalent. ... FAIL
test_generated_data (test.test_ctypes.test_generated_structs.GeneratedTest.test_generated_data) (field='i32', value=0, name='ManyTypesU')
Check that a ctypes struct/union matches its C equivalent. ... FAIL
test_generated_data (test.test_ctypes.test_generated_structs.GeneratedTest.test_generated_data) (field='u32', value=-1, name='ManyTypesU')
Check that a ctypes struct/union matches its C equivalent. ... FAIL
test_generated_data (test.test_ctypes.test_generated_structs.GeneratedTest.test_generated_data) (field='u32', value=1, name='ManyTypesU')
Check that a ctypes struct/union matches its C equivalent. ... FAIL
test_generated_data (test.test_ctypes.test_generated_structs.GeneratedTest.test_generated_data) (field='u32', value=0, name='ManyTypesU')
Check that a ctypes struct/union matches its C equivalent. ... FAIL
Example output:
FAIL: test_generated_data (test.test_ctypes.test_generated_structs.GeneratedTest.test_generated_data) (field='i8', value=-1, name='ManyTypesU')
Check that a ctypes struct/union matches its C equivalent.
----------------------------------------------------------------------
Traceback (most recent call last):
File "/builddir/build/BUILD/python3.14-3.14.0_a4-build/Python-3.14.0a4/Lib/test/test_ctypes/test_generated_structs.py", line 473, in test_generated_data
self.assertEqual(py_mem.hex(), c_mem.hex(), m)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 'ff00000000000000' != 'ff8f8783ffff0000'
- ff00000000000000
+ ff8f8783ffff0000
: <FieldInfo for ManyTypesU.i8: <Field type=c_int8, ofs=0, size=1>>
in:
union {
int8_t i8;
uint8_t u8;
int16_t i16;
uint16_t u16;
int32_t i32;
uint32_t u32;
int64_t i64;
uint64_t u64;
}
Ran 1 test in 0.178s
FAILED (failures=18, skipped=11)
```
### CPython versions tested on:
3.14
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-128944
<!-- /gh-linked-prs -->
| 13475e0a5a317fa61f302f030b0effcb021873d6 | 4b37a6bda236121c130b4a60e573f123cb5e4c58 |
python/cpython | python__cpython-128875 | # Building the documentation fails with newly released blurb 2.0
# Bug report
### Bug description:
Example of failure: https://github.com/python/cpython/actions/runs/12789465122/job/35652975974?pr=128868
```
blurb.blurb.BlurbError: Error in Misc/NEWS.d/3.11.0b1.rst:581:
'gh-issue:' or 'bpo:' must be specified in the metadata
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-128875
* gh-128877
* gh-128878
* gh-128879
* gh-128890
<!-- /gh-linked-prs -->
| 40a4d88a14c741172a158683c39d232c587c6f11 | 599be687ec7327c30c6469cf743aa4ee9e82232d |