| repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-136129 | # math.tau broken video link
# Documentation
https://docs.python.org/3/library/math.html#math.tau
attempts to link to a YouTube video "Pi is (still) Wrong" by Vi Hart.
However, they intentionally removed their videos from YouTube (see [their Patreon post](https://www.patreon.com/posts/vihart-channel-129842460) ).
A possible quick fix would be to replace the link with a link to what I think is the same video on Vimeo:
https://vimeo.com/147792667
<!-- gh-linked-prs -->
### Linked PRs
* gh-136129
* gh-136131
* gh-136132
<!-- /gh-linked-prs -->
| a87f3e02828cb4404053384dba18924dcace6596 | ee47670e8b8648b14fd4cb64a9d47d6ed3c5b6b7 |
python/cpython | python__cpython-136088 | # Outdated `os.linesep` docs in `os.py`
https://github.com/python/cpython/blob/5334732f9c8a44722e4b339f4bb837b5b0226991/Lib/os.py#L13
Docs claim that `\r` is a possible value for `os.linesep`; however, I can't find any code that actually sets it to `\r`.
In fact, it can only be:
- https://github.com/python/cpython/blob/5334732f9c8a44722e4b339f4bb837b5b0226991/Lib/os.py#L52-L54 for POSIX
- https://github.com/python/cpython/blob/5334732f9c8a44722e4b339f4bb837b5b0226991/Lib/os.py#L76-L78 on Windows
Docs:
```rst
.. data:: linesep
The string used to separate (or, rather, terminate) lines on the current
platform. This may be a single character, such as ``'\n'`` for POSIX, or
multiple characters, for example, ``'\r\n'`` for Windows. Do not use
*os.linesep* as a line terminator when writing files opened in text mode (the
default); use a single ``'\n'`` instead, on all platforms.
```
The internet says:
```
Windows: '\r\n'
Mac (OS 9-): '\r'
Mac (OS 10+): '\n'
Unix/Linux: '\n'
```
So, it looks like an outdated doc?
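A quick check confirms the only two values the current `os.py` code can actually set:

```python
import os

# os.py only ever sets linesep to '\n' (POSIX) or '\r\n' (Windows);
# the documented '\r' value dates back to classic Mac OS (OS 9 and earlier).
print(repr(os.linesep))
assert os.linesep in ("\n", "\r\n")
```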
<!-- gh-linked-prs -->
### Linked PRs
* gh-136088
* gh-136111
* gh-136112
<!-- /gh-linked-prs -->
| 980a56843bf631ea80c1486a367d41031dec6a7e | fb9f933b8eda6cdc1336582dc8709b759ced91af |
python/cpython | python__cpython-136069 | # Un-necessary loglinear complexity in `platform._platform`
**Bug Description:**
A simple piece of quadratic-complexity code in `platform.py`.
**Vulnerability Locations:**
1. https://github.com/python/cpython/blob/98a5b830d2463351800f4d76edba1a306a3e0ec9/Lib/platform.py#L642
**Repair Status:**
None
**Common Information:**
* **CPython Version:** main branch
* **Operating System:** Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-136069
<!-- /gh-linked-prs -->
| bd928a3035a214c38e8429da8f65f823ca28151b | 30ba03ea8ed98522b0500d6856b22727c88e818f |
python/cpython | python__cpython-136054 | # Memory Safety Issue in marshal.c TYPE_SLICE Case
# Bug report
### Bug description:
### Description
__Location:__ `Python/marshal.c`, function `r_object()`, `TYPE_SLICE` case
__Issue:__ The code didn't validate the return value of `r_ref_reserve()` before passing it to `r_ref_insert()`. If `r_ref_reserve()` fails and returns -1, this would cause an out-of-bounds memory access when `r_ref_insert()` tries to access `p->refs[-1]`.
__Root Cause:__ Inconsistent error handling compared to other similar cases in the same file (e.g., `TYPE_CODE` and `TYPE_FROZENSET` properly check for `r_ref_reserve()` failure).
### Impact
- __Security:__ Potential memory corruption vulnerability exploitable via crafted marshal data
- __Stability:__ Could cause crashes when deserializing slice objects in error conditions
- __Scope:__ Affects applications using the marshal module to deserialize untrusted data
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-136054
* gh-136092
<!-- /gh-linked-prs -->
| 30ba03ea8ed98522b0500d6856b22727c88e818f | f04d2b8819eb37d5439b7437f1e80a1e5c5c4f07 |
python/cpython | python__cpython-136133 | # argparse.BooleanOptionalAction documentation
# Documentation
Current [`argparse.BooleanOptionalAction`](https://docs.python.org/3/library/argparse.html#argparse.BooleanOptionalAction) documentation is:
>You may also specify an arbitrary action by passing an Action subclass or other object that implements the same interface. The BooleanOptionalAction is available in argparse and adds support for boolean actions such as --foo and --no-foo:
The sentence "You may also specify an arbitrary action by passing an Action subclass or other object that implements the same interface." should be outside of the `argparse.BooleanOptionalAction` section.
[ver 3.11](https://docs.python.org/3.11/library/argparse.html#action) did not have an `argparse.BooleanOptionalAction` section and this was fine, but the section was created in ver 3.12 including this sentence, which is not specific to `argparse.BooleanOptionalAction`.
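For reference, the behavior that the quoted section actually documents (the paired `--foo`/`--no-foo` flags):

```python
import argparse

parser = argparse.ArgumentParser()
# BooleanOptionalAction (added in 3.9) creates both --verbose and --no-verbose
parser.add_argument("--verbose", action=argparse.BooleanOptionalAction)

print(parser.parse_args(["--verbose"]))     # Namespace(verbose=True)
print(parser.parse_args(["--no-verbose"]))  # Namespace(verbose=False)
print(parser.parse_args([]))                # Namespace(verbose=None)
```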
<!-- gh-linked-prs -->
### Linked PRs
* gh-136133
* gh-136329
* gh-136330
<!-- /gh-linked-prs -->
| 1953713d0d67a4f54ff75bf8449895a2f08cc750 | 5dac137b9f75c5c1d5096101bcd33d565d0526e4 |
python/cpython | python__cpython-136029 | # strptime() fails to parse month names containing İ (U+0130)
# Bug report
### Bug description:
`strptime()` fails to parse month names containing "İ" (U+0130, LATIN CAPITAL LETTER I WITH DOT ABOVE) like "İyun" or "İyul". This affects locales 'az_AZ', 'ber_DZ', 'ber_MA' and 'crh_UA'.
This happens because `'İ'.lower() == 'i\u0307'`, but the `re` module only supports 1-to-1 character matching in case-insensitive mode.
There are several ways to fix this:
1. Do not convert month names (and any other names) to lower case in `_strptime.LocaleTime`. This is a large change and it would make converting names to indices more complex and/or slower. It is a universal way which would work with any other letters that are converted to multiple characters in lower case (though 'İ' is currently the only such case in Linux locales).
2. Just fix the regular expressions created from lower-cased month names. This only works for 'İ' in month names, but this is all that is needed for now.
The third way -- converting the input string to lower case before parsing -- does not work, because `Z` for timezone is case-sensitive.
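The one-to-many lowercasing at the root of this can be verified directly:

```python
# U+0130 LATIN CAPITAL LETTER I WITH DOT ABOVE lowercases to TWO
# characters: 'i' followed by U+0307 COMBINING DOT ABOVE.
s = "\u0130"      # 'İ'
print(s.lower())
assert s.lower() == "i\u0307"
assert len(s.lower()) == 2
```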
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-136029
* gh-136037
* gh-136038
<!-- /gh-linked-prs -->
| 731f5b8ab3970e344bfbc4ff86df767a0795f0fc | de0d014815667982c683adb2b2cc16ae2bfb3c82 |
python/cpython | python__cpython-136332 | # Decide the fate of `type_params` in `_eval_type` and `_evaluate`
# Feature or enhancement
Currently there are two places in our typing-related code that raise a `DeprecationWarning` when `type_params` parameter is not provided:
https://github.com/python/cpython/blob/07183ebce36462aaaea4d20e0502b20821dd2682/Lib/typing.py#L432-L442
https://github.com/python/cpython/blob/07183ebce36462aaaea4d20e0502b20821dd2682/Lib/annotationlib.py#L213-L221
The error message of this function says that the behavior will change in 3.15 to disallow this: https://github.com/python/cpython/blob/07183ebce36462aaaea4d20e0502b20821dd2682/Lib/typing.py#L410-L420
So, we have several options:
1. Adding `TypeError('type_params parameter is not provided')` when `type_params is _sentinel`
2. Changing `type_params` to not contain a default value
3. Do nothing?
CC @JelleZijlstra @AlexWaygood
I would like to work on this after we decide the fate of these functions :)
<!-- gh-linked-prs -->
### Linked PRs
* gh-136332
<!-- /gh-linked-prs -->
| c89f76e6c4ca9b0200d5cc8cf0a675a76de50ba8 | 77a8bd29da99e7d4fa8e7f07c4063977c2bb14d3 |
python/cpython | python__cpython-136018 | # Avoid decref in PyObject_RichCompareBool
# Feature or enhancement
### Proposal:
In `PyObject_RichCompareBool` there is a fast path for the common case where `PyObject_RichCompare` returns a `bool`. In that path there is no need to decref the result, since `bool` values are immortal.
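The premise that `bool` values are immortal (and so need no decref) can be observed from Python on a 3.12+ build:

```python
import sys

# On CPython 3.12+ (PEP 683), True and False are immortal: their
# reported refcount is a huge sentinel value, and taking a thousand
# extra references does not change it.
before = sys.getrefcount(True)
refs = [True] * 1000
after = sys.getrefcount(True)
print(before, after)
if sys.version_info >= (3, 12):
    assert before == after  # immortal: refcount never moves
```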
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-136018
<!-- /gh-linked-prs -->
| e23518fa96583d0190d457adb807b19545df26cf | 07183ebce36462aaaea4d20e0502b20821dd2682 |
python/cpython | python__cpython-135990 | # Missing char in palmos encoding
# Bug report
### Bug description:
When using the `palmos` encoding, 0x8b correctly encodes to ‹, but 0x9b was mistakenly marked as a control character instead of ›. You can see the correct glyphs in this screenshot of Palm OS 3.5, or on [this page](https://web.archive.org/web/20250429234603/https://www.sealiesoftware.com/palmfonts/):

### CPython versions tested on:
3.9, 3.12
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-135990
* gh-136001
* gh-136002
<!-- /gh-linked-prs -->
| 58a42dea97f4fa0df38ef4a95a2ede65e0549f71 | 642e5dfc74310d15bb81f8e94167590380a5fbfb |
python/cpython | python__cpython-135970 | # Provide iOS stub for `strip`
# Feature or enhancement
### Proposal:
CPython provides wrappers for `clang`, `clang++`, `cpp` and `ar`. These wrappers provide "single word binaries" for tools that are commonly discovered by C build systems.
Another tool that is commonly needed is `strip`.
`xcrun` can satisfy a `strip` lookup; we should provide a stub for `strip` as well.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-135970
* gh-136014
* gh-136015
<!-- /gh-linked-prs -->
| 0c6c09b7377e10dcf80844c961b578fbdc6f5375 | 2fc68e180ffdb31886938203e89a75b220a58cec |
python/cpython | python__cpython-135967 | # iOS testbed doesn't honor .pth files
# Bug report
### Bug description:
When running tests through the iOS testbed, `.pth` files that are installed by packages (such as the setuptools "distutils_hack") will not be processed.
This surfaced during testing of an iOS patch for Pillow, which uses pyroma; pyroma depends on distutils; but because the .pth file installed by setuptools isn't processed, distutils can't be imported, and so neither can pyroma.
### CPython versions tested on:
3.13
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-135967
* gh-136012
* gh-136013
<!-- /gh-linked-prs -->
| b38810bab76c11ea09260a817b3354aebc2af580 | 34ce1920ca33c11ca2c379ed0ef30a91010bef4f |
python/cpython | python__cpython-135964 | # typo in /doc/howto/isolating-extensions
line 456:
```
If you use use :c:func:`PyObject_New` or :c:func:`PyObject_NewVar`:
```
it should be like:
```
If you use :c:func:`PyObject_New` or :c:func:`PyObject_NewVar`:
```
Similar issue to #135958 and #135956. I'm sure this typo has not been reported before.
<!-- gh-linked-prs -->
### Linked PRs
* gh-135964
* gh-135977
* gh-135978
<!-- /gh-linked-prs -->
| ffb2a02f98d904505c8a82d8540c36dee4c67eed | 6be17baeb5bcfc78f0b7fcfe5221df0744c865e8 |
python/cpython | python__cpython-135957 | # datetime.isoformat() docstring has a typo
# Documentation
In 3.13 cpython/Lib/_pydatetime.py lines 2046-2047, linked below, the docstring has a typo: it repeats the word "giving".
https://github.com/python/cpython/blob/c148a68efe54b4452cfec6a65205b83bff5fefb3/Lib/_pydatetime.py#L2046-L2047
<!-- gh-linked-prs -->
### Linked PRs
* gh-135957
* gh-135962
* gh-135963
<!-- /gh-linked-prs -->
| e3ea6f2b3b084700a34ce392f5cf897407469b3a | 1f5e23fd7015a8f7b14d0181ec83efa95c5d5b68 |
python/cpython | python__cpython-135935 | # CPython fails to build on MSVC with latest C standard flag
# Bug report
### Bug description:
When passing `/std:clatest` to MSVC, the current CPython fails to build.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-135935
* gh-135987
<!-- /gh-linked-prs -->
| a88b49c3f2184428124890741778e92c0fdb6446 | d2154912b3b10823c138e904e74f2a1e7e7ca96c |
python/cpython | python__cpython-135914 | # Document ob_refcnt, ob_type, ob_size
The fields of `PyObject` and `PyVarObject` are only documented in [static type object docs](https://docs.python.org/3/c-api/typeobj.html#pyobject-slots). They should have entries under their own types.
(Their use is discouraged. That makes proper docs all the more important.)
<!-- gh-linked-prs -->
### Linked PRs
* gh-135914
* gh-136377
<!-- /gh-linked-prs -->
| 73e1207a4ebdb3b43d597cd6c288dae6d7d1dbdb | 11f074b243756bca0db5a7d35dd84f00879de616 |
python/cpython | python__cpython-136009 | # Free threaded builds can double dealloc if .tp_dealloc sleeps
# Bug report
### Bug description:
Hi,
I've been slowly building/debugging my dumb locking idea, and I think I've hit a bug in upstream python.
My understanding of how the GC detects when objects are in use by C is that it counts the number of references to an object, and subtracts that from the reference count fields. This works, except when C is holding onto an object with a reference count of 0. That is exactly the state of the object when it is passed to `.tp_dealloc`.
Various parts of gc_free_threading.c increment the reference counter while objects are on various queues. However (due to the paragraph above), even though the world is stopped, this is not a safe operation if the reference count is 0.
Order of events:
- The object's last reference is deleted; the reference count becomes 0.
- `.tp_dealloc` is called.
- `.tp_dealloc` goes to sleep (e.g. on a lock) before freeing the object.
- The GC is called from another thread.
- The world is stopped and the GC runs.
- The GC sees that the object has no incoming references and:
  - increments the reference count to 1,
  - pushes it onto a "to dealloc" list.
- The world is started.
- The GC thread decrements the reference count back to 0, and calls `.tp_dealloc` a second time.

This may show up as `.tp_dealloc` seeing an object's reference count unexpectedly going to 1. (Which, for example, breaks `_PyObject_ResurrectStart`.)
I ended up writing a reproducer to prove it wasn't my code breaking things. (Sorry, it is a bit horrid, but it does trigger the issue reliably on my machine):
```c
#include <Python.h>
typedef struct {
PyObject_HEAD
int dealloced;
} pyAtest;
static int gc_sync = 0;
static PyMutex lock = {0};
static void
yield(void)
{
#ifdef MS_WINDOWS
SwitchToThread();
#elif defined(HAVE_SCHED_H)
sched_yield();
#endif
}
static void
Atest_dealloc(PyObject *op) {
// RC thread
int val;
pyAtest* self = (pyAtest*) op;
// Check we haven't already been deallocated.
assert(!self->dealloced);
self->dealloced=1;
// Wait for collector to be ready.
Py_BEGIN_ALLOW_THREADS;
do {
val = 1;
yield();
} while (!_Py_atomic_compare_exchange_int(&gc_sync, &val, 2));
Py_END_ALLOW_THREADS;
// Collector holds the lock. Wait for the collector to start the GC
while (PyThreadState_Get()->eval_breaker == 0) {
yield();
}
// Now go to sleep so the GC can run
PyMutex_Lock(&lock);
PyMutex_Unlock(&lock);
PyObject_GC_UnTrack(op);
PyObject_GC_Del(op);
}
static PyObject*
atest_gc(PyObject *self, PyObject *args)
{
// GC thread
int val;
// We pick up the lock first
PyMutex_Lock(&lock);
// Sync the RC thread
Py_BEGIN_ALLOW_THREADS;
_Py_atomic_store_int(&gc_sync, 1);
do {
val = 2;
yield();
} while (!_Py_atomic_compare_exchange_int(&gc_sync, &val, 0));
Py_END_ALLOW_THREADS;
// the stop_the_world inside the PyGC_Collect() will wait till the RC
// thread is on PyMutex_Lock() before doing the GC.
PyGC_Collect();
PyMutex_Unlock(&lock);
Py_RETURN_NONE;
}
// All the boring stuff
static PyObject*
Atest_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
return type->tp_alloc(type, 0);
}
static int
Atest_traverse(PyObject *self, visitproc visit, void *arg)
{
return 0;
}
static PyTypeObject AtestObjectType = {
PyVarObject_HEAD_INIT(NULL, 0)
.tp_name = "atest.Atest",
.tp_basicsize = sizeof(pyAtest),
.tp_itemsize = 0,
.tp_new = Atest_new,
.tp_dealloc = (destructor)Atest_dealloc,
.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,
.tp_traverse = Atest_traverse,
.tp_doc = "a test",
};
static int
atest_module_exec(PyObject *m)
{
if (PyType_Ready(&AtestObjectType) < 0) {
return -1;
}
if (PyModule_AddObjectRef(m, "ATest", (PyObject *) &AtestObjectType) < 0) {
return -1;
}
return 0;
}
static PyModuleDef_Slot atest_slots[] = {
{Py_mod_exec, atest_module_exec},
{Py_mod_gil, Py_MOD_GIL_NOT_USED},
{0, NULL},
};
static PyMethodDef atest_methods[] = {
{"gc", atest_gc, METH_VARARGS, "issue gc"},
{NULL, NULL, 0, NULL},
};
static struct PyModuleDef atest_module = {
.m_base = PyModuleDef_HEAD_INIT,
.m_name = "atest",
.m_size = 0,
.m_slots = atest_slots,
.m_methods = atest_methods,
};
PyMODINIT_FUNC
PyInit_atest(void)
{
return PyModuleDef_Init(&atest_module);
}
```
```python
import atest
import threading
t = threading.Thread(target=atest.gc)
t.start()
# Create and destroy an ATest
atest.ATest()
t.join()
```
```bash
$ ./python test.py
python: ./Modules/atest.c:27: Atest_dealloc: Assertion `!self->dealloced' failed.
Aborted (core dumped)
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-136009
<!-- /gh-linked-prs -->
| 2500eb96b260b05387d4ab1063fcfafebf37f1a4 | be02e68158aee4d70f15baa1d8329df2c35a57f2 |
python/cpython | python__cpython-135889 | # Segfault by calling `repr(SimpleNamespace)` with `typing.Union` attributes in threads on a free-threading build
# Crash report
### What happened?
It's possible to segfault or abort the interpreter of a free-threading build by calling `repr(SimpleNamespace)` in threads when the instance has `typing.Union` attributes. This seems very similar to https://github.com/python/cpython/issues/132713.
The MRE will rarely hang and is non-deterministic, but is the best I was able to come up with.
MRE:
```python
from functools import reduce
from operator import or_
from threading import Thread
from types import SimpleNamespace
all_types = []
for x in range(400):
class Dummy: pass
all_types.append(Dummy)
big_union = reduce(or_, all_types, int)
for x in range(500):
s = SimpleNamespace()
def stress_simplenamespace():
for x in range(20):
varying_union = big_union | tuple[int, ...] | list[str, ...] | dict[str, int]
repr(s)
attr_name = f"t_attr_{x}"
setattr(s, attr_name, varying_union)
repr(s)
repr(s)
repr(s)
alive = [Thread(target=stress_simplenamespace) for x in range(8)]
for t in alive:
t.start()
for t in alive:
t.join()
```
Segfault backtrace 1:
```shell
Thread 7 "Thread-6 (stres" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffff541b640 (LWP 348490)]
0x000055555567cdcd in setup_ga (args=<unknown at remote 0x7ffff5c20640>, origin=<type at remote 0x555555b11d40>, alias=0x2276a0e0140) at Objects/genericaliasobject.c:837
837 if (!PyTuple_Check(args)) {
#0 0x000055555567cdcd in setup_ga (args=<unknown at remote 0x7ffff5c20640>, origin=<type at remote 0x555555b11d40>, alias=0x2276a0e0140)
at Objects/genericaliasobject.c:837
#1 Py_GenericAlias (origin=<type at remote 0x555555b11d40>, args=<unknown at remote 0x7ffff5c20640>) at Objects/genericaliasobject.c:1015
#2 0x0000555555658dd5 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775809, args=0x7ffff541a428,
callable=<built-in method __class_getitem__ of type object at remote 0x555555b11d40>, tstate=0x555555c3c6e0)
at ./Include/internal/pycore_call.h:169
#3 PyObject_CallOneArg (func=<built-in method __class_getitem__ of type object at remote 0x555555b11d40>,
arg=arg@entry=<unknown at remote 0x7ffff5c20640>) at Objects/call.c:395
#4 0x0000555555634781 in PyObject_GetItem (o=<type at remote 0x555555b11d40>, key=<unknown at remote 0x7ffff5c20640>) at Objects/abstract.c:193
#5 0x00005555555e2f02 in _PyEval_EvalFrameDefault (tstate=<optimized out>, frame=<optimized out>, throwflag=<optimized out>)
at Python/generated_cases.c.h:62
#6 0x00005555557dd13e in _PyEval_EvalFrame (throwflag=0, frame=<optimized out>, tstate=0x555555c3c6e0) at ./Include/internal/pycore_ceval.h:119
#7 _PyEval_Vector (tstate=0x555555c3c6e0, func=0x2275e37fd80, locals=0x0, args=0x7ffff541a8a8, argcount=<optimized out>, kwnames=<optimized out>)
at Python/ceval.c:1975
```
Segfault backtrace 2:
```shell
Thread 3 "Thread-2 (stres" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffff742f640 (LWP 343364)]
clear_freelist (dofree=<optimized out>, is_finalization=<optimized out>, freelist=<optimized out>) at Objects/object.c:904
904 dofree(ptr);
#0 clear_freelist (dofree=<optimized out>, is_finalization=<optimized out>, freelist=<optimized out>) at Objects/object.c:904
#1 _PyObject_ClearFreeLists (freelists=0x555555c30948, is_finalization=is_finalization@entry=1) at Objects/object.c:929
#2 0x000055555585c521 in PyThreadState_Clear (tstate=tstate@entry=0x555555c2cf60) at Python/pystate.c:1703
#3 0x0000555555904a10 in thread_run (boot_raw=0x555555c194a0) at ./Modules/_threadmodule.c:390
#4 0x000055555587e27b in pythread_wrapper (arg=<optimized out>) at Python/thread_pthread.h:232
#5 0x00007ffff7d31ac3 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#6 0x00007ffff7dc3850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
```
A backtrace for a segfault I got with a variant of this code, seems more informative but I cannot repro with the MRE:
```shell
Thread 185 "asyncio_4" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0xffffe31ff060 (LWP 1527140)]
0x0000000000867ce8 in _Py_TYPE (ob=<unknown at remote 0xdddddddddddddddd>) at ./Include/object.h:270
270 return ob->ob_type;
#0 0x0000000000867ce8 in _Py_TYPE (ob=<unknown at remote 0xdddddddddddddddd>) at ./Include/object.h:270
#1 union_repr (self=<unknown at remote 0xffffa425a9d0>) at Objects/unionobject.c:296
#2 0x00000000006e61a8 in PyObject_Repr (v=<unknown at remote 0xffffa425a9d0>) at Objects/object.c:779
#3 0x000000000082ada0 in unicode_fromformat_arg (writer=writer@entry=0xffffe31fb4e0, f=0xd88224 "R", f@entry=0xd88223 "%R", vargs=vargs@entry=0xffffe31fb410)
at Objects/unicodeobject.c:3101
#4 0x000000000082b33c in unicode_from_format (writer=writer@entry=0xffffe31fb4e0, format=format@entry=0xd88220 "%U=%R",
vargs=<error reading variable: Cannot access memory at address 0x1>) at Objects/unicodeobject.c:3220
#5 0x000000000082b68c in PyUnicode_FromFormatV (format=format@entry=0xd88220 "%U=%R", vargs=...) at Objects/unicodeobject.c:3254
#6 0x000000000082b828 in PyUnicode_FromFormat (format=format@entry=0xd88220 "%U=%R") at Objects/unicodeobject.c:3268
#7 0x00000000006e1038 in namespace_repr (ns=<types.SimpleNamespace at remote 0xffffa48e3c60>) at Objects/namespaceobject.c:129
#8 0x000000000075e894 in object_str (self=self@entry=<types.SimpleNamespace at remote 0xffffa48e3c60>) at Objects/typeobject.c:7152
#9 0x0000000000768d18 in wrap_unaryfunc (self=<types.SimpleNamespace at remote 0xffffa48e3c60>, args=<optimized out>, wrapped=0x75e854 <object_str>)
at Objects/typeobject.c:9536
#10 0x00000000005d8ee0 in wrapperdescr_raw_call (kwds=<optimized out>, args=<optimized out>, self=<optimized out>, descr=<optimized out>) at Objects/descrobject.c:532
#11 wrapper_call (self=<optimized out>, args=<optimized out>, kwds=<optimized out>) at Objects/descrobject.c:1437
#12 0x00000000005ab38c in _PyObject_MakeTpCall (tstate=tstate@entry=0xffff48937210,
callable=callable@entry=<method-wrapper '__str__' of types.SimpleNamespace object at 0xffffa48e3c60>, args=args@entry=0xffffe31fbe58, nargs=nargs@entry=0,
keywords=keywords@entry=0x0) at Objects/call.c:242
#13 0x00000000005aba68 in _PyObject_VectorcallTstate (tstate=0xffff48937210, callable=<method-wrapper '__str__' of types.SimpleNamespace object at 0xffffa48e3c60>,
args=0xffffe31fbe58, nargsf=9223372036854775808, kwnames=0x0) at ./Include/internal/pycore_call.h:167
#14 0x00000000005aba98 in PyObject_Vectorcall (callable=callable@entry=<method-wrapper '__str__' of types.SimpleNamespace object at 0xffffa48e3c60>,
args=args@entry=0x0, nargsf=nargsf@entry=9223372036854775808, kwnames=kwnames@entry=0x0) at Objects/call.c:327
```
Abort backtrace:
```shell
*** stack smashing detected ***: terminated
Thread 7 "Thread-6 (stres" received signal SIGABRT, Aborted.
[Switching to Thread 0x7ffff541b640 (LWP 351627)]
__pthread_kill_implementation (no_tid=0, signo=6, threadid=0) at ./nptl/pthread_kill.c:44
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=0) at ./nptl/pthread_kill.c:44
#1 __pthread_kill_internal (signo=6, threadid=0) at ./nptl/pthread_kill.c:78
#2 __GI___pthread_kill (threadid=0, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3 0x00007ffff7cdf476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4 0x00007ffff7cc57f3 in __GI_abort () at ./stdlib/abort.c:79
#5 0x00007ffff7d26677 in __libc_message (action=action@entry=do_abort, fmt=fmt@entry=0x7ffff7e7892e "*** %s ***: terminated\n")
at ../sysdeps/posix/libc_fatal.c:156
#6 0x00007ffff7dd359a in __GI___fortify_fail (msg=msg@entry=0x7ffff7e78916 "stack smashing detected") at ./debug/fortify_fail.c:26
#7 0x00007ffff7dd3566 in __stack_chk_fail () at ./debug/stack_chk_fail.c:24
#8 0x000055555572321d in _Py_type_getattro_impl (type=<optimized out>, name=<optimized out>,
suppress_missing_attribute=suppress_missing_attribute@entry=0x7ffff541a194) at Objects/typeobject.c:6355
#9 0x00005555556db432 in PyObject_GetOptionalAttr (v=v@entry=<type at remote 0x3b2b2e06800>, name=<optimized out>,
result=result@entry=0x7ffff541a1d0) at Objects/object.c:1345
#10 0x000055555572ea38 in _Py_typing_type_repr (writer=writer@entry=0x3b2be090e80, p=<type at remote 0x3b2b2e06800>) at Objects/typevarobject.c:295
#11 0x000055555577ad6e in union_repr (self=<typing.Union at remote 0x3b2c20e0820>) at Objects/unionobject.c:297
#12 0x00005555556d80db in PyObject_Repr (v=<typing.Union at remote 0x3b2c20e0820>) at ./Include/object.h:277
#13 PyObject_Repr (v=<typing.Union at remote 0x3b2c20e0820>) at Objects/object.c:754
#14 0x0000555555765835 in unicode_fromformat_arg (vargs=0x7ffff541a278, f=0x555555937b12 "R", writer=0x7ffff541a300)
at Objects/unicodeobject.c:3101
#15 unicode_from_format (writer=writer@entry=0x7ffff541a300, format=format@entry=0x555555937b0e "%U=%R", vargs=vargs@entry=0x7ffff541a340)
at Objects/unicodeobject.c:3220
#16 0x000055555576fadf in PyUnicode_FromFormatV (vargs=0x7ffff541a340, format=0x555555937b0e "%U=%R") at Objects/unicodeobject.c:3254
#17 PyUnicode_FromFormat (format=format@entry=0x555555937b0e "%U=%R") at Objects/unicodeobject.c:3268
#18 0x00005555556d5802 in namespace_repr (ns=<types.SimpleNamespace at remote 0x3b2b2152310>) at Objects/namespaceobject.c:129
#19 0x00005555556d80db in PyObject_Repr (v=<types.SimpleNamespace at remote 0x3b2b2152310>) at ./Include/object.h:277
#20 PyObject_Repr (v=<types.SimpleNamespace at remote 0x3b2b2152310>) at Objects/object.c:754
#21 0x00005555555e4883 in _PyEval_EvalFrameDefault (tstate=<optimized out>, frame=<optimized out>, throwflag=<optimized out>)
at Python/generated_cases.c.h:2487
#22 0x00005555557dd13e in _PyEval_EvalFrame (throwflag=0, frame=<optimized out>, tstate=0x555555c3c6e0) at ./Include/internal/pycore_ceval.h:119
#23 _PyEval_Vector (tstate=0x555555c3c6e0, func=0x3b2b237fd80, locals=0x0, args=0x7ffff541a8a8, argcount=<optimized out>, kwnames=<optimized out>)
at Python/ceval.c:1975
```
Found using [fusil](https://github.com/devdanzin/fusil) by @vstinner.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.15.0a0 experimental free-threading build (heads/main:b706ff003c5, Jun 11 2025, 19:34:04) [GCC 11.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-135889
* gh-135895
* gh-135896
<!-- /gh-linked-prs -->
| b3ab94acd308591bbdf264f1722fedc7ee25d6fa | e5f03b94b6d4decbf433d385f692c1b8d9b7e88d |
python/cpython | python__cpython-135856 | # `_interpreters.set___main___attrs` raises SystemError when passed a non-dict object
# Bug report
### Bug description:
This snippet:
```python
import _interpreters
import types
i = _interpreters.create()
_interpreters.set___main___attrs(i, types.MappingProxyType({"a": 1}))
```
raises:
```python-traceback
Traceback (most recent call last):
File "/home/brian/Projects/open-contrib/cpython/temp.py", line 27, in <module>
_interpreters.set___main___attrs(i, types.MappingProxyType({"a": 1}))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SystemError: Objects/dictobject.c:4162: bad argument to internal function
```
This happens during [this call](https://github.com/python/cpython/blob/b3ae76911df465ade76c267c5fa2a02cf1138b48/Modules/_interpretersmodule.c#L1075) to `_PyXI_Enter` which calls `PyDict_Size` on `updates`.
The [public interface](https://github.com/python/cpython/blob/b3ae76911df465ade76c267c5fa2a02cf1138b48/Lib/concurrent/interpreters/__init__.py#L193-L194) always passes a `dict`, and other paths that call `_PyXI_Enter` check for an actual dict (e.g., [`exec`](https://github.com/python/cpython/blob/b3ae76911df465ade76c267c5fa2a02cf1138b48/Modules/_interpretersmodule.c#L1123-L1126)). I think it makes sense to do the same thing here. PR forthcoming.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-135856
* gh-135900
* gh-135903
<!-- /gh-linked-prs -->
| 4e6f0d116e9725efb2d8eb6c3274aa3c024a6fb4 | fea5ccc55d8486300beb1d0254da030a4da10394 |
python/cpython | python__cpython-135877 | # Decide the fate of missing C99 functions in `math`
# Feature or enhancement
### Proposal:
I will first post here because I don't want to flood DPO with a controversial thread. I didn't dig much in the issues so it might have already been discussed a bit more in detail elsewhere.
There are some functions in `math.h` that we don't export in `math` as some are useless for Python or just forgotten. We recently added `isnormal` and `issubnormal` but the following are missing:
#### [`scalbn`](https://www.cppreference.com/w/c/numeric/math/scalbn.html)
It's only useful if FLT_RADIX != 2. Python only supports FLT_RADIX = 2 or 16 but I don't know how many systems have FLT_RADIX = 16. So I don't think we need it for now. [`logb`](https://www.cppreference.com/w/c/numeric/math/logb.html) also depends on FLT_RADIX but I think we can ignore it if we don't add `scalbn`.
**DECISION**: let's not add it as numpy doesn't have it and it has a confusing name (at least to me).
#### [`fdim`](https://www.cppreference.com/w/c/numeric/math/fdim.html)
This one could be useful to handle possible signed zeros when doing fp arithmetic. Similarly, `math` currently lacks `fmax` and `fmin`, which could be useful if we support IEC 60559, because `max(nan, inf) == nan` while `max(inf, nan) == inf`, which is incorrect (in both cases `fmax` would have returned `inf` on such systems).
**DECISION**: numpy has `fmax` and `fmin`, which treat NaNs correctly, but not `fdim`. For now, let's not add it.
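The order-dependent `max()` behavior described above is easy to reproduce: `max()` keeps its current best when a comparison against NaN is False, so the result depends on argument order.

```python
import math

nan, inf = float("nan"), float("inf")
# Comparisons with NaN are always False, so max() never replaces
# its first argument when the second comparison involves NaN:
print(max(nan, inf))  # nan
print(max(inf, nan))  # inf
assert math.isnan(max(nan, inf))
assert max(inf, nan) == inf
```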
#### [`remquo`](https://www.cppreference.com/w/c/numeric/math/remquo.html)
This one is useful for implementing $t$-periodic functions for $t$ exactly representable as an fp value. I don't know whether users actually need to implement such functions in Python, but I think we can expose it.
**DECISION**: numpy doesn't have it and while it could be fine, it's actually really niche.
#### [`fpclassify`](https://www.cppreference.com/w/c/numeric/math/fpclassify.html)
Now that we have normal and subnormal detection, I suggest we also add this function. This would however require new constants which may or may not be adequate as this would make the module a bit heavier. It was asked in https://github.com/python/cpython/issues/132908#issuecomment-2834032127 but we didn't follow up with that later.
**DECISION**: I would personally like it, but it would be more for completeness than usefulness, and numpy doesn't have it, so I won't push for it.
#### [`isunordered`](https://www.cppreference.com/w/c/numeric/math/isunordered.html)
Can be useful to detect if either operand is a NaN. It can save some `math.isnan(x) or math.isnan(y)` expression.
**DECISION**: it is useful but a better named helper in pure Python would do the job.
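The pure-Python helper the decision refers to is a one-liner:

```python
import math

def isunordered(x, y):
    # True when the operands do not compare, i.e. at least one is a NaN.
    return math.isnan(x) or math.isnan(y)
```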
#### [`signbit`](https://en.cppreference.com/w/c/numeric/math/signbit)
Useful for getting the sign of signed zeroes, infinities and NaN.
**DECISION**: I was wrong in my original statement, as Mark pointed out, but this one seems the most useful, so I'll work on it tomorrow.
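Until a `math.signbit()` lands, `math.copysign()` can serve as a stand-in; it exposes the sign bit that plain comparisons cannot see:

```python
import math

def signbit(x):
    # True iff the sign bit of x is set; distinguishes -0.0 from 0.0,
    # which ordinary comparison cannot (-0.0 == 0.0 is True).
    return math.copysign(1.0, x) < 0.0
```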
-----
I omitted some functions such as `isless` as they are directly implemented in Python and I don't think we should do something here. I also omitted rounding functions that depend on the rounding mode as those should be handled by `Decimal` instead.
Finally, it's unfortunate that we named `math.gamma` instead of `math.tgamma` as `math.gamma` could have been confused with Euler-Mascheroni's constant.
cc @skirpichev @tim-one
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-135877
* gh-135888
<!-- /gh-linked-prs -->
| 42ccac2d7f29f6befed46ca7a131de82901dcbcf | ff7b5d44a00acfa681afd8aea70abc66ea536723 |
python/cpython | python__cpython-136253 | # Support zstd on Android
# Feature or enhancement
As seen on [the buildbot](https://buildbot.python.org/?#/builders/1594/builds/2712/steps/6/logs/stdio), it's currently being omitted:
```
checking for libzstd >= 1.4.5... no
```
In the current build system, this will require the following:
* Add a libzstd release in https://github.com/beeware/cpython-android-source-deps.
* Update android.py to download it before running `configure`.
Related:
* https://github.com/beeware/cpython-apple-source-deps/issues/60
<!-- gh-linked-prs -->
### Linked PRs
* gh-136253
* gh-136491
<!-- /gh-linked-prs -->
| 61dd9fdad729fe02d91c03804659f7d0c5a89276 | ea45a2f97cb1d4774a6f88e63c6ce0a487f83031 |
python/cpython | python__cpython-135840 | # `traverse_module_state` can return `int`, do not use `(void)` for it
# Bug report
`traverse_module_state` can in theory return non-zero code:
https://github.com/python/cpython/blob/0d9d48959e050b66cb37a333940ebf4dc2a74e15/Modules/_interpchannelsmodule.c#L283-L302
because `PY_VISIT` can return it: https://github.com/python/cpython/blob/0d9d48959e050b66cb37a333940ebf4dc2a74e15/Include/objimpl.h#L193-L200
So, using it like this: https://github.com/python/cpython/blob/0d9d48959e050b66cb37a333940ebf4dc2a74e15/Modules/_interpchannelsmodule.c#L3627-L3629
Is also not really correct. I will send a PR with the fix.
<!-- gh-linked-prs -->
### Linked PRs
* gh-135840
* gh-135918
* gh-135919
* gh-135937
* gh-135939
* gh-135943
<!-- /gh-linked-prs -->
| dd59c786cfb1018eb5abe877bfa7265ea9a3c2b9 | ca87a47b3d92aabaefbbe79c0493d66602184b41 |
python/cpython | python__cpython-135845 | # IndexError when calling asyncio.open_connection
# Bug report
### Bug description:
I unfortunately don't have a minimal reproducible example, as it happens in a complex system under high concurrent load, but I get the following error after calling: `await asyncio.open_connection(host, port)`
```
File "/usr/local/lib/python3.13/asyncio/base_events.py", line 1169, in create_connection
model = str(exceptions[0])
~~~~~~~^^^
IndexError: list index out of range
```
Whatever the issue is, this is not proper error handling on Python's part.
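Reduced to its essence, the failure mode is indexing an empty exception list; a defensive variant (hypothetical, just to illustrate the shape of a fix) checks emptiness first:

```python
# The per-address exception list can end up empty under high concurrency,
# and indexing it then raises IndexError instead of a meaningful error.
exceptions = []

got_index_error = False
try:
    model = str(exceptions[0])
except IndexError:
    got_index_error = True

# Hypothetical defensive rewrite: check before indexing.
message = str(exceptions[0]) if exceptions else "no connection attempt succeeded"
```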
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-135845
* gh-135874
* gh-135875
* gh-136167
* gh-136168
* gh-136221
* gh-136222
<!-- /gh-linked-prs -->
| 0e19db653dfa1a6e750e9cede1f6922e5fd1e808 | 31b56df3bb3c5e3ced10a0ad1eaed43e75f36f22 |
python/cpython | python__cpython-135827 | # Improve `netrc` security check error message
# Feature or enhancement
### Proposal:
When `netrc` security check fails due to ownership issues, we display:
https://github.com/python/cpython/blob/e7295a89b85df69f5d6a60ed50f78e1d8e636434/Lib/netrc.py#L157-L159
The message is a bit misleading, so we should improve it.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
https://github.com/python/cpython/pull/135816/files#r2160393000
<!-- gh-linked-prs -->
### Linked PRs
* gh-135827
<!-- /gh-linked-prs -->
| 396ca9a641429e59e09d3a3c28dc85623197797f | 6aa0826ed7688e5f40742cdcaf57420b284e194f |
python/cpython | python__cpython-135816 | # `netrc` security check may fail on WASI due to the lack of `os.getuid`
# Bug report
### Bug description:
Found as part of https://github.com/python/cpython/pull/135802.
For instance:
```py
File "/Lib/test/test_netrc.py", line 69, in use_default_netrc_in_home
return netrc.netrc()
~~~~~~~~~~~^^
File "/Lib/netrc.py", line 76, in __init__
self._parse(file, fp, default_netrc)
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Lib/netrc.py", line 143, in _parse
self._security_check(fp, default_netrc, self.hosts[entryname][0])
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Lib/netrc.py", line 148, in _security_check
if prop.st_uid != os.getuid():
^^^^^^^^^
AttributeError: module 'os' has no attribute 'getuid'. Did you mean: 'getpid'?
```
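A hypothetical guard for the security check (the helper name is illustrative, not the actual fix): skip the ownership comparison on platforms that lack `os.getuid()` rather than crash with `AttributeError`:

```python
import os

def owner_matches(st_uid):
    # On platforms without os.getuid() (e.g. WASI, Windows), ownership
    # cannot be checked, so treat the check as passing instead of raising.
    getuid = getattr(os, "getuid", None)
    if getuid is None:
        return True
    return st_uid == getuid()
```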
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-135816
* gh-135825
* gh-135826
<!-- /gh-linked-prs -->
| b57b619e34cdfc87b47943c988b0b4d69f8f1fe4 | e7295a89b85df69f5d6a60ed50f78e1d8e636434 |
python/cpython | python__cpython-135868 | # -X tlbc undocumented in 3.14 and 3.15 command line options
Windows 10 + Python 3.14 `py.exe --help-xoptions` lists `-X tlbc`, but I wonder why it's not documented at https://docs.python.org/3.14/using/cmdline.html#cmdoption-X.
<!-- gh-linked-prs -->
### Linked PRs
* gh-135868
* gh-135897
<!-- /gh-linked-prs -->
| fea5ccc55d8486300beb1d0254da030a4da10394 | b3ab94acd308591bbdf264f1722fedc7ee25d6fa |
python/cpython | python__cpython-135831 | # venv using symlinks and empty pyvenv.cfg isn't recognized as venv / able to find python home
# Bug report
### Bug description:
Previously, it was possible to have a functional (enough) venv by simply doing two things:
1. Creating an empty pyvenv.cfg file
2. Creating a bin/python3 which symlinked to the actual interpreter
The net effect was a venv sufficiently relocatable (at the interpreter level) to make it into a fully relocatable venv (at the application level): Python followed the symlink to locate its home, which let it get far enough through initial bootstrapping to pick the venv-relative site-packages directory. Once at that point, application code is able to hook in and do some fixups.
This bug seems to be introduced by https://github.com/python/cpython/pull/126987
I'm still looking at the changes to figure out why, exactly. I'm pretty confident it's that PR, though -- in my reproducer, things work before it, but break after it.
This behavior of automatically finding the Python home is something [Bazel's rules_python](https://github.com/bazel-contrib/rules_python) has come to rely on. Insofar as venvs are concerned, the two particular constraints Bazel imposes are:
1. Absolute paths can't be used. This is because they may be machine specific. In Bazel, the machine that generates pyvenv.cfg may be different than who uses it at runtime (e.g a remote build worker generates it, and a user runs it locally).
2. The file system must be assumed to be read-only after building. This prevents creating/modifying a pyvenv.cfg at runtime in the venv's directory to work around (1).
(As an aside, how the env var `PYTHONEXECUTABLE` gets handled might have also been affected by this; I haven't looked deeply yet, but one of my tricks was to set that to the venv bin/python3 path and invoke the real interpreter, but that seems to no longer work, either)
I'll post a more self contained reproducer in a bit. In the meantime, something like this can demonstrate the issue:
```
mkdir myvenv
cd myvenv
touch pyvenv.cfg
mkdir bin
ln -s <path to python> bin/python3
mkdir -p lib/python3.14/site-packages
bin/python3 -c 'import sys; print(sys.prefix); print(sys.base_prefix); print(sys.path)'
```
After the change, it'll print warnings and e.g. /usr/local as the prefixes, which is incorrect. sys.path won't have the venv's site-packages directory, which is incorrect.
Before the change, it'll print the venv directory, underlying interpreter directory, and the venv's site-packages, which is correct.
cc @FFY00 (author of above PR)
### CPython versions tested on:
3.14
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-135831
* gh-136287
<!-- /gh-linked-prs -->
| 93263d43141a81d369adfcddf325f9a54cb5766d | 8ac7613dc8b8f82253d7c0e2b6ef6ed703a0a1ee |
python/cpython | python__cpython-135767 | # SystemError in OpenSSL's SHAKE when passing negative digest sizes
# Bug report
### Bug description:
```py
import _hashlib
_hashlib.openssl_shake_128(b'').digest(-1) # SystemError
_hashlib.openssl_shake_128(b'').hexdigest(-1) # MemoryError
import _sha3
_sha3.shake_128(b'').digest(-1) # ValueError
_sha3.shake_128(b'').hexdigest(-1) # ValueError
```
The reason is that the OpenSSL implementation accepts a `ssize_t` but HACL* accepts a `uint32_t`. I suggest raising a ValueError in OpenSSL's implementation as well. I'll rewrite https://github.com/python/cpython/pull/135744 for this issue at the same time.
Now, to prevent users from passing incorrect digest lengths, I also suggest restricting hash lengths to 2 ** 29, as was done in HACL* and in https://github.com/python/cpython/issues/79103, but I'll do that in a follow-up. For now, let's only focus on raising a ValueError for all implementations on negative inputs.
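A hypothetical sketch of the proposed argument validation (the function name is illustrative; the real check lives in the C module): reject negative sizes with `ValueError`, and cap sizes at `2 ** 29` per the follow-up suggestion:

```python
def check_digest_length(length):
    # Reject negative sizes (as HACL* effectively does via uint32_t) and
    # oversized requests, per the restriction proposed in the issue.
    if length < 0:
        raise ValueError("negative digest length")
    if length >= 1 << 29:
        raise ValueError("digest length too large")
    return length

ok = check_digest_length(32)
try:
    check_digest_length(-1)
    rejected = False
except ValueError:
    rejected = True
```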
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-135767
<!-- /gh-linked-prs -->
> [!NOTE]
> See https://github.com/python/cpython/issues/135759#issuecomment-2992097338 for the rationale of a non-backport. | 7c4361564c3881946a5eca677607b4ffec0a566d | 13cac833471885564cbfde72a4cbac64ade3137a |
python/cpython | python__cpython-135770 | # `tkinter.commondialog.Dialog.show()` documentation has nonexistent `color` parameter
# Documentation
The documentation for [`tkinter.commondialog.Dialog.show()`](https://docs.python.org/3/library/dialog.html#tkinter.commondialog.Dialog.show) contains a `color` argument in the signature.
However, this parameter doesn't actually exist in the code:
https://github.com/python/cpython/blob/313544eb0381d868b4f8b351a0ca808c313af698/Lib/tkinter/commondialog.py#L32
<!-- gh-linked-prs -->
### Linked PRs
* gh-135770
* gh-135776
* gh-135777
<!-- /gh-linked-prs -->
| 4ddf505d9982dc8afead8f52f5754eea5ebde623 | c5ea8e8e8fc725f39ed23ff6259b3cc157a0785f |
python/cpython | python__cpython-136803 | # Inaccurate description of multiprocessing.Queue.close()
# Documentation
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Queue.close
It says "Indicate that no more data will be put on this queue by the current process." but that looks to only be half the story.
https://github.com/python/cpython/blob/main/Lib/multiprocessing/queues.py#L255
As I understand it, this is the implementation, and it closes **both** the writing and reading ends of the pipe. As a result, after close() not only can no more data be written, but no more data can be read by the **current** process either. A reader in a **different** process will be able to drain any buffered messages, but not one in the **current** process.
For a writer and reader in the same process, an application-level protocol would be required to first stop writing, then signal the reader to drain everything and terminate (e.g. if it's a thread), and only then can the queue be closed.
If correct, this should be made explicit in the docs.
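A minimal same-process demonstration (assuming current CPython behavior, where a closed queue raises `ValueError` on further `get()` calls):

```python
from multiprocessing import Queue

q = Queue()
q.put("msg")       # delivered by a background feeder thread
q.close()          # closes this process's reader end as well as the writer

try:
    q.get_nowait() # the *current* process can no longer read the message
    rejected = False
except ValueError:
    rejected = True
q.join_thread()    # wait for the feeder thread to finish
```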
The documentation issue looks to not apply to [multiprocessing.SimpleQueue.close()](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.SimpleQueue.close) - this clearly says get() cannot be used on a closed queue.
The documentation issue looks to also not apply to [JoinableQueue](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.JoinableQueue), as that clearly says it's a subclass of multiprocessing.Queue (so things like close() are not described separately).
<!-- gh-linked-prs -->
### Linked PRs
* gh-136803
* gh-136806
* gh-136807
<!-- /gh-linked-prs -->
| f575588ccf27d8d54a1e99cfda944f2614b3255c | d19bb4471331ca2cb87b86e4c904bc9a2bafb044 |
python/cpython | python__cpython-135766 | # `test_capi` fails on wasm buildbots with stack overflow
# Bug report
Link: https://buildbot.python.org/#/builders/1373/builds/539/steps/13/logs/stdio
```
thread 'main' has overflowed its stack
fatal runtime error: stack overflow
```
Affected bots:
- [buildbot/wasm32 WASI 8Core PR](https://buildbot.python.org/#/builders/1341/builds/72)
- [buildbot/wasm32-wasi Non-Debug PR](https://buildbot.python.org/#/builders/1373/builds/539)
<!-- gh-linked-prs -->
### Linked PRs
* gh-135766
* gh-135955
<!-- /gh-linked-prs -->
| 3fb6cfe7a95081e6775ad2dca845713a3ea4c799 | 61532b4bc783c1570c0fd7fb2ff5f8157cf64e82 |
python/cpython | python__cpython-135712 | # Two warnings on `wasm32-wasi Non-Debug PR` buildbot
# Bug report
Link: https://buildbot.python.org/#/builders/1373/builds/538/steps/11/logs/warnings__57_
```
../../Modules/_testcapimodule.c:2428:13: warning: unused function 'finalize_thread_hang_cleanup_callback' [-Wunused-function]
1 warning generated.
../../Modules/_testcapi/long.c:231:28: warning: comparison of integers of different signs: 'long' and 'digit' (aka 'unsigned int') [-Wsign-compare]
1 warning generated.
```
I have a PR with the fix.
<!-- gh-linked-prs -->
### Linked PRs
* gh-135712
* gh-135723
<!-- /gh-linked-prs -->
| 9c3c02019cf0bc7792bbdd3a314e45642178e3b5 | 754190287ece5a2e66684161aadafb18f5f44868 |
python/cpython | python__cpython-135737 | # `shutil.copyfileobj()` doesn't flush at end of copy
# Bug report
### Bug description:
If you use `shutil.copyfileobj()` to copy a file-like object to a file-like object, the destination isn't flushed at the end of the copy. This can lead to the destination file being in a written, but unflushed state, which could be counterintuitive.
This was found as part of the Emscripten build script, which works reliably on Python 3.13.5, but breaks on 3.14.0b3:
```python
import shutil
import tempfile
from urllib.request import urlopen

workingdir = "./tmp"
shutil.rmtree(workingdir, ignore_errors=True)

with tempfile.NamedTemporaryFile(suffix=".tar.gz") as tmp_file:
    with urlopen(
        "https://github.com/libffi/libffi/releases/download/v3.4.6/libffi-3.4.6.tar.gz"
    ) as response:
        shutil.copyfileobj(response, tmp_file)
    # Uncomment this flush to make the example work on 3.14
    # tmp_file.flush()
    shutil.unpack_archive(tmp_file.name, workingdir)
```
I'm guessing this discrepancy was caused by #119783 (/cc @morotti), which increased the buffer size for `copyfileobj()`. I'm guessing that the bug existed prior to that change, but increasing the buffer size also increased the potential exposure to buffer flush errors/inconsistencies.
I'd argue that the documented API for `copyfileobj()` reads that the method is a "complete" operation, including an implied flush on completion.
However, I can also see the argument that the decision to flush should be left to the user - in which case, the resolution for this issue would be updating the documentation to make it clear that users should flush after a copy.
There could also be an argument for adding a `flush` argument (defaulting to `True`) so that flushing could be optionally omitted for cases where the overhead of a flush might matter.
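A self-contained illustration of the buffering behavior in question (no network involved): after `copyfileobj()` the destination's userspace buffer may still hold the data, and only an explicit flush makes it visible on disk to other openers of the same path:

```python
import io
import os
import shutil
import tempfile

src = io.BytesIO(b"payload")
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    shutil.copyfileobj(src, tmp)
    # The bytes may still sit in tmp's write buffer here, so the on-disk
    # file can appear empty to other readers of tmp.name until we flush.
    tmp.flush()
    flushed_size = os.path.getsize(tmp.name)
    path = tmp.name
os.unlink(path)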
### CPython versions tested on:
3.14
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-135737
* gh-135873
<!-- /gh-linked-prs -->
| 34393cbdd46fd965de86f1e7bc89ab111f506723 | 2793b68f758c10fb63b264787f10d46a71fc8086 |
python/cpython | python__cpython-135663 | # Raise consistent `NameError` in `ForwardRef.evaluate()`
# Feature or enhancement
### Proposal:
The new `ForwardRef.evaluate()` implementation had a fast path introduced in https://github.com/python/cpython/pull/124337 to _not_ call `eval()`:
https://github.com/python/cpython/blob/cb394101110e13a27e08bbf2fe9f38d847db004c/Lib/annotationlib.py#L177-L187
However, when the `NameError` is raised, the format is different from the one raised by `eval()`:
The exception message is defined as:
https://github.com/python/cpython/blob/cb394101110e13a27e08bbf2fe9f38d847db004c/Python/ceval_macros.h#L308
and raised this way:
https://github.com/python/cpython/blob/cb394101110e13a27e08bbf2fe9f38d847db004c/Python/ceval.c#L3339-L3351
To have consistent exceptions raised, would it be possible to change the Python fast path implementation to match the C eval code?
```diff
diff --git a/Lib/annotationlib.py b/Lib/annotationlib.py
index 731817a9973..c83a1573ccd 100644
--- a/Lib/annotationlib.py
+++ b/Lib/annotationlib.py
@@ -27,6 +27,9 @@ class Format(enum.IntEnum):
 _sentinel = object()
 
+# Following `NAME_ERROR_MSG` in `ceval_macros.h`:
+_NAME_ERROR_MSG = "name '{name:.200}' is not defined"
+
 # Slots shared by ForwardRef and _Stringifier. The __forward__ names must be
 # preserved for compatibility with the old typing.ForwardRef class. The remaining
@@ -184,7 +187,7 @@ def evaluate(
             elif is_forwardref_format:
                 return self
             else:
-                raise NameError(arg)
+                raise NameError(_NAME_ERROR_MSG.format(name=arg), name=arg)
         else:
            code = self.__forward_code__
            try:
```
This requires `_NAME_ERROR_MSG` to be in sync with the one from `ceval_macros.h`.
Or at least, the `NameError` should have its `name` property set (especially as type stubs shows `NameError.name` as `str` currently).
cc @JelleZijlstra.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-135663
* gh-135673
<!-- /gh-linked-prs -->
| 343719d98e60d28d6102002f8ad3fd9dc5a58bd1 | 9877d191f441741fc27ae5e7a6dd7ab6d4bcc6b7 |
python/cpython | python__cpython-135667 | # 3.14: missing sys.implementation.supports_isolated_interpreters?
# Bug report
### Bug description:
```python
import sys
sys.implementation.supports_isolated_interpreters
```
This is missing, but the accepted PEP 734 [says this](https://peps.python.org/pep-0734/#sys-implementation-supports-isolated-interpreters) is provided so implementations can indicate whether they support isolated interpreters (GraalPy, for example, does not yet). Was it overlooked, perhaps?
I was using it like this with 3.14.0b3:
```python
if (
    sys.version_info >= (3, 14)
    and (
        sys.version_info != (3, 14, 0, "beta", 1)
        or sys.version_info != (3, 14, 0, "beta", 2)
    )
    and sys.implementation.supports_isolated_interpreters
):
    from concurrent import interpreters
```
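Until the attribute is guaranteed to exist, a defensive lookup (hypothetical workaround, not a fix) degrades gracefully on implementations and versions that predate it:

```python
import sys

# Treat a missing attribute as "unsupported" rather than crash with
# AttributeError on implementations that don't define it yet.
supported = getattr(sys.implementation, "supports_isolated_interpreters", False)
```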
CC @ericsnowcurrently
### CPython versions tested on:
3.14
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-135667
* gh-135786
<!-- /gh-linked-prs -->
| 8ca1e4d846e868a20834cf442c48a3648b558bbe | f4911258a80409cb641f13578137475204ab43b5 |
python/cpython | python__cpython-135642 | # `test_lock_two_threads` flaky in `test_capi`
# Bug report
The lock may have the `_Py_HAS_PARKED` bit set so that `m->_bits == 3` instead of 1. This can happen due to collisions in the parking lot hash table -- that's rare so we don't see it frequently. I think the possible collision here is if the `PyEvent` that the main thread is waiting on uses the same bucket as the `PyMutex`.
We should weaken the assertion in the test case.
See in https://buildbot.python.org/all/#/builders/1609/builds/2878/steps/6/logs/stdio:
```
test_lock_two_threads (test.test_capi.test_misc.Test_PyLock.test_lock_two_threads) ... python: ./Modules/_testinternalcapi/test_lock.c:60: lock_thread: Assertion `m->_bits == 1' failed.
Fatal Python error: Aborted
Current thread's C stack trace (most recent call first):
Binary file "/home/buildbot/buildarea/3.x.itamaro-centos-aws.nogil/build/python", at _Py_DumpStack+0x31 [0x75b0f6]
Binary file "/home/buildbot/buildarea/3.x.itamaro-centos-aws.nogil/build/python" [0x770ae8]
Binary file "/home/buildbot/buildarea/3.x.itamaro-centos-aws.nogil/build/python" [0x770bd5]
Binary file "/lib64/libc.so.6", at +0x3ebf0 [0x7f3a97a3ebf0]
Binary file "/lib64/libc.so.6", at +0x8c1ec [0x7f3a97a8c1ec]
Binary file "/lib64/libc.so.6", at raise+0x16 [0x7f3a97a3eb46]
Binary file "/lib64/libc.so.6", at abort+0xd3 [0x7f3a97a28833]
Binary file "/lib64/libc.so.6", at +0x2875b [0x7f3a97a2875b]
Binary file "/lib64/libc.so.6", at +0x37886 [0x7f3a97a37886]
Binary file "/home/buildbot/buildarea/3.x.itamaro-centos-aws.nogil/build/build/lib.linux-x86_64-3.15/_testinternalcapi.cpython-315td-x86_64-linux-gnu.so", at +0xe2cf [0x7f3a970612cf]
Binary file "/home/buildbot/buildarea/3.x.itamaro-centos-aws.nogil/build/python" [0x756d51]
Binary file "/lib64/libc.so.6", at +0x8a4aa [0x7f3a97a8a4aa]
Binary file "/lib64/libc.so.6", at +0x10f510 [0x7f3a97b0f510]
```
https://github.com/python/cpython/blob/cb394101110e13a27e08bbf2fe9f38d847db004c/Modules/_testinternalcapi/test_lock.c#L60
https://github.com/python/cpython/blob/cb394101110e13a27e08bbf2fe9f38d847db004c/Modules/_testinternalcapi/test_lock.c#L52-L98
<!-- gh-linked-prs -->
### Linked PRs
* gh-135642
* gh-135687
* gh-135688
<!-- /gh-linked-prs -->
| 17ac3933c3c860e08f7963cf270116a39a063be7 | 46c60e0d0b716e8e6f0b74a0f9d0542605b1efd4 |
python/cpython | python__cpython-135643 | # Calling ElementTree.write on an ElementTree instance where ElementTree._root is the wrong type causes possible data loss
# Bug report
### Bug description:
Example:
```
tree = xml.etree.ElementTree.ElementTree(element = "") # element should be an ElementTree.Element object or None
tree.write("file.xml") # This appropriately causes a traceback, but blanks file.xml if it exists
```
PR coming with a fix!
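A self-contained reproduction of the data-loss window (the file is truncated on affected versions before the error surfaces; fixed versions reject the wrong type up front, so only the raised exception is asserted here):

```python
import os
import tempfile
import xml.etree.ElementTree as ET

# Create a file with pre-existing content that write() will clobber.
fd, path = tempfile.mkstemp(suffix=".xml")
os.write(fd, b"<old/>")
os.close(fd)

raised = False
try:
    tree = ET.ElementTree(element="")  # should be an Element or None
    tree.write(path)  # on affected versions: truncates, then AttributeError
except Exception:
    raised = True
size_after = os.path.getsize(path)  # 0 on affected versions
os.unlink(path)
```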
### CPython versions tested on:
CPython main branch, 3.14
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-135643
* gh-136225
* gh-136226
<!-- /gh-linked-prs -->
| e0245c789f54b63d461717a91eec8ffccbe18966 | 9084b151567d02936ea1374961809b69b4cd883d |
python/cpython | python__cpython-135662 | # `test_cycle` in `test_free_threading.test_itertools` is flaky
# Bug report
### Bug description:
`next()` on a shared `itertools.cycle()` iterator may raise a StopIteration. With the GIL, that would never happen because the iterator loops forever. I think that's fine -- the iterator doesn't need to be thread-safe -- but the test needs to be fixed.
```
test_cycle (test.test_free_threading.test_itertools.ItertoolsThreading.test_cycle) ... Warning -- Uncaught thread exception: StopIteration
Exception in thread Thread-7130 (work):
Traceback (most recent call last):
File "/.../checkout/Lib/threading.py", line 1074, in _bootstrap_inner
self._context.run(self.run)
~~~~~~~~~~~~~~~~~^^^^^^^^^^
File "/.../checkout/Lib/threading.py", line 1016, in run
self._target(*self._args, **self._kwargs)
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../checkout/Lib/test/test_free_threading/test_itertools.py", line 47, in work
_ = next(it)
StopIteration
ok
```
The problem is that the underlying iterator may be exhausted:
https://github.com/python/cpython/blob/cb394101110e13a27e08bbf2fe9f38d847db004c/Modules/itertoolsmodule.c#L1203
But the other thread may not have added anything to `lz->saved` yet:
https://github.com/python/cpython/blob/cb394101110e13a27e08bbf2fe9f38d847db004c/Modules/itertoolsmodule.c#L1205
Exit code path (leads to StopIteration):
https://github.com/python/cpython/blob/cb394101110e13a27e08bbf2fe9f38d847db004c/Modules/itertoolsmodule.c#L1220-L1221
cc @eendebakpt @kumaraditya303
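The documented pure-Python equivalent of `itertools.cycle()` makes the race window visible: if the source is exhausted while `saved` is still empty (possible when another thread hasn't appended yet), the generator simply ends and `next()` raises StopIteration. Here the empty-`saved` state is simulated single-threaded with an empty source:

```python
def cycle(iterable):
    # Pure-Python equivalent from the itertools documentation.
    saved = []
    for element in iterable:
        yield element
        saved.append(element)
    while saved:
        for element in saved:
            yield element

it = cycle([])  # stand-in for "source exhausted, nothing saved yet"
try:
    next(it)
    stopped = False
except StopIteration:
    stopped = True
```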
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-135662
<!-- /gh-linked-prs -->
| 4dea6b48cc24aede7250534923d3ce9f9c8b87e6 | c55512311b7cb8b7c27c19f56cd8f872be29aedc |
python/cpython | python__cpython-135632 | # LOAD_CONST_IMMORTAL is documented in dis as an instruction while it is a specialized instruction
# Documentation
The dis documentation does not document specialized instructions, but LOAD_CONST_IMMORTAL is documented even though its opcode is 190 and all opcodes above 129 are specialized.
<!-- gh-linked-prs -->
### Linked PRs
* gh-135632
* gh-135649
<!-- /gh-linked-prs -->
| 711700259135b5f9e21c56b199f4ebc0048b18b4 | 52be7f445e454ccb44e368a22fe70a0fa6cab7c0 |
python/cpython | python__cpython-136758 | # Misleading pyrepl warnings when _curses module is missing
# Bug report
### Bug description:
If I run 3.14 on Linux without a working `_curses` extension, I get this misleading warning from pyrepl:
```
warning: can't use pyrepl: Windows only
```
I suspect pyrepl is not _actually_ Windows only, and indeed if I fix `_curses` it works again.
Presumably, a better warning would be
```
warning: can't use pyrepl: No module named '_curses'
```
or something like that
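A hypothetical sketch of the more informative check (the function name is illustrative): attempt the import and report the actual ImportError instead of hard-coding a "Windows only" explanation:

```python
def pyrepl_unavailable_reason():
    # Returns None if curses (and thus the _curses extension) imports fine,
    # otherwise the real import failure message.
    try:
        import curses  # noqa: F401  -- pulls in the _curses extension
    except ImportError as exc:
        return f"can't use pyrepl: {exc}"
    return None

reason = pyrepl_unavailable_reason()
```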
### CPython versions tested on:
3.14
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-136758
* gh-136915
* gh-136916
* gh-136925
<!-- /gh-linked-prs -->
| 09dfb50f1b7c23bc48d86bd579671761bb8ca48b | d1d526afe7ce62c787b150652a2ba136cb949d74 |
python/cpython | python__cpython-135613 | # Segfault/abort in `_Py_uop_sym_new_const` on a JIT build
# Crash report
### What happened?
It's possible to segfault a release build or abort a debug build with the JIT enabled by running the following MRE:
```python
import email  # Any module seems to work, originally this was `sys`

def interact():
    email.ps1 = None
    prompt = email.ps1
    del email.ps1

for _ in range(5000):
    interact()
```
Segfault backtrace:
```
Program received signal SIGSEGV, Segmentation fault.
_Py_uop_sym_new_const (ctx=ctx@entry=0x7ffffffe0d20, const_val=0x0) at Python/optimizer_symbols.c:426
426 _Py_uop_sym_set_const(ctx, res, const_val);
#0 _Py_uop_sym_new_const (ctx=ctx@entry=0x7ffffffe0d20, const_val=0x0) at Python/optimizer_symbols.c:426
#1 0x000055555584a183 in optimize_uops (co=0x7ffff7a50270, trace=trace@entry=0x7fffffff8a30, trace_len=trace_len@entry=73,
curr_stacklen=curr_stacklen@entry=2, dependencies=dependencies@entry=0x7fffffff8990) at Python/optimizer_cases.c.h:1265
#2 0x000055555584aaba in _Py_uop_analyze_and_optimize (frame=frame@entry=0x7ffff7fb0020, buffer=buffer@entry=0x7fffffff8a30,
length=length@entry=73, curr_stacklen=curr_stacklen@entry=2, dependencies=dependencies@entry=0x7fffffff8990) at Python/optimizer_analysis.c:682
#3 0x000055555584424b in uop_optimize (frame=frame@entry=0x7ffff7fb0020, instr=instr@entry=0x7ffff7a3a994,
exec_ptr=exec_ptr@entry=0x7fffffffd690, curr_stackentries=<optimized out>, progress_needed=progress_needed@entry=true)
at Python/optimizer.c:1282
#4 0x0000555555844bec in _PyOptimizer_Optimize (frame=frame@entry=0x7ffff7fb0020, start=0x7ffff7a3a994,
executor_ptr=executor_ptr@entry=0x7fffffffd690, chain_depth=chain_depth@entry=0) at Python/optimizer.c:130
#5 0x00005555555e9645 in _PyEval_EvalFrameDefault (tstate=0x555555b77f70 <_PyRuntime+315216>, frame=<optimized out>, throwflag=<optimized out>)
at Python/generated_cases.c.h:7792
#6 0x00005555557bc6f7 in _PyEval_EvalFrame (throwflag=0, frame=0x7ffff7fb0020, tstate=0x555555b77f70 <_PyRuntime+315216>)
at ./Include/internal/pycore_ceval.h:119
#7 _PyEval_Vector (args=0x0, argcount=0, kwnames=0x0, locals=0x7ffff7a13d40, func=0x7ffff7a0f3d0, tstate=0x555555b77f70 <_PyRuntime+315216>)
at Python/ceval.c:1975
```
Abort backtrace:
```shell
python: Python/optimizer_symbols.c:421: _Py_uop_sym_new_const: Assertion `const_val != NULL' failed.
Program received signal SIGABRT, Aborted.
__pthread_kill_implementation (no_tid=0, signo=6, threadid=140737350575936) at ./nptl/pthread_kill.c:44
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140737350575936) at ./nptl/pthread_kill.c:44
#1 __pthread_kill_internal (signo=6, threadid=140737350575936) at ./nptl/pthread_kill.c:78
#2 __GI___pthread_kill (threadid=140737350575936, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3 0x00007ffff7cdf476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4 0x00007ffff7cc57f3 in __GI_abort () at ./stdlib/abort.c:79
#5 0x00007ffff7cc571b in __assert_fail_base (fmt=0x7ffff7e7a130 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
assertion=0x555555b5267c "const_val != NULL", file=0x555555b52648 "Python/optimizer_symbols.c", line=421, function=<optimized out>)
at ./assert/assert.c:94
#6 0x00007ffff7cd6e96 in __GI___assert_fail (assertion=assertion@entry=0x555555b5267c "const_val != NULL",
file=file@entry=0x555555b52648 "Python/optimizer_symbols.c", line=line@entry=421,
function=function@entry=0x555555b52de0 <__PRETTY_FUNCTION__.7> "_Py_uop_sym_new_const") at ./assert/assert.c:103
#7 0x000055555595b2dc in _Py_uop_sym_new_const (ctx=ctx@entry=0x7ffffffc4790, const_val=<optimized out>) at Python/optimizer_symbols.c:421
#8 0x0000555555956e33 in optimize_uops (co=0x7ffff7b93940, trace=trace@entry=0x7fffffff3e00, trace_len=trace_len@entry=73,
curr_stacklen=curr_stacklen@entry=2, dependencies=dependencies@entry=0x7fffffff3de0) at Python/optimizer_cases.c.h:1265
#9 0x000055555595a424 in _Py_uop_analyze_and_optimize (frame=frame@entry=0x7ffff7fb0020, buffer=buffer@entry=0x7fffffff3e00,
length=length@entry=73, curr_stacklen=curr_stacklen@entry=2, dependencies=dependencies@entry=0x7fffffff3de0)
at ./Include/internal/pycore_interpframe.h:24
#10 0x0000555555951cd5 in uop_optimize (frame=frame@entry=0x7ffff7fb0020, instr=instr@entry=0x7ffff7a7e744,
exec_ptr=exec_ptr@entry=0x7fffffffd550, curr_stackentries=2, progress_needed=progress_needed@entry=true) at Python/optimizer.c:1282
#11 0x0000555555952253 in _PyOptimizer_Optimize (frame=frame@entry=0x7ffff7fb0020, start=start@entry=0x7ffff7a7e744,
executor_ptr=executor_ptr@entry=0x7fffffffd550, chain_depth=chain_depth@entry=0) at Python/optimizer.c:130
#12 0x0000555555863072 in _PyEval_EvalFrameDefault (tstate=tstate@entry=0x555555d6f7c0 <_PyRuntime+331232>, frame=frame@entry=0x7ffff7fb0020,
throwflag=throwflag@entry=0) at Python/generated_cases.c.h:7792
#13 0x0000555555877340 in _PyEval_EvalFrame (throwflag=0, frame=0x7ffff7fb0020, tstate=0x555555d6f7c0 <_PyRuntime+331232>)
at ./Include/internal/pycore_ceval.h:119
#14 _PyEval_Vector (tstate=tstate@entry=0x555555d6f7c0 <_PyRuntime+331232>, func=func@entry=0x7ffff7a4a750, locals=locals@entry=0x7ffff7a5c110,
args=args@entry=0x0, argcount=argcount@entry=0, kwnames=kwnames@entry=0x0) at Python/ceval.c:1975
#15 0x000055555587743f in PyEval_EvalCode (co=co@entry=0x7ffff7a7e640, globals=globals@entry=0x7ffff7a5c110, locals=locals@entry=0x7ffff7a5c110)
at Python/ceval.c:866
```
Output from `PYTHON_LLTRACE=2`:
```
Optimizing <module> (/home/fusil/runs/python-555/code-cpu_load-assertion/source2.py:1) at byte offset 52
1 ADD_TO_TRACE: _START_EXECUTOR (0, target=26, operand0=0x7ffff7a7e744, operand1=0)
2 ADD_TO_TRACE: _MAKE_WARM (0, target=0, operand0=0, operand1=0)
26: JUMP_BACKWARD_JIT(12)
3 ADD_TO_TRACE: _CHECK_VALIDITY (0, target=26, operand0=0, operand1=0)
4 ADD_TO_TRACE: _SET_IP (0, target=26, operand0=0x7ffff7a7e744, operand1=0)
5 ADD_TO_TRACE: _CHECK_PERIODIC (0, target=26, operand0=0, operand1=0, error_target=0)
16: FOR_ITER_RANGE(10)
6 ADD_TO_TRACE: _CHECK_VALIDITY (0, target=16, operand0=0, operand1=0)
7 ADD_TO_TRACE: _SET_IP (0, target=16, operand0=0x7ffff7a7e730, operand1=0)
8 ADD_TO_TRACE: _ITER_CHECK_RANGE (10, target=16, operand0=0, operand1=0)
9 ADD_TO_TRACE: _GUARD_NOT_EXHAUSTED_RANGE (10, target=16, operand0=0, operand1=0)
10 ADD_TO_TRACE: _ITER_NEXT_RANGE (10, target=16, operand0=0, operand1=0, error_target=0)
18: STORE_NAME(3)
11 ADD_TO_TRACE: _CHECK_VALIDITY (0, target=18, operand0=0, operand1=0)
12 ADD_TO_TRACE: _SET_IP (0, target=18, operand0=0x7ffff7a7e734, operand1=0)
13 ADD_TO_TRACE: _STORE_NAME (3, target=18, operand0=0, operand1=0, error_target=0)
19: LOAD_NAME(1)
14 ADD_TO_TRACE: _CHECK_VALIDITY (0, target=19, operand0=0, operand1=0)
15 ADD_TO_TRACE: _SET_IP (0, target=19, operand0=0x7ffff7a7e736, operand1=0)
16 ADD_TO_TRACE: _LOAD_NAME (1, target=19, operand0=0, operand1=0, error_target=0)
20: PUSH_NULL(0)
17 ADD_TO_TRACE: _CHECK_VALIDITY (0, target=20, operand0=0, operand1=0)
18 ADD_TO_TRACE: _SET_IP (0, target=20, operand0=0x7ffff7a7e738, operand1=0)
19 ADD_TO_TRACE: _PUSH_NULL (0, target=20, operand0=0, operand1=0)
21: CALL_PY_EXACT_ARGS(0)
20 ADD_TO_TRACE: _CHECK_VALIDITY (0, target=21, operand0=0, operand1=0)
21 ADD_TO_TRACE: _SET_IP (0, target=21, operand0=0x7ffff7a7e73a, operand1=0)
22 ADD_TO_TRACE: _CHECK_PEP_523 (0, target=21, operand0=0, operand1=0)
23 ADD_TO_TRACE: _CHECK_FUNCTION_VERSION (0, target=21, operand0=0x2cb, operand1=0)
24 ADD_TO_TRACE: _CHECK_FUNCTION_EXACT_ARGS (0, target=21, operand0=0, operand1=0)
25 ADD_TO_TRACE: _CHECK_STACK_SPACE (0, target=21, operand0=0, operand1=0)
26 ADD_TO_TRACE: _CHECK_RECURSION_REMAINING (0, target=21, operand0=0, operand1=0)
27 ADD_TO_TRACE: _INIT_CALL_PY_EXACT_ARGS (0, target=21, operand0=0, operand1=0)
28 ADD_TO_TRACE: _SAVE_RETURN_OFFSET (4, target=21, operand0=0, operand1=0)
Function: version=0x2cb; new_func=0x7ffff7a4a8d0, new_code=0x7ffff7b93940
29 ADD_TO_TRACE: _PUSH_FRAME (0, target=21, operand0=0x7ffff7a4a8d0, operand1=0)
Continuing in interact (/home/fusil/runs/python-555/code-cpu_load-assertion/source2.py:3) at byte offset 0
0: RESUME_CHECK(0)
30 ADD_TO_TRACE: _CHECK_VALIDITY (0, target=0, operand0=0, operand1=0)
31 ADD_TO_TRACE: _SET_IP (0, target=0, operand0=0x7ffff7b93a10, operand1=0)
32 ADD_TO_TRACE: _RESUME_CHECK (0, target=0, operand0=0, operand1=0)
1: LOAD_CONST(0)
33 ADD_TO_TRACE: _CHECK_VALIDITY (0, target=1, operand0=0, operand1=0)
34 ADD_TO_TRACE: _SET_IP (0, target=1, operand0=0x7ffff7b93a12, operand1=0)
35 ADD_TO_TRACE: _LOAD_CONST (0, target=1, operand0=0, operand1=0)
2: LOAD_GLOBAL_MODULE(0)
36 ADD_TO_TRACE: _CHECK_VALIDITY (0, target=2, operand0=0, operand1=0)
37 ADD_TO_TRACE: _SET_IP (0, target=2, operand0=0x7ffff7b93a14, operand1=0)
38 ADD_TO_TRACE: _NOP (0, target=2, operand0=0, operand1=0)
39 ADD_TO_TRACE: _LOAD_GLOBAL_MODULE (0, target=2, operand0=0x2c, operand1=0)
40 ADD_TO_TRACE: _PUSH_NULL_CONDITIONAL (0, target=2, operand0=0, operand1=0)
7: STORE_ATTR(1)
41 ADD_TO_TRACE: _CHECK_VALIDITY (0, target=7, operand0=0, operand1=0)
42 ADD_TO_TRACE: _SET_IP (0, target=7, operand0=0x7ffff7b93a1e, operand1=0)
43 ADD_TO_TRACE: _STORE_ATTR (1, target=7, operand0=0, operand1=0, error_target=0)
12: LOAD_GLOBAL_MODULE(0)
44 ADD_TO_TRACE: _CHECK_VALIDITY (0, target=12, operand0=0, operand1=0)
45 ADD_TO_TRACE: _SET_IP (0, target=12, operand0=0x7ffff7b93a28, operand1=0)
46 ADD_TO_TRACE: _NOP (0, target=12, operand0=0, operand1=0)
47 ADD_TO_TRACE: _LOAD_GLOBAL_MODULE (0, target=12, operand0=0x2c, operand1=0)
48 ADD_TO_TRACE: _PUSH_NULL_CONDITIONAL (0, target=12, operand0=0, operand1=0)
17: LOAD_ATTR_MODULE(2)
49 ADD_TO_TRACE: _CHECK_VALIDITY (0, target=17, operand0=0, operand1=0)
50 ADD_TO_TRACE: _SET_IP (0, target=17, operand0=0x7ffff7b93a32, operand1=0)
51 ADD_TO_TRACE: _LOAD_ATTR_MODULE (2, target=17, operand0=0x2e, operand1=0)
52 ADD_TO_TRACE: _PUSH_NULL_CONDITIONAL (2, target=17, operand0=0, operand1=0)
27: STORE_FAST(0)
53 ADD_TO_TRACE: _CHECK_VALIDITY (0, target=27, operand0=0, operand1=0)
54 ADD_TO_TRACE: _SET_IP (0, target=27, operand0=0x7ffff7b93a46, operand1=0)
55 ADD_TO_TRACE: _STORE_FAST (0, target=27, operand0=0, operand1=0)
28: LOAD_GLOBAL_MODULE(0)
56 ADD_TO_TRACE: _CHECK_VALIDITY (0, target=28, operand0=0, operand1=0)
57 ADD_TO_TRACE: _SET_IP (0, target=28, operand0=0x7ffff7b93a48, operand1=0)
58 ADD_TO_TRACE: _NOP (0, target=28, operand0=0, operand1=0)
59 ADD_TO_TRACE: _LOAD_GLOBAL_MODULE (0, target=28, operand0=0x2c, operand1=0)
60 ADD_TO_TRACE: _PUSH_NULL_CONDITIONAL (0, target=28, operand0=0, operand1=0)
33: DELETE_ATTR(1)
61 ADD_TO_TRACE: _CHECK_VALIDITY (0, target=33, operand0=0, operand1=0)
62 ADD_TO_TRACE: _SET_IP (0, target=33, operand0=0x7ffff7b93a52, operand1=0)
63 ADD_TO_TRACE: _DELETE_ATTR (1, target=33, operand0=0, operand1=0, error_target=0)
34: LOAD_CONST(0)
64 ADD_TO_TRACE: _CHECK_VALIDITY (0, target=34, operand0=0, operand1=0)
65 ADD_TO_TRACE: _SET_IP (0, target=34, operand0=0x7ffff7b93a54, operand1=0)
66 ADD_TO_TRACE: _LOAD_CONST (0, target=34, operand0=0, operand1=0)
35: RETURN_VALUE(0)
67 ADD_TO_TRACE: _CHECK_VALIDITY (0, target=35, operand0=0, operand1=0)
68 ADD_TO_TRACE: _SET_IP (0, target=35, operand0=0x7ffff7b93a56, operand1=0)
69 ADD_TO_TRACE: _RETURN_VALUE (0, target=35, operand0=0x7ffff7a4a750, operand1=0)
Returning to <module> (/home/fusil/runs/python-555/code-cpu_load-assertion/source2.py:1) at byte offset 50
25: POP_TOP(0)
70 ADD_TO_TRACE: _CHECK_VALIDITY (0, target=25, operand0=0, operand1=0)
71 ADD_TO_TRACE: _SET_IP (0, target=25, operand0=0x7ffff7a7e742, operand1=0)
72 ADD_TO_TRACE: _POP_TOP (0, target=25, operand0=0, operand1=0)
73 ADD_TO_TRACE: _JUMP_TO_TOP (0, target=0, operand0=0, operand1=0)
Created a proto-trace for <module> (/home/fusil/runs/python-555/code-cpu_load-assertion/source2.py:1) at byte offset 52 -- length 73
```
Output from `PYTHON_OPT_DEBUG=4`:
```
0 abs: _START_EXECUTOR (0, target=26, operand0=0x7f167283ad34, operand1=0) stack_level 2
1 abs: _MAKE_WARM (0, target=0, operand0=0, operand1=0) stack_level 2
2 abs: _CHECK_VALIDITY (0, target=26, operand0=0, operand1=0) stack_level 2
3 abs: _SET_IP (0, target=26, operand0=0x7f167283ad34, operand1=0) stack_level 2
4 abs: _CHECK_PERIODIC (0, target=26, operand0=0, operand1=0, error_target=0) stack_level 2
5 abs: _CHECK_VALIDITY (0, target=16, operand0=0, operand1=0) stack_level 2
6 abs: _SET_IP (0, target=16, operand0=0x7f167283ad20, operand1=0) stack_level 2
7 abs: _ITER_CHECK_RANGE (10, target=16, operand0=0, operand1=0) stack_level 2
8 abs: _GUARD_NOT_EXHAUSTED_RANGE (10, target=16, operand0=0, operand1=0) stack_level 2
9 abs: _ITER_NEXT_RANGE (10, target=16, operand0=0, operand1=0, error_target=0) stack_level 3
10 abs: _CHECK_VALIDITY (0, target=18, operand0=0, operand1=0) stack_level 3
11 abs: _SET_IP (0, target=18, operand0=0x7f167283ad24, operand1=0) stack_level 3
12 abs: _STORE_NAME (3, target=18, operand0=0, operand1=0, error_target=0) stack_level 2
13 abs: _CHECK_VALIDITY (0, target=19, operand0=0, operand1=0) stack_level 2
14 abs: _SET_IP (0, target=19, operand0=0x7f167283ad26, operand1=0) stack_level 2
15 abs: _LOAD_NAME (1, target=19, operand0=0, operand1=0, error_target=0) stack_level 3
16 abs: _CHECK_VALIDITY (0, target=20, operand0=0, operand1=0) stack_level 3
17 abs: _SET_IP (0, target=20, operand0=0x7f167283ad28, operand1=0) stack_level 3
18 abs: _PUSH_NULL (0, target=20, operand0=0, operand1=0) stack_level 4
19 abs: _CHECK_VALIDITY (0, target=21, operand0=0, operand1=0) stack_level 4
20 abs: _SET_IP (0, target=21, operand0=0x7f167283ad2a, operand1=0) stack_level 4
21 abs: _CHECK_PEP_523 (0, target=21, operand0=0, operand1=0) stack_level 4
22 abs: _CHECK_FUNCTION_VERSION (0, target=21, operand0=0x760, operand1=0) stack_level 4
23 abs: _CHECK_FUNCTION_EXACT_ARGS (0, target=21, operand0=0, operand1=0) stack_level 4
24 abs: _CHECK_STACK_SPACE (0, target=21, operand0=0, operand1=0x7ffda9a1bf20) stack_level 4
25 abs: _CHECK_RECURSION_REMAINING (0, target=21, operand0=0, operand1=0x56468d06ff63) stack_level 4
26 abs: _INIT_CALL_PY_EXACT_ARGS (0, target=21, operand0=0, operand1=0x3c) func=0x7f16728068d0 code=0x7f1672953940 stack_level 3
27 abs: _SAVE_RETURN_OFFSET (4, target=21, operand0=0, operand1=0x56468d081cef) stack_level 3
28 abs: _PUSH_FRAME (0, target=21, operand0=0x7f16728068d0, operand1=0x56468d620878) stack_level 0
29 abs: _CHECK_VALIDITY (0, target=0, operand0=0, operand1=0x56468cff61e1) stack_level 0
30 abs: _SET_IP (0, target=0, operand0=0x7f1672953a10, operand1=0) stack_level 0
31 abs: _RESUME_CHECK (0, target=0, operand0=0, operand1=0) stack_level 0
32 abs: _CHECK_VALIDITY (0, target=1, operand0=0, operand1=0) stack_level 0
33 abs: _SET_IP (0, target=1, operand0=0x7f1672953a12, operand1=0) stack_level 0
34 abs: _LOAD_CONST (0, target=1, operand0=0, operand1=0) stack_level 1
35 abs: _CHECK_VALIDITY (0, target=2, operand0=0, operand1=0) stack_level 1
36 abs: _SET_IP (0, target=2, operand0=0x7f1672953a14, operand1=0) stack_level 1
37 abs: _CHECK_FUNCTION (0, target=2, operand0=0x760, operand1=0) stack_level 1
38 abs: _LOAD_CONST_INLINE (0, target=2, operand0=0x7f1672884bf0, operand1=0x8) stack_level 2
39 abs: _PUSH_NULL_CONDITIONAL (0, target=2, operand0=0, operand1=0) stack_level 2
40 abs: _CHECK_VALIDITY (0, target=7, operand0=0, operand1=0) stack_level 2
41 abs: _SET_IP (0, target=7, operand0=0x7f1672953a1e, operand1=0) stack_level 2
42 abs: _STORE_ATTR (1, target=7, operand0=0, operand1=0, error_target=0) stack_level 0
43 abs: _CHECK_VALIDITY (0, target=12, operand0=0, operand1=0) stack_level 0
44 abs: _SET_IP (0, target=12, operand0=0x7f1672953a28, operand1=0) stack_level 0
45 abs: _NOP (0, target=12, operand0=0, operand1=0) stack_level 0
46 abs: _LOAD_CONST_INLINE (0, target=12, operand0=0x7f1672884bf0, operand1=0x8) stack_level 1
47 abs: _PUSH_NULL_CONDITIONAL (0, target=12, operand0=0, operand1=0) stack_level 1
48 abs: _CHECK_VALIDITY (0, target=17, operand0=0, operand1=0) stack_level 1
49 abs: _SET_IP (0, target=17, operand0=0x7f1672953a32, operand1=0) stack_level 1
```
Please let me know whether PYTHON_LLTRACE output is useful in JIT issues. Any suggestions about format or content are also very welcome :)
Found using [fusil](https://github.com/devdanzin/fusil) by @vstinner.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.15.0a0 (heads/main:b706ff003c5, Jun 17 2025, 06:07:17) [GCC 11.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-135613
* gh-135739
<!-- /gh-linked-prs -->
| b53b0c14da58b73955b8989e4a9ce19ea67db7a3 | c8c13f8036051c2858956905769b7b9dfde4bbd7 |
python/cpython | python__cpython-135614 | # data race with extension modules checking for empty weaklist
Extension modules which check for `self->weakreflist != NULL` in `tp_dealloc` before calling `PyObject_ClearWeakRefs(self)` cause a data race if another thread is concurrently mutating the weaklist.
In free-threading builds the weaklist is modified atomically, so the plain (non-atomic) reads in extension modules race with those writes. Extension modules can avoid this by always calling `PyObject_ClearWeakRefs` and removing the `weakreflist == NULL` check.
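A sketch of the pattern in question, for a hypothetical extension type (the fix is simply to drop the NULL check and call `PyObject_ClearWeakRefs()` unconditionally):

```c
static void
Foo_dealloc(PyObject *op)
{
    FooObject *self = (FooObject *)op;
    /* Racy under free-threading: another thread may be mutating the
     * weaklist atomically while this plain read happens:
     *
     *     if (self->weakreflist != NULL)
     *         PyObject_ClearWeakRefs(op);
     *
     * Safe: call unconditionally; it handles an empty list itself. */
    PyObject_ClearWeakRefs(op);
    Py_TYPE(op)->tp_free(op);
}
```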
I'll try to find a smaller reproducer for this.
<!-- gh-linked-prs -->
### Linked PRs
* gh-135614
* gh-136119
* gh-136126
<!-- /gh-linked-prs -->
| b1056c2a446b43452e457d5fd5f1bde66afd3883 | 0533c1faf27d1e50b062bb623dfad93288757f57 |
python/cpython | python__cpython-135572 | # Guard call to _hashlib in test_hashlib.py
# Feature or enhancement
### Proposal:
This is a trivial test modification, [spun out](https://github.com/python/cpython/pull/135402#issuecomment-2970965387) into a separate issue + PR from PR 135402.
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse and in PR 135402.
### Links to previous discussion of this feature:
https://discuss.python.org/t/support-building-ssl-and-hashlib-modules-against-aws-lc/44505/14
<!-- gh-linked-prs -->
### Linked PRs
* gh-135572
* gh-136041
* gh-136042
<!-- /gh-linked-prs -->
| 065194c1a971b59547f1bb2cc64760c4bf0ee674 | 731f5b8ab3970e344bfbc4ff86df767a0795f0fc |
python/cpython | python__cpython-135562 | # `_hmac`: GIL may be released while attempting to set an exception
# Bug report
### Bug description:
When calling `hmac.update()`, if the HACL* call fails, an exception is set. But this happens while the GIL is released, and setting an exception requires the thread to hold the GIL.
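An illustrative sketch (hypothetical function and message names, not the actual `_hmac` code) of the unsafe shape versus the fix; the exception must be set only after the GIL is re-acquired:

```c
/* Unsafe: PyErr_SetString() called while the GIL is released. */
Py_BEGIN_ALLOW_THREADS
if (Hacl_update(state, buf, len) != 0) {       /* hypothetical HACL* call */
    PyErr_SetString(PyExc_RuntimeError, "HACL* update failed");
}
Py_END_ALLOW_THREADS

/* Safe: remember the failure, set the exception once the GIL is held. */
int rc;
Py_BEGIN_ALLOW_THREADS
rc = Hacl_update(state, buf, len);
Py_END_ALLOW_THREADS
if (rc != 0) {
    PyErr_SetString(PyExc_RuntimeError, "HACL* update failed");
}
```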
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other, Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-135562
* gh-135725
<!-- /gh-linked-prs -->
| c76568339867422eca35876cabf82c06a55bbf56 | 9c3c02019cf0bc7792bbdd3a314e45642178e3b5 |
python/cpython | python__cpython-135601 | # data race between lock free list reads and heapq
Reproducer:
```py
import heapq
l = []
def writer():
while True:
heapq.heappush(l, 1)
heapq.heappop(l)
def reader():
while True:
try:
l[0]
except IndexError:
pass
def main():
import threading
threads = []
for _ in range(10):
t1 = threading.Thread(target=writer)
t2 = threading.Thread(target=reader)
threads.append(t1)
threads.append(t2)
t1.start()
t2.start()
for t in threads:
t.join()
main()
```
TSAN data race on current main:
```console
==================
WARNING: ThreadSanitizer: data race (pid=32935)
Write of size 8 at 0x7f90240300f8 by thread T3:
#0 PyList_SET_ITEM /home/realkumaraditya/cpython/./Include/cpython/listobject.h:47:26 (_heapq.cpython-315t-x86_64-linux-gnu.so+0x43fc) (BuildId: d2d0e185522b78f855f3798fbcc693ee0dba06e3)
#1 heappop_internal /home/realkumaraditya/cpython/./Modules/_heapqmodule.c:175:5 (_heapq.cpython-315t-x86_64-linux-gnu.so+0x43fc)
#2 _heapq_heappop_impl /home/realkumaraditya/cpython/./Modules/_heapqmodule.c:197:12 (_heapq.cpython-315t-x86_64-linux-gnu.so+0x2ae7) (BuildId: d2d0e185522b78f855f3798fbcc693ee0dba06e3)
#3 _heapq_heappop /home/realkumaraditya/cpython/./Modules/clinic/_heapqmodule.c.h:68:20 (_heapq.cpython-315t-x86_64-linux-gnu.so+0x2ae7)
#4 cfunction_vectorcall_O /home/realkumaraditya/cpython/Objects/methodobject.c:537:24 (python+0x2a2d0a) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#5 _PyObject_VectorcallTstate /home/realkumaraditya/cpython/./Include/internal/pycore_call.h:169:11 (python+0x1f61b3) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#6 PyObject_Vectorcall /home/realkumaraditya/cpython/Objects/call.c:327:12 (python+0x1f61b3)
#7 _PyEval_EvalFrameDefault /home/realkumaraditya/cpython/Python/generated_cases.c.h:1629:35 (python+0x404042) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#8 _PyEval_EvalFrame /home/realkumaraditya/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x3ff030) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#9 _PyEval_Vector /home/realkumaraditya/cpython/Python/ceval.c:1975:12 (python+0x3ff030)
#10 _PyFunction_Vectorcall /home/realkumaraditya/cpython/Objects/call.c (python+0x1f680f) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#11 _PyObject_VectorcallTstate /home/realkumaraditya/cpython/./Include/internal/pycore_call.h:169:11 (python+0x1fb226) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#12 method_vectorcall /home/realkumaraditya/cpython/Objects/classobject.c:72:20 (python+0x1fb226)
#13 _PyObject_VectorcallTstate /home/realkumaraditya/cpython/./Include/internal/pycore_call.h:169:11 (python+0x457ac1) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#14 context_run /home/realkumaraditya/cpython/Python/context.c:728:29 (python+0x457ac1)
#15 method_vectorcall_FASTCALL_KEYWORDS /home/realkumaraditya/cpython/Objects/descrobject.c:421:24 (python+0x20f499) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#16 _PyObject_VectorcallTstate /home/realkumaraditya/cpython/./Include/internal/pycore_call.h:169:11 (python+0x1f61b3) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#17 PyObject_Vectorcall /home/realkumaraditya/cpython/Objects/call.c:327:12 (python+0x1f61b3)
#18 _PyEval_EvalFrameDefault /home/realkumaraditya/cpython/Python/generated_cases.c.h:1629:35 (python+0x404042) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#19 _PyEval_EvalFrame /home/realkumaraditya/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x3ff030) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#20 _PyEval_Vector /home/realkumaraditya/cpython/Python/ceval.c:1975:12 (python+0x3ff030)
#21 _PyFunction_Vectorcall /home/realkumaraditya/cpython/Objects/call.c (python+0x1f680f) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#22 _PyObject_VectorcallTstate /home/realkumaraditya/cpython/./Include/internal/pycore_call.h:169:11 (python+0x1fb226) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#23 method_vectorcall /home/realkumaraditya/cpython/Objects/classobject.c:72:20 (python+0x1fb226)
#24 _PyVectorcall_Call /home/realkumaraditya/cpython/Objects/call.c:273:16 (python+0x1f64a9) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#25 _PyObject_Call /home/realkumaraditya/cpython/Objects/call.c:348:16 (python+0x1f64a9)
#26 PyObject_Call /home/realkumaraditya/cpython/Objects/call.c:373:12 (python+0x1f6515) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#27 thread_run /home/realkumaraditya/cpython/./Modules/_threadmodule.c:373:21 (python+0x5ad102) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#28 pythread_wrapper /home/realkumaraditya/cpython/Python/thread_pthread.h:232:5 (python+0x5047c7) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
Previous atomic read of size 8 at 0x7f90240300f8 by thread T2:
#0 _Py_atomic_load_ptr /home/realkumaraditya/cpython/./Include/cpython/pyatomic_gcc.h:300:18 (python+0x24bbb8) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#1 _Py_TryXGetRef /home/realkumaraditya/cpython/./Include/internal/pycore_object.h:632:23 (python+0x24bbb8)
#2 list_get_item_ref /home/realkumaraditya/cpython/Objects/listobject.c:366:22 (python+0x24bbb8)
#3 _PyList_GetItemRef /home/realkumaraditya/cpython/Objects/listobject.c:417:12 (python+0x24bd1e) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#4 _PyEval_EvalFrameDefault /home/realkumaraditya/cpython/Python/generated_cases.c.h:736:35 (python+0x401216) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#5 _PyEval_EvalFrame /home/realkumaraditya/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x3ff030) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#6 _PyEval_Vector /home/realkumaraditya/cpython/Python/ceval.c:1975:12 (python+0x3ff030)
#7 _PyFunction_Vectorcall /home/realkumaraditya/cpython/Objects/call.c (python+0x1f680f) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#8 _PyObject_VectorcallTstate /home/realkumaraditya/cpython/./Include/internal/pycore_call.h:169:11 (python+0x1fb226) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#9 method_vectorcall /home/realkumaraditya/cpython/Objects/classobject.c:72:20 (python+0x1fb226)
#10 _PyObject_VectorcallTstate /home/realkumaraditya/cpython/./Include/internal/pycore_call.h:169:11 (python+0x457ac1) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#11 context_run /home/realkumaraditya/cpython/Python/context.c:728:29 (python+0x457ac1)
#12 method_vectorcall_FASTCALL_KEYWORDS /home/realkumaraditya/cpython/Objects/descrobject.c:421:24 (python+0x20f499) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#13 _PyObject_VectorcallTstate /home/realkumaraditya/cpython/./Include/internal/pycore_call.h:169:11 (python+0x1f61b3) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#14 PyObject_Vectorcall /home/realkumaraditya/cpython/Objects/call.c:327:12 (python+0x1f61b3)
#15 _PyEval_EvalFrameDefault /home/realkumaraditya/cpython/Python/generated_cases.c.h:1629:35 (python+0x404042) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#16 _PyEval_EvalFrame /home/realkumaraditya/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x3ff030) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#17 _PyEval_Vector /home/realkumaraditya/cpython/Python/ceval.c:1975:12 (python+0x3ff030)
#18 _PyFunction_Vectorcall /home/realkumaraditya/cpython/Objects/call.c (python+0x1f680f) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#19 _PyObject_VectorcallTstate /home/realkumaraditya/cpython/./Include/internal/pycore_call.h:169:11 (python+0x1fb226) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#20 method_vectorcall /home/realkumaraditya/cpython/Objects/classobject.c:72:20 (python+0x1fb226)
#21 _PyVectorcall_Call /home/realkumaraditya/cpython/Objects/call.c:273:16 (python+0x1f64a9) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#22 _PyObject_Call /home/realkumaraditya/cpython/Objects/call.c:348:16 (python+0x1f64a9)
#23 PyObject_Call /home/realkumaraditya/cpython/Objects/call.c:373:12 (python+0x1f6515) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#24 thread_run /home/realkumaraditya/cpython/./Modules/_threadmodule.c:373:21 (python+0x5ad102) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#25 pythread_wrapper /home/realkumaraditya/cpython/Python/thread_pthread.h:232:5 (python+0x5047c7) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
Thread T3 'Thread-3 (push)' (tid=32945, running) created by main thread at:
#0 pthread_create <null> (python+0xe21ef) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#1 do_start_joinable_thread /home/realkumaraditya/cpython/Python/thread_pthread.h:279:14 (python+0x5038a8) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#2 PyThread_start_joinable_thread /home/realkumaraditya/cpython/Python/thread_pthread.h:321:9 (python+0x5036ca) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#3 ThreadHandle_start /home/realkumaraditya/cpython/./Modules/_threadmodule.c:459:9 (python+0x5acc97) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#4 do_start_new_thread /home/realkumaraditya/cpython/./Modules/_threadmodule.c:1869:9 (python+0x5acc97)
#5 thread_PyThread_start_joinable_thread /home/realkumaraditya/cpython/./Modules/_threadmodule.c:1984:14 (python+0x5aba61) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#6 cfunction_call /home/realkumaraditya/cpython/Objects/methodobject.c:565:18 (python+0x2a34f7) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#7 _PyObject_MakeTpCall /home/realkumaraditya/cpython/Objects/call.c:242:18 (python+0x1f5661) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#8 _PyObject_VectorcallTstate /home/realkumaraditya/cpython/./Include/internal/pycore_call.h:167:16 (python+0x1f6271) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#9 PyObject_Vectorcall /home/realkumaraditya/cpython/Objects/call.c:327:12 (python+0x1f6271)
#10 _PyEval_EvalFrameDefault /home/realkumaraditya/cpython/Python/generated_cases.c.h:3245:35 (python+0x40a1e0) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#11 _PyEval_EvalFrame /home/realkumaraditya/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x3feb6f) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#12 _PyEval_Vector /home/realkumaraditya/cpython/Python/ceval.c:1975:12 (python+0x3feb6f)
#13 PyEval_EvalCode /home/realkumaraditya/cpython/Python/ceval.c:866:21 (python+0x3feb6f)
#14 run_eval_code_obj /home/realkumaraditya/cpython/Python/pythonrun.c:1365:12 (python+0x4e12dc) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#15 run_mod /home/realkumaraditya/cpython/Python/pythonrun.c:1436:19 (python+0x4e12dc)
#16 pyrun_file /home/realkumaraditya/cpython/Python/pythonrun.c:1293:15 (python+0x4dc83e) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#17 _PyRun_SimpleFileObject /home/realkumaraditya/cpython/Python/pythonrun.c:521:13 (python+0x4dc83e)
#18 _PyRun_AnyFileObject /home/realkumaraditya/cpython/Python/pythonrun.c:81:15 (python+0x4dbf98) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#19 pymain_run_file_obj /home/realkumaraditya/cpython/Modules/main.c:410:15 (python+0x52215f) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#20 pymain_run_file /home/realkumaraditya/cpython/Modules/main.c:429:15 (python+0x52215f)
#21 pymain_run_python /home/realkumaraditya/cpython/Modules/main.c:691:21 (python+0x521488) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#22 Py_RunMain /home/realkumaraditya/cpython/Modules/main.c:772:5 (python+0x521488)
#23 pymain_main /home/realkumaraditya/cpython/Modules/main.c:802:12 (python+0x5219f8) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#24 Py_BytesMain /home/realkumaraditya/cpython/Modules/main.c:826:12 (python+0x521a7b) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#25 main /home/realkumaraditya/cpython/./Programs/python.c:15:12 (python+0x1607fb) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
Thread T2 'Thread-2 (pop)' (tid=32944, running) created by main thread at:
#0 pthread_create <null> (python+0xe21ef) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#1 do_start_joinable_thread /home/realkumaraditya/cpython/Python/thread_pthread.h:279:14 (python+0x5038a8) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#2 PyThread_start_joinable_thread /home/realkumaraditya/cpython/Python/thread_pthread.h:321:9 (python+0x5036ca) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#3 ThreadHandle_start /home/realkumaraditya/cpython/./Modules/_threadmodule.c:459:9 (python+0x5acc97) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#4 do_start_new_thread /home/realkumaraditya/cpython/./Modules/_threadmodule.c:1869:9 (python+0x5acc97)
#5 thread_PyThread_start_joinable_thread /home/realkumaraditya/cpython/./Modules/_threadmodule.c:1984:14 (python+0x5aba61) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#6 cfunction_call /home/realkumaraditya/cpython/Objects/methodobject.c:565:18 (python+0x2a34f7) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#7 _PyObject_MakeTpCall /home/realkumaraditya/cpython/Objects/call.c:242:18 (python+0x1f5661) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#8 _PyObject_VectorcallTstate /home/realkumaraditya/cpython/./Include/internal/pycore_call.h:167:16 (python+0x1f6271) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#9 PyObject_Vectorcall /home/realkumaraditya/cpython/Objects/call.c:327:12 (python+0x1f6271)
#10 _PyEval_EvalFrameDefault /home/realkumaraditya/cpython/Python/generated_cases.c.h:3245:35 (python+0x40a1e0) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#11 _PyEval_EvalFrame /home/realkumaraditya/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x3feb6f) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#12 _PyEval_Vector /home/realkumaraditya/cpython/Python/ceval.c:1975:12 (python+0x3feb6f)
#13 PyEval_EvalCode /home/realkumaraditya/cpython/Python/ceval.c:866:21 (python+0x3feb6f)
#14 run_eval_code_obj /home/realkumaraditya/cpython/Python/pythonrun.c:1365:12 (python+0x4e12dc) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#15 run_mod /home/realkumaraditya/cpython/Python/pythonrun.c:1436:19 (python+0x4e12dc)
#16 pyrun_file /home/realkumaraditya/cpython/Python/pythonrun.c:1293:15 (python+0x4dc83e) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#17 _PyRun_SimpleFileObject /home/realkumaraditya/cpython/Python/pythonrun.c:521:13 (python+0x4dc83e)
#18 _PyRun_AnyFileObject /home/realkumaraditya/cpython/Python/pythonrun.c:81:15 (python+0x4dbf98) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#19 pymain_run_file_obj /home/realkumaraditya/cpython/Modules/main.c:410:15 (python+0x52215f) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#20 pymain_run_file /home/realkumaraditya/cpython/Modules/main.c:429:15 (python+0x52215f)
#21 pymain_run_python /home/realkumaraditya/cpython/Modules/main.c:691:21 (python+0x521488) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#22 Py_RunMain /home/realkumaraditya/cpython/Modules/main.c:772:5 (python+0x521488)
#23 pymain_main /home/realkumaraditya/cpython/Modules/main.c:802:12 (python+0x5219f8) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#24 Py_BytesMain /home/realkumaraditya/cpython/Modules/main.c:826:12 (python+0x521a7b) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
#25 main /home/realkumaraditya/cpython/./Programs/python.c:15:12 (python+0x1607fb) (BuildId: 957e91b4c4b85547a8053d7d39b3c2228cdefcde)
SUMMARY: ThreadSanitizer: data race /home/realkumaraditya/cpython/./Include/cpython/listobject.h:47:26 in PyList_SET_ITEM
==================
```
In https://github.com/python/cpython/pull/135036/ the heapq module was made thread-safe, but it still uses non-atomic writes, so those writes race with the lock-free reads in `list`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-135601
* gh-135787
<!-- /gh-linked-prs -->
| 13cac833471885564cbfde72a4cbac64ade3137a | 8ca1e4d846e868a20834cf442c48a3648b558bbe |
python/cpython | python__cpython-135553 | # Change how sorting picks minimum run length
# Bug report
### Bug description:
On randomly ordered data, Python's sort ends up artificially forcing all initial runs to length `minrun`, whose value is a function of `len(list)`, picked in a cheap way to avoid the very worst cases of imbalance in the final merge.
Stefan Pochmann has invented a much better way, which may vary `minrun` a little from one run to the next. Under his scheme, at all levels of the merge tree:
- The number of runs is a power of 2.
- At most two different values of run lengths are in play.
- The lengths of run pairs to be merged differ by no more than 1.
So, in all cases, in all ways it's as close to perfectly balanced as possible.
For randomly ordered inputs, this eliminates all cases of "bad imbalance", cuts the average number of compares a little, and cuts the amount of pointer copying due to unbalanced merges.
Following is a Python prototype verifying the claims. Variables starting with "mr_" will become part of the `MergeState` struct, and the loop body in the generator will become a brief inlined function.
```python
from itertools import accumulate
try:
from itertools import batched
except ImportError:
from itertools import islice
def batched(xs, k):
it = iter(xs)
while chunk := tuple(islice(it, k)):
yield chunk
MAX_MINRUN = 64
def gen_minruns(n):
# mr_int = minrun's integral part
# mr_frac = minrun's fractional part with mr_e bits and
# mask mr_mask
mr_int = n
mr_e = 0
while mr_int >= MAX_MINRUN:
mr_int >>= 1
mr_e += 1
mr_mask = (1 << mr_e) - 1
mr_frac = n & mr_mask
mr_current_frac = 0
while True:
mr_current_frac += mr_frac
assert mr_current_frac >> mr_e <= 1
yield mr_int + (mr_current_frac >> mr_e)
mr_current_frac &= mr_mask
def chew(n, show=False):
if n < 1:
return
sizes = []
tot = 0
for size in gen_minruns(n):
sizes.append(size)
tot += size
if tot >= n:
break
assert tot == n
print(n, len(sizes))
small, large = 32, 64
while len(sizes) > 1:
assert not len(sizes) & 1
assert len(sizes).bit_count() == 1 # i.e., power of 2
assert sum(sizes) == n
assert min(sizes) >= min(n, small)
assert max(sizes) <= large
d = set(sizes)
assert len(d) <= 2
if len(d) == 2:
lo, hi = sorted(d)
assert lo + 1 == hi
mr = n / len(sizes)
for i, s in enumerate(accumulate(sizes, initial=0)):
assert int(mr * i) == s
newsizes = []
for a, b in batched(sizes, 2):
assert abs(a - b) <= 1
newsizes.append(a + b)
sizes = newsizes
        small = large
large *= 2
assert sizes[0] == n
for n in range(2_000_001):
chew(n)
```
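For a concrete illustration (my addition, not part of the proposal), this is what the scheme produces for a few list sizes; `gen_minruns` is repeated from the prototype above so the snippet runs on its own:

```python
MAX_MINRUN = 64

def gen_minruns(n):
    # Same scheme as the prototype: split n into near-equal pieces whose
    # count is a power of 2 and whose lengths differ by at most 1.
    mr_int = n
    mr_e = 0
    while mr_int >= MAX_MINRUN:
        mr_int >>= 1
        mr_e += 1
    mr_mask = (1 << mr_e) - 1
    mr_frac = n & mr_mask
    mr_current_frac = 0
    while True:
        mr_current_frac += mr_frac
        yield mr_int + (mr_current_frac >> mr_e)
        mr_current_frac &= mr_mask

def runs(n):
    # Collect run lengths until they cover the whole list.
    sizes = []
    tot = 0
    for size in gen_minruns(n):
        sizes.append(size)
        tot += size
        if tot >= n:
            break
    return sizes

print(runs(100))   # -> [50, 50]
print(runs(150))   # -> [37, 38, 37, 38]
print(runs(1000))  # -> 16 runs, each of length 62 or 63, summing to 1000
```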
### CPython versions tested on:
3.14, CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-135553
<!-- /gh-linked-prs -->
| 2fc68e180ffdb31886938203e89a75b220a58cec | b38810bab76c11ea09260a817b3354aebc2af580 |
python/cpython | python__cpython-135544 | # Add new audit event for `sys.remote_exec()`
I think we should get an audit event in ` sys.remote_exec()` yes
_Originally posted by @pablogsal in https://github.com/python/cpython/pull/135362#discussion_r2140528333_
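A quick sketch of what observing such an event would look like from Python. The event name `sys.remote_exec` is an assumption matching the function name, and the event is raised manually with `sys.audit()` here for illustration:

```python
import sys

seen = []

def hook(event, args):
    # Record only the event this issue proposes to emit.
    if event == "sys.remote_exec":
        seen.append(args)

sys.addaudithook(hook)

# Simulate the proposed event; once implemented, sys.remote_exec(pid, path)
# itself would raise it before doing any work.
sys.audit("sys.remote_exec", 1234, "payload.py")
print(seen)  # -> [(1234, 'payload.py')]
```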
<!-- gh-linked-prs -->
### Linked PRs
* gh-135544
* gh-135732
<!-- /gh-linked-prs -->
| 1ddfe593200fec992d283a9b4d6ad2f1b535c018 | bb9596fcfa50eac31c728b23a614bcbbf0386cd9 |
python/cpython | python__cpython-135742 | # Rewrite & cleanup HACL*-based extension modules
I recently rewrote the `_blake2` module. I want to do the same for the other modules so that I can align the naming of functions across the different modules. The reason I'm doing this is that it's becoming harder to make "similar" changes everywhere. In general, if I need to change something in MD5, then I also need to change it in SHA1/SHA2/SHA3, and that's easier if the code looks similar elsewhere.
There are some places that need to be updated because they are dead code, e.g.:
```c
/*[clinic input]
module _sha2
class SHA256Type "SHA256object *" "&PyType_Type"
class SHA512Type "SHA512object *" "&PyType_Type"
[clinic start generated code]*/
```
should be
```c
/*[clinic input]
module _sha2
class SHA256Type "SHA256object *" "clinic_state()->sha256_type"
class SHA512Type "SHA512object *" "clinic_state()->sha512_type"
[clinic start generated code]*/
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-135742
* gh-135744
* gh-135822
* gh-135838
* gh-135844
<!-- /gh-linked-prs -->
### Bug fixes (3.14+)
* gh-135741
* gh-135745
### Backported PRs (HMAC-only)
* gh-135740
* gh-135743
### Abandoned
* gh-135536
* gh-135821
* https://github.com/picnixz/cpython/pull/2 | eec7a8ff22dcf409717a21a9aeab28b55526ee24 | 57dba7c9a59ab345f859810bad6608a9d1a0fdd6 |
python/cpython | python__cpython-135514 | # "variable tstate set but not used" in `crossinterp.c`
# Bug report
<img width="990" alt="Image" src="https://github.com/user-attachments/assets/0928b942-40e5-462e-bf3e-44ca12463647" />
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-135514
* gh-135577
<!-- /gh-linked-prs -->
| 4c15505071498439407483004721d0369f110229 | f0799795994bfd9ab0740c4d70ac54270991ba47 |
python/cpython | python__cpython-135505 | # Document `LIBZSTD_CFLAGS` and `LIBZSTD_LIBS` in `configure.rst`
# Feature or enhancement
Link: https://docs.python.org/3/using/configure.html#options-for-third-party-dependencies
Refs https://github.com/python/cpython/issues/132983
CC @emmatyping
I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-135505
* gh-135515
<!-- /gh-linked-prs -->
| fc413ecb8f4bf1c59b29932695e3538548eb1a8a | 028309fb47869b665f55d10e9eabf7952bf7dbd3 |
python/cpython | python__cpython-135508 | # os.getlogin() fails with OSError: [Errno 34] for usernames > 9 characters
# Bug report
### Bug description:
```python
import os
print(os.getlogin())
```
```console
$ python3 -c "import os; print(os.getlogin())"
Traceback (most recent call last):
File "<string>", line 1, in <module>
import os; print(os.getlogin())
~~~~~~~~~~~^^
OSError: [Errno -34] Unknown error: -34
```
It appears that if Python is compiled in an environment without `HAVE_MAXLOGNAME` but with `HAVE_UT_NAMESIZE`, `getlogin_r` will return error 34 if username is >9 characters.
This appears to be a regression introduced in #132751:
https://github.com/duaneg/cpython/blob/675342cf59ffe53337d92af989b97dad687a10ea/configure.ac#L5434
The correct header file to search for is `sys/param.h`, not `sys/params.h`. As a result, `HAVE_UT_NAMESIZE` is satisfied (via `utmp.h`) but `HAVE_MAXLOGNAME` is not.
This was originally reported to Homebrew via https://github.com/Homebrew/homebrew-core/issues/226857.
### CPython versions tested on:
3.15
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-135508
* gh-135516
* gh-135517
<!-- /gh-linked-prs -->
| 2e15a50851da66eb8227ec6ea07a9cc7ed08fbf3 | fc413ecb8f4bf1c59b29932695e3538548eb1a8a |
python/cpython | python__cpython-135495 | # Typo in f-string conversion type error
# Bug report
### Bug description:
```python
f"{1! r}"
```
Gives the error
```
SyntaxError: f-string: conversion type must come right after the exclamanation mark
```
There is a typo in the word "exclamanation" (should be "exclamation")
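The message can also be reproduced without triggering the error at import time by compiling the snippet explicitly; the exact wording varies across versions, so the sketch only captures that a `SyntaxError` is raised:
```python
# compile() raises the same SyntaxError the parser reports for f-strings
try:
    compile('f"{1! r}"', "<repro>", "eval")
    raised = False
except SyntaxError as exc:
    raised = True
    message = exc.msg  # contains the misspelled "exclamanation" on affected versions
```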
### CPython versions tested on:
3.13, 3.12
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-135495
* gh-135499
* gh-135501
<!-- /gh-linked-prs -->
| c2bb3f9843bc4763d6d41e883dbe9525f5155a4a | 56eabea056ae1da49b55e0f794c9957008f8d20a |
python/cpython | python__cpython-135713 | # Regression tests do not support exclusion and pgo in the same invocation
# Bug report
### Bug description:
## Background
I've been working to build from source on Alpine. Starting in 3.13, PGO optimizations now error out, and that led to the test_re tests failing and blocking builds. I didn't want to fully disable PGO optimizations and miss out on all of the related performance wins.
## Attempted Fix and Problem Identification
I initially attempted to preclude this by leveraging `PROFILE_TASK="-m test --pgo -x test_re"` when calling make. This had the impact of making all tests *except* the pgo tests run. After some spelunking I found that this logic from `Lib/test/libregrtest/main.py` is the culprit.
```python
def find_tests(self, tests: TestList | None = None) -> tuple[TestTuple, TestList | None]:
if tests is None:
tests = []
if self.single_test_run:
self.next_single_filename = os.path.join(self.tmp_dir, 'pynexttest')
try:
with open(self.next_single_filename, 'r') as fp:
next_test = fp.read().strip()
tests = [next_test]
except OSError:
pass
if self.fromfile:
tests = []
# regex to match 'test_builtin' in line:
# '0:00:00 [ 4/400] test_builtin -- test_dict took 1 sec'
regex = re.compile(r'\btest_[a-zA-Z0-9_]+\b')
with open(os.path.join(os_helper.SAVEDCWD, self.fromfile)) as fp:
for line in fp:
line = line.split('#', 1)[0]
line = line.strip()
match = regex.search(line)
if match is not None:
tests.append(match.group())
strip_py_suffix(tests)
if self.pgo:
# add default PGO tests if no tests are specified
setup_pgo_tests(self.cmdline_args, self.pgo_extended)
if self.tsan:
setup_tsan_tests(self.cmdline_args)
if self.tsan_parallel:
setup_tsan_parallel_tests(self.cmdline_args)
exclude_tests = set()
if self.exclude:
for arg in self.cmdline_args:
exclude_tests.add(arg)
self.cmdline_args = []
alltests = findtests(testdir=self.test_dir,
exclude=exclude_tests)
if not self.fromfile:
selected = tests or self.cmdline_args
if selected:
selected = split_test_packages(selected)
else:
selected = alltests
else:
selected = tests
if self.single_test_run:
selected = selected[:1]
try:
pos = alltests.index(selected[0])
self.next_single_test = alltests[pos + 1]
except IndexError:
pass
# Remove all the selected tests that precede start if it's set.
if self.starting_test:
try:
del selected[:selected.index(self.starting_test)]
except ValueError:
print(f"Cannot find starting test: {self.starting_test}")
sys.exit(1)
random.seed(self.random_seed)
if self.randomize:
random.shuffle(selected)
for priority_test in reversed(self.prioritize_tests):
try:
selected.remove(priority_test)
except ValueError:
print(f"warning: --prioritize={priority_test} used"
f" but test not actually selected")
continue
else:
selected.insert(0, priority_test)
return (tuple(selected), tests)
```
Due to the order of operations, the PGO test suite gets added to `cmdline_args`, and then those tests all get excluded, leaving everything else to run.
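The interaction can be modeled in a few lines (all names below are illustrative, not the real regrtest API): because the PGO tests are appended to `cmdline_args` before the exclusion loop drains `cmdline_args` into the exclusion set, the PGO tests themselves end up excluded:
```python
PGO_TESTS = ["test_re", "test_json", "test_pickle"]  # illustrative subset
ALL_TESTS = ["test_re", "test_json", "test_pickle", "test_os", "test_sys"]

def find_tests(cmdline_args, pgo, exclude):
    if pgo:
        cmdline_args = cmdline_args + PGO_TESTS  # setup_pgo_tests() appends defaults
    if exclude:
        # with -x, every name on the command line is treated as an exclusion
        excluded = set(cmdline_args)
        cmdline_args = []
    else:
        excluded = set()
    alltests = [t for t in ALL_TESTS if t not in excluded]
    return cmdline_args or alltests

# "--pgo -x test_re": the freshly added PGO list is itself excluded,
# so everything *except* the PGO tests runs.
selected = find_tests(["test_re"], pgo=True, exclude=True)
```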
## Hacky Workaround
My eventual workaround is... less than great. I ultimately settled on two choices: explicitly maintain my own copy of the PGO test list in my dockerfile, or update the list defined in the testing framework. I opted for the latter and have added this to my build script.
```bash
sed -Ei "s/'test_re',?$//g" Lib/test/libregrtest/pgo.py
```
This felt like the option less likely to break over time, as we build not only 3.13 but other versions as well.
## Suggested Improvement Options
**1. Automatic Exclusion for Musl:**
Automatically exclude incompatible tests on musl-based systems, which appears consistent with other test behaviors. This minimizes friction for maintainers who otherwise may forgo PGO optimizations.
From my view, this is the preferred option.
**2. Improved Test Exclusion Logic:**
Adjust `--pgo` and `-x` flag logic to allow intuitive exclusion of PGO tests, or otherwise enable this behavior, and combine with clearer documentation for builders.
**3. Explicit Error Messaging:**
Explicitly error when these flags are used together to reduce confusion and accidental behaviors. This aligns with other checks for mutually exclusive flags.
### CPython versions tested on:
3.13
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-135713
* gh-135880
* gh-135881
<!-- /gh-linked-prs -->
| 15c6d63fe6fc62c6d78d2fad81965a8e6f7b7b98 | 2060089254f0b00199b99dd1ae83a3fb139e890c |
python/cpython | python__cpython-135506 | # Using `reprlib.repr` fails for very large numbers
# Bug report
### Bug description:
### The bug
`reprlib.Repr.repr_int` fails if the integer exceeds the string conversion limit (on line 184)
https://github.com/python/cpython/blob/6eb6c5dbfb528bd07d77b60fd71fd05d81d45c41/Lib/reprlib.py#L183-L189
This impacts other parts of the standard library which use `reprlib.Repr`. A [quick code search](https://github.com/search?q=repo%3Apython%2Fcpython+reprlib+language%3APython+NOT+path%3A%2F%5ELib%5C%2Ftest%5C%2F%2F&type=code&l=Python) reveals that this affects/could affect:
- `pydoc`
- `bdb`
- `idlelib` & IDLE (debugger-related things)
- `asyncio`
See https://github.com/python/cpython/issues/135487#issuecomment-2971467201 for detailed rationale for why IMHO this is a bug.
### `pydoc` example
Given the following code (in a file called `temp11.py`),
```python
a = 1 << 19000
```
running `pydoc` on it like `python -m pydoc '.\temp11.py'` gives the following error:
<details><summary>Stack trace</summary>
<p>
<pre><code>Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Python313\Lib\pydoc.py", line 2846, in <module>
cli()
~~~^^
File "C:\Python313\Lib\pydoc.py", line 2807, in cli
help.help(arg, is_cli=True)
~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "C:\Python313\Lib\pydoc.py", line 2062, in help
else: doc(request, 'Help on %s:', output=self._output, is_cli=is_cli)
~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python313\Lib\pydoc.py", line 1773, in doc
pager(render_doc(thing, title, forceload), f'Help on {what!s}')
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python313\Lib\pydoc.py", line 1758, in render_doc
return title % desc + '\n\n' + renderer.document(object, name)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "C:\Python313\Lib\pydoc.py", line 543, in document
if inspect.ismodule(object): return self.docmodule(*args)
~~~~~~~~~~~~~~^^^^^^^
File "C:\Python313\Lib\pydoc.py", line 1375, in docmodule
contents.append(self.docother(value, key, name, maxlen=70))
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python313\Lib\pydoc.py", line 1642, in docother
repr = self.repr(object)
File "C:\Python313\Lib\reprlib.py", line 71, in repr
return self.repr1(x, self.maxlevel)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "C:\Python313\Lib\pydoc.py", line 1234, in repr1
return getattr(self, methodname)(x, level)
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
File "C:\Python313\Lib\reprlib.py", line 184, in repr_int
s = builtins.repr(x) # XXX Hope this isn't too slow...
ValueError: Exceeds the limit (4300 digits) for integer string conversion; use sys.set_int_max_str_digits() to increase the limit
</code></pre>
</p>
</details>
### Possible fix
The underlying issue is that the size of the integer isn't checked before calling builtin `repr`.
A way to fix this may be to display it as hexadecimal over a certain threshold (then truncate the middle like usual). (IMHO it's not worth it to add a special method to compute the first and last `n` digits of a number without calculating the middle ones)
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-135506
* gh-135886
* gh-135887
<!-- /gh-linked-prs -->
| e5f03b94b6d4decbf433d385f692c1b8d9b7e88d | 15c6d63fe6fc62c6d78d2fad81965a8e6f7b7b98 |
python/cpython | python__cpython-136797 | # `urllib.request.HTTPRedirectHandler` documentation uses `hdrs` instead of `headers`
# Documentation
Hello, `HTTPRedirectHandler` documentation documents some methods take HTTP headers in a parameter named `hdrs`:
https://github.com/python/cpython/blob/c646846c1e8f60e52b636b68bfd0df5e84229098/Doc/library/urllib.request.rst#L896
https://github.com/python/cpython/blob/c646846c1e8f60e52b636b68bfd0df5e84229098/Doc/library/urllib.request.rst#L915
https://github.com/python/cpython/blob/c646846c1e8f60e52b636b68bfd0df5e84229098/Doc/library/urllib.request.rst#L921
...
However, the actual implementation uses `headers`:
https://github.com/python/cpython/blob/c646846c1e8f60e52b636b68bfd0df5e84229098/Lib/urllib/request.py#L621
https://github.com/python/cpython/blob/c646846c1e8f60e52b636b68bfd0df5e84229098/Lib/urllib/request.py#L660
Would it be possible to use the same name in the documentation and in the implementation?
## Context
In some project, the following class is used to perform HTTPS requests without following redirection:
```python3
class NoRedirect(urllib.request.HTTPRedirectHandler):
def redirect_request(
self, req: Any, fp: Any, code: Any, msg: Any, hdrs: Any, newurl: Any
) -> None:
pass
```
Running a type checker ([basedpyright](https://docs.basedpyright.com/latest/)) on this code reports a warning:
```text
error: Method "redirect_request" overrides class "HTTPRedirectHandler" in an incompatible manner
Parameter 6 name mismatch: base parameter is named "headers",
override parameter is named "hdrs" (reportIncompatibleMethodOverride)
```
This was quite unexpected because the official documentation (https://docs.python.org/3.13/library/urllib.request.html#urllib.request.HTTPRedirectHandler.redirect_request) mentions `hdrs` instead of `headers`. It nonetheless makes more sense to name the parameter `headers`, in case callers pass the argument by keyword.
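For reference, such a handler is installed via an opener; with `redirect_request` returning `None`, a 3xx response surfaces as `urllib.error.HTTPError` instead of being followed. A self-contained sketch using the correctly named `headers` parameter:
```python
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    # Returning None tells the parent class to raise HTTPError on 3xx.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

opener = urllib.request.build_opener(NoRedirect)
# opener.open(url) now raises urllib.error.HTTPError on redirect responses.
```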
<!-- gh-linked-prs -->
### Linked PRs
* gh-136797
* gh-136825
* gh-136826
<!-- /gh-linked-prs -->
| 57acd65a30f8cb1f3a3cc01322f03215017f5caa | 8ffc3ef01e83ffe629c6107082677de4d23974d5 |
python/cpython | python__cpython-135464 | # Worst case quadratic complexity in HTMLParser
<!-- gh-linked-prs -->
### Linked PRs
* gh-135464
* gh-135481
* gh-135482
* gh-135483
* gh-135484
* gh-135485
* gh-135486
<!-- /gh-linked-prs -->
| 6eb6c5dbfb528bd07d77b60fd71fd05d81d45c41 | 14c1d093d52d8de32c486808c1e1e85e0f61c11c |
python/cpython | python__cpython-135461 | # PyManager packaging of 3.11-3.12 has broken venv scripts
See https://github.com/python/pymanager/issues/133
In short, using the new `PC/layout` script against older versions skipped a rename that used to be performed at build time. As a result, the republished 3.11 and 3.12 packages have a non-functional `venv` module on Windows.
Once the script is fixed (and tested more thoroughly), I'll repackage and republish the 3.11 and 3.12 releases. This will update their hashes, but should update the feed as well so that it still all matches. That's what beta is for I guess.
The fix won't need to be backported, because repackaging uses the script from `main` (which was fundamentally the root cause). I'll do a full diff between the old and current version of the scripts to see if any other changes slipped through, but I don't think there was anything that wasn't purely additive.
FYI @pablogsal and @Yhg1s as RMs for the affected releases. Nothing for you to worry about, though.
<!-- gh-linked-prs -->
### Linked PRs
* gh-135461
* gh-135471
* gh-135472
<!-- /gh-linked-prs -->
| afc5ab6cce9d7095b99c1410a6762bc4a96504dd | f4bc3a932082411243da9bb909841138ee2eea97 |
python/cpython | python__cpython-135466 | # List comprehensions cause subinterpreter crashes
# Crash report
### What happened?
If you use any list comprehension inside a subinterpreter (either through `_interpreters` or through `interpreters`), an assertion fails in `_PyCode_CheckNoExternalState`:
```python
import _interpreters
interp = _interpreters.create()
_interpreters.run_string(interp, "[None for _ in []]")
```
```
python: Objects/codeobject.c:1982: _PyCode_CheckNoExternalState: Assertion `counts->locals.hidden.total == 0' failed.
```
This was originally causing failures in #134606. I think that assertion can be removed, because `_PyCode_CheckNoExternalState` seems to have no need for the number of hidden locals being zero.
cc @ericsnowcurrently
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-135466
* gh-135694
<!-- /gh-linked-prs -->
| 15f2bac02c5e106f04a93ce73fd93cc305253405 | e9b647dd30d22cef465972d898a34c4b1bb6615d |
python/cpython | python__cpython-135445 | # 3.13 asyncio DatagramTransport buffer size accounting regression.
# Bug report
### Bug description:
Commit 73e8637002639e565938d3f205bf46e7f1dbd6a8 added 8 to the buffer_size when
send could not be called right away. However, it never completed the accounting by removing those 8 bytes from the buffer size once the send finally completed.
I have a proposed fix and will file a PR shortly.
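A toy model of the invariant involved (constants and names are illustrative, not the asyncio internals): whatever is added to the buffer size when a datagram is queued must be subtracted when its send completes, otherwise the reported size drifts upward forever:
```python
HEADER = 8  # per-datagram overhead added when the send is deferred
buffer_size = 0
queue = []

def queue_datagram(data):
    # transport could not send immediately: queue it and grow the buffer size
    global buffer_size
    queue.append(data)
    buffer_size += len(data) + HEADER

def send_completed():
    # the matching subtraction that the regression omitted
    global buffer_size
    data = queue.pop(0)
    buffer_size -= len(data) + HEADER

queue_datagram(b"hello")
queue_datagram(b"world")
send_completed()
send_completed()
```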
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-135445
* gh-137245
* gh-137246
<!-- /gh-linked-prs -->
| e3ea8613519bd08aa6ce7d142403e644ae32d843 | 9d3b53c47fab9ebf1f40d6f21b7d1ad391c14cd7 |
python/cpython | python__cpython-135491 | # Interpreter.call() Fails For Various Builtins
# Bug report
### Bug description:
This includes any function that calls `_PyEval_GetGlobals()` or `_PyEval_GetFrameLocals()`. Here's a list of the ones I noticed:
* `globals()`
* `locals()`
* `dir()`
* `vars()`
* `exec()`
* `eval()`
For example:
```python
from concurrent import interpreters
interp = interpreters.create()
# This raises SystemError.
interp.call(eval, 'True')
```
Ideally it would fall back to `__main__.__dict__` for the globals and locals.
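For comparison, `eval()` only needs this fallback when no frame globals are available; supplying globals explicitly always works, which is why defaulting to `__main__.__dict__` is a plausible behavior. A sketch of the explicit-globals form:
```python
import __main__

# With explicit globals, eval() never needs to ask the current frame for them.
result = eval("1 + 1", vars(__main__))
```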
----
FWIW, this is most obvious with subinterpreters, but a similar problem applies to any user of the C-API that doesn't have a frame set.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-135491
* gh-135593
<!-- /gh-linked-prs -->
| a450a0ddec7a2813fb4603f0df406fa33750654a | 68b7e1a6677d7a8fb47fbd28cb5d39a87217273c |
python/cpython | python__cpython-135438 | # _PyCode_VerifyStateless(): Assert Fails If a Name Is Both a Global And an Attr
# Crash report
### Bug description:
For example, the following function will trigger the problem:
```python
import pprint
def spam():
pprint.pprint([])
```
Crash in `test_code` (on a debug build):
```
python: Objects/codeobject.c:1944: _PyCode_SetUnboundVarCounts: Assertion `unbound.total <= counts->unbound.total' failed.
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-135438
* gh-135493
<!-- /gh-linked-prs -->
| 56eabea056ae1da49b55e0f794c9957008f8d20a | c7f4a80079eefc02839f193ba07ce1b33d067d8d |
python/cpython | python__cpython-135442 | # 3.14.0b2 yield gives TypeError: _pystart_callback expected 2 arguments, got 3
# Bug report
### Bug description:
I thought I'd add 3.14-dev to our CI on GitHub Actions, and after fixing some obvious issues, this one was left (the code is tested successfully on 3.10 .. 3.13):
```python
File "/home/runner/work/borg/borg/src/borg/archiver/create_cmd.py", line 472, in _rec_walk
with OsOpen(
~~~~~~^
path=path, parent_fd=parent_fd, name=name, flags=flags_dir, noatime=True, op="dir_open"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
) as child_fd:
^
File "/opt/hostedtoolcache/Python/3.14.0-beta.2/x64/lib/python3.14/contextlib.py", line 162, in __exit__
self.gen.throw(value)
~~~~~~~~~~~~~~^^^^^^^
File "/home/runner/work/borg/borg/src/borg/archive.py", line 254, in OsOpen
yield fd
TypeError: _pystart_callback expected 2 arguments, got 3
```
https://github.com/borgbackup/borg/pull/8919
We use tox and pytest as test runner, in case that matters. Tried updating these (plus their plugins) to the latest versions, but it did not help.
### CPython versions tested on:
3.14.0b2
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-135442
* gh-135446
<!-- /gh-linked-prs -->
| b03309fe5fca2eef51bf739fb13d9acef70cb964 | f273fd77d790300506c6443baa94d027b643f603 |
python/cpython | python__cpython-135423 | # Invalid error messages after GH-134077
# Bug report
In https://github.com/python/cpython/issues/134036 (PR: #134077) I broke this use-case:
```python
# ex.py
raise AssertionError() from None
print(1,,2)
```
Running `./python.exe ex.py` on it produces:
```
» ./python.exe ex.py
File "/Users/sobolev/Desktop/cpython/ex.py", line 1
raise AssertionError() from None
^^^^
SyntaxError: did you forget an expression after 'from'?
```
Which is not correct. I am fixing it right now, sorry! :(
CC @pablogsal @lysnikolaou
<!-- gh-linked-prs -->
### Linked PRs
* gh-135423
<!-- /gh-linked-prs -->
| 7e3355845558dd8d488f463b166c2fe6e549f433 | fc82cb91ba262275fd91bc249ed891afce60f24a |
python/cpython | python__cpython-135402 | # Add ssl module test for AWS-LC
# Feature or enhancement
### Proposal:
As a final step in this integration, I’d like to propose adding an AWS-LC CI job to CPython’s CI alongside [the OpenSSL 3.x tests](https://github.com/python/cpython/blob/076004ae5461cf3a7fe248a38e28afff33acdd14/.github/workflows/build.yml#L267). We’ve tested this integration in AWS-LC’s CI [for well over a year now](https://github.com/aws/aws-lc/pull/1359), and are committed to preserving CPython 3.10+ compatibility in future AWS-LC releases.
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://discuss.python.org/t/support-building-ssl-and-hashlib-modules-against-aws-lc/44505/14
<!-- gh-linked-prs -->
### Linked PRs
* gh-135402
<!-- /gh-linked-prs -->
| db47f4d844acf2b6e52e44f7f3d5f7566b1e402c | 7f1e66ae0e7c6bfa26cc128291d8705abd1a3c93 |
python/cpython | python__cpython-135381 | # Enhance critical section held assertions for objects in free-threading
Currently, the critical-section-held assertions on objects are not helpful, as they don't print the object whose critical section should be held. For debugging, it is helpful to print the object along with a message that the object's critical section is not held.
Before:
```console
❯ ./python -m test test_asyncio
Using random seed: 3111514708
0:00:00 load avg: 21.32 Run 34 tests sequentially in a single process
0:00:00 load avg: 21.32 [ 1/34] test_asyncio.test_base_events
python: ./Include/internal/pycore_critical_section.h:236: void _PyCriticalSection_AssertHeld(PyMutex *): Assertion `cs != NULL && cs->_cs_mutex == mutex' failed.
Fatal Python error: Aborted
```
After:
```console
./python -m test test_asyncio
Using random seed: 3608858608
0:00:00 load avg: 0.25 Run 34 tests sequentially in a single process
0:00:00 load avg: 0.25 [ 1/34] test_asyncio.test_base_events
./Include/internal/pycore_critical_section.h:259: _PyCriticalSection_AssertHeldObj: Assertion "(cs != ((void*)0) && cs->_cs_mutex == mutex)" failed: Critical section of object is not held
Enable tracemalloc to get the memory block allocation traceback
object address : 0x20000566ce0
object refcount : 3
object type : 0x20000f74610
object type name: _asyncio.Task
object repr : <Task pending name='Task-118' coro=<BaseEventLoop._create_server_getaddrinfo() running at /home/realkumaraditya/cpython/Lib/asyncio/base_events.py:1500>>
Fatal Python error: _PyObject_AssertFailed: _PyObject_AssertFailed
Python runtime state: initialized
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-135381
<!-- /gh-linked-prs -->
| a8ec511900d0d84cffbb4ee6419c9a790d131129 | 3fb6cfe7a95081e6775ad2dca845713a3ea4c799 |
python/cpython | python__cpython-135377 | # Fix and improve test_random
#135326 exposed the fact that there were no tests for objects implementing `__index__` in `test_random`, even though some code in the `random` module explicitly supports them.
When trying to add more tests, I discovered that
* Some code is literally duplicated for the Random and SystemRandom tests. It is easy to make an error by modifying only one of the copies.
* As a consequence, some tests were added only for SystemRandom, even though they are not specific to SystemRandom.
There were also no tests for `randint()`.
So the following PR reorganizes `test_random`: it removes duplicated code, makes sure that implementation-agnostic tests are run for both classes, and adds a few new tests.
<!-- gh-linked-prs -->
### Linked PRs
* gh-135377
* gh-135680
<!-- /gh-linked-prs -->
| c55512311b7cb8b7c27c19f56cd8f872be29aedc | 21f3d15534c08d9a49d5c119a0e690855173fde4 |
python/cpython | python__cpython-135436 | # `get_all_awaited_by()` shows incorrect call stacks in awaited_by relationships
The current asyncio debugging tools (`python -m asyncio ps` and `python -m asyncio pstree`) provide limited visibility into complex async execution patterns, making it extremely difficult to debug production asyncio applications.
The fundamental issue is that the current implementation only shows external task dependencies without revealing the internal coroutine call stack. This makes the output misleading, because developers cannot see where within a task's execution the code is currently suspended.
## Current vs Correct Output
Consider this example of a complex async application with nested coroutines:
```python
async def foo1():
await foo11()
async def foo11():
await foo12()
async def foo12():
x = asyncio.create_task(foo2(), name="foo2")
x2 = asyncio.create_task(foo2(), name="foo2")
await asyncio.gather(x2, x, return_exceptions=True)
async def foo2():
await asyncio.sleep(500)
async def runner():
async with taskgroups.TaskGroup() as g:
g.create_task(foo1(), name="foo1.0")
g.create_task(foo1(), name="foo1.1")
g.create_task(foo1(), name="foo1.2")
g.create_task(foo2(), name="foo2")
await asyncio.sleep(1000)
asyncio.run(runner())
```
### Current Implementation
The existing debugging output makes it nearly impossible to understand what's happening:
**Table output:**
```
tid task id task name coroutine chain awaiter name awaiter id
---------------------------------------------------------------------------------------------------------------------------------------
2103857 0x7f2a3f87d660 Task-1 0x0
2103857 0x7f2a3f154440 foo1.0 sleep -> runner Task-1 0x7f2a3f87d660
2103857 0x7f2a3f154630 foo1.1 sleep -> runner Task-1 0x7f2a3f87d660
2103857 0x7f2a3fb32be0 foo2 sleep -> runner foo1.0 0x7f2a3f154440
```
**Tree output:**
```
└── (T) Task-1
└── /home/user/app.py 27:runner
└── /home/user/Lib/asyncio/tasks.py 702:sleep
├── (T) foo1.0
│ └── /home/user/app.py 5:foo1
│ └── /home/user/app.py 8:foo11
│ └── /home/user/app.py 13:foo12
│ ├── (T) foo2
│ └── (T) foo2
```
This output is problematic because for the leaf tasks (like the `foo2` tasks), there's no indication of their internal execution state - developers can't tell that these tasks are suspended in `asyncio.sleep()` calls.
### Correct Implementation
The correct debugging output transforms the debugging experience by providing correct execution context:
**Table output with dual information display:**
```
tid task id task name coroutine stack awaiter chain awaiter name awaiter id
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2139407 0x7f70af08fe30 Task-1 sleep -> runner 0x0
2139407 0x7f70ae424050 foo1.0 foo12 -> foo11 -> foo1 sleep -> runner Task-1 0x7f70af08fe30
2139407 0x7f70ae542890 foo2 sleep -> foo2 foo12 -> foo11 -> foo1 foo1.0 0x7f70ae424050
```
**Tree output with complete execution context:**
```
└── (T) Task-1
└── runner /home/user/app.py:27
└── sleep /home/user/Lib/asyncio/tasks.py:702
├── (T) foo1.0
│ └── foo1 /home/user/app.py:5
│ └── foo11 /home/user/app.py:8
│ └── foo12 /home/user/app.py:13
│ ├── (T) foo2
│ │ └── foo2 /home/user/app.py:18
│ │ └── sleep /home/user/Lib/asyncio/tasks.py:702
│ └── (T) foo2
│ └── foo2 /home/user/app.py:18
│ └── sleep /home/user/Lib/asyncio/tasks.py:702
```
The correct output immediately reveals crucial information that was previously hidden. Developers can now see that the `foo2` tasks are suspended in `sleep` calls, the `foo1.0` task is suspended in the `foo12` function, and the main `Task-1` is suspended in the `runner` function. This level of detail transforms debugging from guesswork into precise analysis.
<!-- gh-linked-prs -->
### Linked PRs
* gh-135436
* gh-135509
* gh-135534
* gh-135545
<!-- /gh-linked-prs -->
| 028309fb47869b665f55d10e9eabf7952bf7dbd3 | 7b15873ed00766ef6760048449f44ea0e41f4a12 |
python/cpython | python__cpython-135421 | # unittest.mock: Regression with create_autospec and dataclasses in 3.14.0b2
# Bug report
### Bug description:
```python
from dataclasses import dataclass
from unittest.mock import create_autospec
@dataclass
class Description:
name: str
mock = create_autospec(Description, instance=True)
print(isinstance(mock, Description))
```
```py
# 3.13.3
True
# 3.14.0b2
False
```
Furthermore the resulting mock doesn't have attributes present on a dataclass instance, like `__annotations__`, `__class__`, `__dataclass_fields__`, `__dataclass_params__`.
```py
print(dir(mock))
print(dir(Description("Hello World")))
```
Likely related to #124429. /CC @sobolevn
### CPython versions tested on:
3.14
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-135421
* gh-135503
<!-- /gh-linked-prs -->
| c8319a3fea9ff7f9b49008be3b5d681112bbe7f3 | c2bb3f9843bc4763d6d41e883dbe9525f5155a4a |
python/cpython | python__cpython-135362 | # Record the `remote_debugger_script` audit event in doc
# Documentation
(A clear and concise description of the issue.)
<!-- gh-linked-prs -->
### Linked PRs
* gh-135362
* gh-135546
<!-- /gh-linked-prs -->
| 076f87468a570c2a5e2e68d06af0a5267eb81713 | b9a1b049820517d270f5019d5eb52c8e452eb33f |
python/cpython | python__cpython-133239 | # Add fast path to json string encoding
# Feature or enhancement
### Proposal:
Most JSON strings consist solely of ASCII characters that don’t require escaping, so adding a fast path for those would make a big difference. In these cases, we can avoid allocating a new string by writing the original string directly:
```c
PyUnicodeWriter_WriteChar(writer, '"')
PyUnicodeWriter_WriteStr(writer, pystr) // original string
PyUnicodeWriter_WriteChar(writer, '"')
```
Benchmarks have shown that this is roughly 2.2x faster; see the PR.
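Roughly, the fast path applies when the string is pure printable ASCII with nothing to escape; a Python-level sketch of the condition (the C implementation would instead check the string's kind and scan the bytes, and the exact condition in the PR may differ):
```python
import json

def is_fast_path(s):
    # A string can be copied verbatim between the quotes when every character
    # is printable ASCII and is neither '"' nor '\'.
    return all(0x20 <= ord(c) <= 0x7e and c not in '"\\' for c in s)
```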
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-133239
<!-- /gh-linked-prs -->
| dec624e0afe6d22d38409d2e7dd9636ea0170378 | 7ab68cd50658f76abc9e0f12e6212736e2440720 |
python/cpython | python__cpython-135338 | # Fork server doesn't flush stdout/stderr after preloading modules, potentially leaving buffered data to be inherited by child processes
# Bug report
### Bug description:
## Files
<details><summary>a/__init__.py</summary>
```python
import os
import time
print(f"init {os.getpid()} at {time.clock_gettime_ns(time.CLOCK_MONOTONIC)}")
```
</details>
<details><summary>repro.py</summary>
```python
import multiprocessing
import os
import time
if __name__ == '__main__':
print(f"run main {os.getpid()}")
multiprocessing.set_forkserver_preload(['a'])
for _ in range(2):
p = multiprocessing.Process()
p.start()
p.join()
else:
print(f"re-import main {os.getpid()} at {time.clock_gettime_ns(time.CLOCK_MONOTONIC)}")
```
</details>
## Reproduction
1. Create a new module `a` containing the `__init__.py` file above
2. Run the `repro.py` script above, ensuring the module created is on `PYTHONPATH`
## Result
```
> ./python repro.py
run main 1056488
init 1056490 at 151009034069836
re-import main 1056491 at 151009045273212
re-import main 1056492 at 151009051787587
> ./python repro.py | tee /dev/null
run main 1056607
init 1056610 at 151113770440639
re-import main 1056611 at 151113781130002
init 1056610 at 151113770440639
re-import main 1056612 at 151113787814593
init 1056610 at 151113770440639
```
## Expected
The output to be the same when stdout is redirected as when it is not.
## Analysis
This is due to fork server preloading a module that writes to `stdout`, but not flushing it. When a child process is spawned it inherits the buffered data and spuriously outputs it when it flushes _its_ `stdout`. Note that #126631 prevents `__main__` from being preloaded, so at present this will not be triggered by printing from `__main__`.
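The mechanism is easy to see with the `io` buffering classes: data written to a buffered stream does not reach the underlying file until a flush, and a forked child inherits a copy of that pending buffer. An illustration (not forkserver code):
```python
import io

raw = io.BytesIO()            # stands in for the real stdout file descriptor
out = io.BufferedWriter(raw)

out.write(b"init 1056610\n")  # what the preloaded module printed
pending = raw.getvalue()      # nothing has reached the "fd" yet

out.flush()
flushed = raw.getvalue()      # now the data appears
```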
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-135338
* gh-135670
* gh-135671
<!-- /gh-linked-prs -->
| 9877d191f441741fc27ae5e7a6dd7ab6d4bcc6b7 | 5f6ab924653a44e08be23710d5023566e9e9214e |
python/cpython | python__cpython-135322 | # Pickle `BINSTRING` incorrect data type for size
# Bug report
### Bug description:
The `BINSTRING` opcode in Pickle has two arguments: a "4-byte little-endian signed int" length, and a string of that many bytes (see the [code comment for `BINSTRING` in `pickletools`](https://github.com/python/cpython/blob/main/Lib/pickletools.py#L1287)). Since it is signed, this means that any provided value over `0x7fffffff` would be interpreted as a negative number. The Python pickle implementation specifically treats it as a signed 32-bit length and checks to see if the length is < 0:
https://github.com/python/cpython/blob/754e7c9b5187fcad22acf7555479603f173a4a09/Lib/pickle.py#L1454-L1458
However, the C pickle implementation runs `calc_binsize(s, 4)` for `BINSTRING` and returns a **`Py_ssize_t`**. Since `Py_ssize_t` data types are the same size as the compiler's `size_t` type ([PEP0353](https://peps.python.org/pep-0353/#specification)), this means a `Py_ssize_t` is 64-bits long on 64-bit systems. Since the `size` variable here is also a `Py_ssize_t`, that means the threshold for negative values is much higher.
https://github.com/python/cpython/blob/a58026a5e3da9ca2d09ef51aa90fe217f9a975ec/Modules/_pickle.c#L5546-L5558
This is all just the background to illustrate that because `size` is not an `int`, a pickle with the `BINSTRING` opcode using a length > 0x7fffffff will fail in the Python implementation (since it's negative), but deserialize just fine in the C implementation.
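The two interpretations of the 4-byte length field can be seen directly with `struct`; the unsigned value stands in for what a 64-bit `Py_ssize_t` ends up holding:

```python
import struct

raw = b"\x00\x00\x00\x80"  # the BINSTRING length field from the payload below

signed, = struct.unpack("<i", raw)    # pickle.py: signed 32-bit, rejected as negative
unsigned, = struct.unpack("<I", raw)  # effectively what calc_binsize() yields in a 64-bit Py_ssize_t

print(signed)    # → -2147483648
print(unsigned)  # → 2147483648
```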
The following payload will demonstrate the difference:
```
payload: b'T\x00\x00\x00\x80'+b'a'*0x80000000 + b'.'
pickle: FAILURE BINSTRING pickle has negative byte count
_pickle.c: aaaaaaaaaaaaaaaaaaa....
pickletools: pickletools failed string4 byte count < 0: -2147483648
```
The required payload does need to be 2GB in size which is very large, but not impossible on modern systems.
Note that the `LONG4` opcode is in a similar situation, except the output for `calc_binint()` is an `int` data type so this issue does not exist there.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-135322
* gh-135382
* gh-135383
<!-- /gh-linked-prs -->
| 2b8b4774d29a707330d463f226630185cbd3ceff | 5ae669fc4e674968529cc32f7f31d14dddd76607 |
python/cpython | python__cpython-135324 | # `math.isnormal` and `math.issubnormal` referring to "normality" without context
# Problem
Descriptions of those functions assume everyone knows what "normalized float" means, which is itself a mental shorthand for "float value represented in *normalized* scientific notation". And even that can raise questions.
On the other hand, those descriptions shouldn't be bloated with a long specification of every technical term.
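For reference, the distinction can be illustrated with `sys.float_info` alone (this runs on any Python version; `math.isnormal()` and `math.issubnormal()` themselves are only available in 3.14+):

```python
import sys

tiny = sys.float_info.min  # smallest positive *normal* float, about 2.2e-308
sub = tiny / 2             # subnormal: nonzero, but too small to be normalized

print(sub > 0.0)   # → True: still representable...
print(sub < tiny)  # → True: ...but below the normal range
```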
## Previous discussion
https://discuss.python.org/t/add-cmath-isnormal-and-cmath-issubnormal/94736/15
<!-- gh-linked-prs -->
### Linked PRs
* gh-135324
<!-- /gh-linked-prs -->
| 747d390045036325a7dbce12f7f0a4bc9c84c68a | 2b0c684e0759dc3fec0e9dd0fc8383e6c75b7b5c |
python/cpython | python__cpython-135289 | # clangcl PGO builds on Windows fail with `could not open '/GENPROFILE'`
# Bug report
### Bug description:
Since https://github.com/python/cpython/issues/134923 / PR https://github.com/python/cpython/pull/134924, clangcl PGO builds on Windows fail with:
```
lld-link : error : could not open '/GENPROFILE': no such file or directory [e:\cpython_clang\PCbuild\pythoncore.vcxproj]
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-135289
* gh-135296
<!-- /gh-linked-prs -->
| 0045100ccbc3919e8990fa59bc413fe38d21b075 | b150b6aca7b17efe1bb13c3058d61cdefb83237e |
python/cpython | python__cpython-135277 | # Apply bugfixes from zipp
[zipp 3.23](https://zipp.readthedocs.io/en/latest/history.html#v3-23-0) includes several bug fixes and features since the last synced version (3.20.1).
<!-- gh-linked-prs -->
### Linked PRs
* gh-135277
* gh-135278
* gh-135279
<!-- /gh-linked-prs -->
| 8d6eb0c26276c4013346622580072908d46d2341 | aac22ea212849f8fffee9e05af7429c503d973ee |
python/cpython | python__cpython-135274 | # Unify ZoneInfo.from_file signature
# Bug report
### Bug description:
While enhancing stubs for `ZoneInfo.from_file` in [typeshed](https://github.com/python/typeshed/pull/14221), I noticed that the regression in the `from_file` signature was caused by the Argument Clinic migration, which did not specify an argument alias (intentionally or not, I don't know).
Therefore, I believe it would be a good idea to unify the signatures of the native and pure-Python versions of `zoneinfo`. Since Argument Clinic takes priority and `fobj` is a positional-only argument there, I think the more appropriate approach would be to update the Python fallback.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-135274
* gh-135715
* gh-135716
<!-- /gh-linked-prs -->
| 7cc89496922b7edb033e2ed47550c7c9e2ae8525 | 0243260284d3630d58a11694904476349d14a6ed |
python/cpython | python__cpython-135275 | # Typo in `token.NAME` documentation
# Documentation
https://docs.python.org/3/library/token.html
> token.NAME
> Token value that indicates an identifier. Note that keywords are also initially tokenized **an** NAME tokens.
should read
> token.NAME
> Token value that indicates an identifier. Note that keywords are also initially tokenized as NAME tokens.
<!-- gh-linked-prs -->
### Linked PRs
* gh-135275
* gh-135280
* gh-135281
<!-- /gh-linked-prs -->
| 8d17a412da7e7d8412efc625d48dcb5eecea50b0 | 8d6eb0c26276c4013346622580072908d46d2341 |
python/cpython | python__cpython-135257 | # Simplify parsing parameters in Argument Clinic
This is a pure refactoring. It removes some repeated calls of `ast.parse()` and reuses the result of a single `ast.parse()` call per parameter.
<!-- gh-linked-prs -->
### Linked PRs
* gh-135257
* gh-136635
<!-- /gh-linked-prs -->
| b74fb8e220a50a9580320dfd398a16995b845c69 | 283b05052338dd735cd4927011afc3735d9c6c7c |
python/cpython | python__cpython-135311 | # tarfile, zipfile, and shutil documentation should mention zstd compression
# Documentation
Python 3.14 added zstd compression support in the standard library, and `tarfile`, `zipfile` and `shutil` can now use it.
* The documentation of `tarfile.open()` should be updated to mention zstd compression.
* Documentation of `shutil.make_archive()`, `shutil.get_archive_formats()`, `shutil.unpack_archive()`, and `shutil.get_unpack_formats()` should be updated.
* `zipfile` documentation should mention the `ZIP_ZSTANDARD` flag.
<!-- gh-linked-prs -->
### Linked PRs
* gh-135311
* gh-136254
<!-- /gh-linked-prs -->
| 938a5d7e62d962a8462bce9fe04236ac9a2155b8 | da79ac9d26860db62500762c95b7ae534638f9a7 |
python/cpython | python__cpython-135226 | # Use CSPRNG for random UUID node ID
# Feature or enhancement
### Proposal:
Issue for: https://github.com/python/cpython/pull/135226
(I forgot to open issue first, sry)
I'll do a benchmark of the performance impact of switching to `secrets.randbits()` for the clock sequence soon, and post the results here.
I totally agree with the previous discussion. Summary:
- Maybe make it clearer in the docs of UUIDv8(?) that it is possible to predict UUIDv8 values once attackers have collected enough UUIDs. BUT, UUIDv8 is highly customizable: it's pretty easy for someone who wants a CSPRNG to write a wrapper around the call (or to use UUIDv4 instead).
- Change `_random_getnode()` from a PRNG to a CSPRNG so that it randomly generates a 48-bit node when Python fails to get the MAC address:
```diff
def _random_getnode():
+   import secrets
+   return secrets.randbits(48) | (1 << 40)
-   import random
-   return random.getrandbits(48) | (1 << 40)
```
- Change the logic generating `clock_seq` from a PRNG to a CSPRNG in both `uuid1()` and `uuid6()`. I think this is of great importance: those two algorithms are time-based, and the only random part is `clock_seq`. Attackers can predict the next UUID once they have collected 624*32 random bits from previous UUIDs (1427 UUIDs will be enough).
```diff
def uuid1(node=None, clock_seq=None):
    ...
    if clock_seq is None:
+       import secrets
+       clock_seq = secrets.randbits(14) # instead of stable storage
-       import random
-       clock_seq = random.getrandbits(14) # instead of stable storage
    ...

def uuid6(node=None, clock_seq=None):
    ...
+   import secrets
+   clock_seq = secrets.randbits(14) # instead of stable storage
-   import random
-   clock_seq = random.getrandbits(14) # instead of stable storage
    ...
```
note that nothing will be changed if the performance impact is unacceptable.
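For what it's worth, the `| (1 << 40)` part of the `_random_getnode()` snippet above is unchanged by this proposal: it sets the multicast bit of the node, which RFC 4122 uses to mark a randomly generated node ID so it cannot collide with a real (unicast) MAC address. It can be checked in isolation:

```python
import secrets

node = secrets.randbits(48) | (1 << 40)

print(node.bit_length() <= 48)  # → True: fits in the 48-bit node field
print(node & (1 << 40) != 0)    # → True: the multicast bit is always set
```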
### Has this already been discussed elsewhere?
I have already discussed this feature proposal on Discourse
### Links to previous discussion of this feature:
https://github.com/python/cpython/pull/135226
<!-- gh-linked-prs -->
### Linked PRs
* gh-135226
* gh-135255
* gh-135433
* gh-135467
* gh-137408
<!-- /gh-linked-prs -->
> [!NOTE]
> 3.13 needs a backport but we need to wait until the MAC address detection PR is merged (see https://github.com/python/cpython/issues/135244#issuecomment-2954016079), as otherwise, there is a small penalty at import time and upon first use of uuid1() and uuid6(). | 1cb716387255a7bdab5b580bcf8ac1b6fa32cc41 | 4372011928b43d369be727ed3eb6d9d4f9610660 |
python/cpython | python__cpython-135267 | # Smarter use of a mutex in incremental HMAC and hash functions
# Feature or enhancement
### Proposal:
Currently, when doing incremental HMAC, if a message is sufficiently large, we lock the HMAC object and release the GIL so that we can call `HMAC_Update` without holding the GIL. However, for subsequent data we will then *always* do this. I'll need to estimate whether multiple calls, first with very large data and then only with very small data, are impacted. Usually, `update()` is called as part of a chunk-based strategy, so we can expect the next block of data to be as large as the one we're already hashing.
Nevertheless, I think we should turn the mutex usage back off when we encounter a small block of data, as we could have started with a very big chunk and then continued with very small ones (at least below the hardcoded limit).
I'll do some benchmarks tomorrow but I wanted to create an issue for that.
Note: a similar argument holds for hash functions and their `update()`.
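A rough sketch of the per-call decision being proposed (the names and the 2048-byte threshold here are illustrative only; the real logic lives in C around `HMAC_Update`):

```python
GIL_MINSIZE = 2048  # illustrative threshold, standing in for the hardcoded limit

class AdaptiveUpdater:
    """Decide per chunk whether to take the heavy path (lock + release GIL)."""

    def __init__(self):
        self.decisions = []

    def update(self, data):
        if len(data) >= GIL_MINSIZE:
            # large chunk: worth locking the object and releasing the GIL
            self.decisions.append("locked")
        else:
            # small chunk: stay on the fast path instead of always locking
            self.decisions.append("fast")

h = AdaptiveUpdater()
h.update(b"x" * 4096)  # big first chunk
h.update(b"y")         # tiny follow-up should not pay the locking cost
print(h.decisions)  # → ['locked', 'fast']
```

The point of the sketch is that the decision is re-made on every `update()` call, rather than being a sticky flag that is set once by the first large chunk.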
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-135267
<!-- /gh-linked-prs -->
| e7295a89b85df69f5d6a60ed50f78e1d8e636434 | ac9d37c60b7d10cb38fc9e5cb04e398dbc5bf2e5 |
python/cpython | python__cpython-135250 | # Improve `_hashlib` exception reporting when an OpenSSL error occurred
### OpenSSL memory allocation failures
When an OpenSSL error occurs, we usually raise a `ValueError`. However, failures may be related to `malloc()`, in which case we should raise a `MemoryError` as we usually do. I have a PR ready for this.
### Incorrect usage of `get_openssl_evp_md_by_utf8name()`
In `get_openssl_evp_md_by_utf8name`, when we pass an incorrect `Py_hash_type` or a digest that we cannot find, we raise:
```c
raise_ssl_error(state->unsupported_digestmod_error,
"unsupported hash type %s", name);
```
The "unsupported hash type %s" message only happens if no SSL error occurred during the execution, and this only happens if we pass an incorrect `Py_hash_type`, which also only happens "internally". So we should correctly report that the issue is with the `Py_hash_type` argument, not with the named argument.
Note: The `raise_ssl_error` function is a function that raises an automatically formatted message if there is an OpenSSL error or raises the "alternative" error message. But in this case, we should probably separate the exception and raise an SystemError / Py_UNREACHABLE() if we pass a wrong `Py_hash_type` (strictly speaking, it shouldn't be possible because the compiler would have complained if we pass an arbitrary integer as `Py_hash_type` is an enum).
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
This is related to the work I did in #134531.
<!-- gh-linked-prs -->
### Linked PRs
* gh-135250
<!-- /gh-linked-prs -->
| 83b94e856e9b55e3ad00e272ccd5e80eafa94443 | 2677dd017a033eaaad3b8e1e0eb5664a44e7e231 |
python/cpython | python__cpython-135167 | # test.test_zstd failed due to error type
# Bug report
### Bug description:
```python
weapon@dev:/tmp/tmp.SyXPqDr6vd$ ./python -m test test.test_zstd
Using random seed: 4050720728
0:00:00 load avg: 0.92 Run 1 test sequentially in a single process
0:00:00 load avg: 0.92 [1/1] test.test_zstd
test test.test_zstd failed -- Traceback (most recent call last):
File "/tmp/tmp.SyXPqDr6vd/Lib/test/test_zstd.py", line 297, in test_compress_parameters
ZstdCompressor(options={CompressionParameter.nb_workers:4})
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: compression parameter 'nb_workers' received an illegal value 4; the valid range is [0, 0]
0:00:00 load avg: 0.92 [1/1/1] test.test_zstd failed (1 error)
== Tests result: FAILURE ==
1 test failed:
test.test_zstd
Total duration: 456 ms
Total tests: run=118 skipped=1
Total test files: run=1/1 failed=1
Result: FAILURE
```
`Modules/_zstd/_zstdmodule.c`, function `set_parameter_error`:
```c
/* Error message */
PyErr_Format(PyExc_ValueError,
             "%s parameter '%s' received an illegal value %d; "
             "the valid range is [%d, %d]",
             type, name, value_v, bounds.lowerBound, bounds.upperBound);
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-135167
* gh-135189
<!-- /gh-linked-prs -->
| 1b55e12766d007aea9fcd0966e29ce220b67d28e | 8919cb4ad9fb18832a4bc9d5bbea305e9518c7ab |
python/cpython | python__cpython-135162 | # Redundant NULL check for 'exc' after dereference in ceval.c
Redundant comparison with a NULL value at
https://github.com/python/cpython/blob/2e1544fd2b0cd46ba93fc51e3cdd47f4781d7499/Python/ceval.c#L3193
for pointer 'exc', which was dereferenced at
https://github.com/python/cpython/blob/2e1544fd2b0cd46ba93fc51e3cdd47f4781d7499/Python/ceval.c#L3192
<!-- gh-linked-prs -->
### Linked PRs
* gh-135162
<!-- /gh-linked-prs -->
| 8919cb4ad9fb18832a4bc9d5bbea305e9518c7ab | 9258f3da9175134d03f2c8c7c7eed223802ad945 |
python/cpython | python__cpython-135156 | # Compile missing _zstd module
# Bug report
### Bug description:
Installing dependencies with `.github/workflows/posix-deps-apt.sh` in `ubuntu:22.04`:
```text
The necessary bits to build these optional modules were not found:
_zstd
To find the necessary bits, look in configure.ac and config.log.
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-135156
* gh-135197
<!-- /gh-linked-prs -->
| a7d41e8aab5211f4ed7f636c41d63adcab0affba | b90ecea9e6b33dae360ed7eb2c32598f98444c4d |
python/cpython | python__cpython-135198 | # Parts of strings that look like comments being stripped out of nested strings with debug specifier
# Bug report
### Bug description:
Is this desired behavior?
```
$ ./python -m ast
f'{""" # this is part of the string, it shouldn't be stripped out
"""=}'
Module(
body=[
Expr(
value=JoinedStr(
values=[
Constant(value='""" \n"""='),
FormattedValue(
value=Constant(value=" # this is part of the string, it shouldn't be stripped out\n"),
conversion=114)]))])
$ ./python -m ast
t'{""" # this is part of the string, it shouldn't be stripped out
"""=}'
Module(
body=[
Expr(
value=TemplateStr(
values=[
Constant(value='""" \n"""='),
Interpolation(
value=Constant(value=" # this is part of the string, it shouldn't be stripped out\n"),
str='""" \n"""',
conversion=114)]))])
```
Even if they are not actually multiline:
```
$ python -m ast
t'{" # nooo "=}'
Module(
body=[
Expr(
value=TemplateStr(
values=[
Constant(value='" '),
Interpolation(
value=Constant(value=' # nooo '),
str='"',
conversion=114)]))])
```
Seems to go back to 3.12
### CPython versions tested on:
3.14, CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-135198
* gh-136720
* gh-136899
<!-- /gh-linked-prs -->
| ef66fb597ba909ead2fbfc06f748aa7b7e9ea437 | e89923d36650fe10ce1c6b5f7152638589684004 |
python/cpython | python__cpython-135145 | # `_remote_debugging` is missing in the "legacy" MSI Windows installer
# Bug report
### Bug description:
The `_remote_debugging` extension module is missing in the MSI Windows installer:
```
Python 3.14.0b2 (tags/v3.14.0b2:12d3f88, May 26 2025, 13:55:44) [MSC v.1943 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import _remote_debugging
Traceback (most recent call last):
File "<python-input-0>", line 1, in <module>
import _remote_debugging
ModuleNotFoundError: No module named '_remote_debugging'
```
The `.pyd` file is missing from the `DLLs` directory in the installation.
The `_remote_debugging` module _is_ available in installations managed by the new Python Installer:
```
PS C:\Users\thoma> py
Python 3.14.0b2 (tags/v3.14.0b2:12d3f88, May 26 2025, 13:55:44) [MSC v.1943 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import _remote_debugging
>>>
```
The new module needs to be added to the right Tools/msi files.
### CPython versions tested on:
3.14
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-135145
* gh-135150
<!-- /gh-linked-prs -->
| e598eecf4c97509acef517e94053e45db51636fb | a10b321a5807ba924c7a7833692fe5d0dc40e875 |
python/cpython | python__cpython-135138 | # regrtest can fail writing unencodable test description or status
# Bug report
I already fixed errors related to non-ASCII skip messages (by using `ascii()` instead of `str()` or `repr()`). There may be other non-ASCII skip messages left, but they are not raised on buildbots with non-UTF-8 stdout. In any case, this only fixed CPython's own tests; user tests can have the same issues.
#135121 exposed a new issue: a subtest description that includes non-ASCII parameter values. It is more difficult, because we have no control over how they are formatted. Always using `ascii()` instead of `repr()` would harm readability on normal platforms.
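The tradeoff in a nutshell: `ascii()` is always encodable but unreadable for non-ASCII values, while `repr()` keeps them readable but requires an stdout encoding that can represent them:

```python
msg = "пропущено"  # e.g. a non-ASCII skip message or subtest parameter

print(ascii(msg).isascii())  # → True: safe to write to any stdout encoding
print(repr(msg).isascii())   # → False: needs an stdout that can encode Cyrillic
```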
<!-- gh-linked-prs -->
### Linked PRs
* gh-135138
* gh-135168
* gh-135169
<!-- /gh-linked-prs -->
| 3d396ab7591d544ac8bc1fb49615b4e867ca1c83 | 2e1544fd2b0cd46ba93fc51e3cdd47f4781d7499 |
python/cpython | python__cpython-135152 | # `generator.close()` never raises `GeneratorExit`
# Bug report
### Bug description:
[The doc](https://docs.python.org/3/reference/expressions.html#generator.close) of `generator.close()` says as shown below:
> Raises a [GeneratorExit](https://docs.python.org/3/library/exceptions.html#GeneratorExit) at the point where the generator function was paused.
But `generator.close()` never raises `GeneratorExit` as shown below:
```python
def func():
    yield 'Hello'
    yield 'World'

v = func()
print(v.close()) # None
print(v.close()) # None
```
```python
def func():
    yield 'Hello'
    yield 'World'

v = func()
print(next(v)) # Hello
print(v.close()) # None
print(v.close()) # None
```
```python
def func():
    yield 'Hello'
    yield 'World'

v = func()
print(next(v)) # Hello
print(next(v)) # World
print(v.close()) # None
print(v.close()) # None
```
```python
def func():
    yield 'Hello'
    yield 'World'

v = func()
print(v.close()) # None
print(next(v)) # StopIteration
```
```python
def func():
    yield 'Hello'
    yield 'World'

v = func()
print(next(v)) # Hello
print(v.close()) # None
print(next(v)) # StopIteration
```
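For context on the doc wording: the `GeneratorExit` is raised *inside* the generator at the paused `yield`, and `close()` itself consumes it, so the caller never sees the exception. It is observable from within the generator:

```python
log = []

def func():
    try:
        yield 'Hello'
    except GeneratorExit:
        log.append('GeneratorExit raised at the paused yield')
        raise  # re-raising (or returning) lets close() complete normally

v = func()
print(next(v))    # Hello
print(v.close())  # None; close() swallows the GeneratorExit it injected
print(log)        # ['GeneratorExit raised at the paused yield']
```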
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-135152
* gh-135985
* gh-135986
<!-- /gh-linked-prs -->
| 0d76dccc3b4376ba075a1737f58809e3d83aaaa3 | fb9e292919d82326acea456aa071c9af6aff5626 |
python/cpython | python__cpython-135109 | # Build failure on NetBSD: UT_NAMESIZE undeclared due to missing utmp.h include
# Bug report
### Bug description:
CPython fails to build on NetBSD with the following error:
```c
./Modules/posixmodule.c: In function 'os_getlogin_impl':
./Modules/posixmodule.c:9569:15: error: 'UT_NAMESIZE' undeclared (first use in this function)
9569 | char name[UT_NAMESIZE + 1];
| ^~~~~~~~~~~
```
## Root Cause
The issue is in the header inclusion logic for PTY-related headers. The `<utmp.h>` header (which defines `UT_NAMESIZE`) is incorrectly nested inside the `HAVE_PTY_H` conditional block:
```c
#if defined(HAVE_OPENPTY) || defined(HAVE_FORKPTY) || defined(HAVE_LOGIN_TTY) || defined(HAVE_DEV_PTMX)
#ifdef HAVE_PTY_H
#include <pty.h>
#ifdef HAVE_UTMP_H
#include <utmp.h> // ← Only included if HAVE_PTY_H is defined!
#endif /* HAVE_UTMP_H */
#elif defined(HAVE_LIBUTIL_H)
#include <libutil.h>
#elif defined(HAVE_UTIL_H)
#include <util.h> // ← NetBSD takes this path
#endif /* HAVE_PTY_H */
#ifdef HAVE_STROPTS_H
#include <stropts.h>
#endif
#endif /* defined(HAVE_OPENPTY) || defined(HAVE_FORKPTY) || defined(HAVE_LOGIN_TTY) || defined(HAVE_DEV_PTMX) */
```
On NetBSD:
- `HAVE_OPENPTY` is defined ✓
- `HAVE_PTY_H` is **not** defined (NetBSD doesn't have `/usr/include/pty.h`)
- `HAVE_UTIL_H` is defined ✓ (PTY functions are in `/usr/include/util.h`)
- `HAVE_UTMP_H` is defined ✓ (but never gets included due to nesting)
## Proposed Fix
Move the `<utmp.h>` inclusion outside the PTY header conditional so it can be included regardless of which PTY header is used:
```c
#if defined(HAVE_OPENPTY) || defined(HAVE_FORKPTY) || defined(HAVE_LOGIN_TTY) || defined(HAVE_DEV_PTMX)
#ifdef HAVE_PTY_H
#include <pty.h>
#elif defined(HAVE_LIBUTIL_H)
#include <libutil.h>
#elif defined(HAVE_UTIL_H)
#include <util.h>
#endif /* HAVE_PTY_H */
#ifdef HAVE_UTMP_H
#include <utmp.h> // Now included regardless of PTY header choice
#endif /* HAVE_UTMP_H */
#ifdef HAVE_STROPTS_H
#include <stropts.h>
#endif
#endif /* defined(HAVE_OPENPTY) || defined(HAVE_FORKPTY) || defined(HAVE_LOGIN_TTY) || defined(HAVE_DEV_PTMX) */
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-135109
* gh-135127
* gh-135128
<!-- /gh-linked-prs -->
| 5b3865418ceb1448bfbf15cddf52c900cd5882a3 | 1f515104441898111c20aca5a7bbda1d11b15d36 |
python/cpython | python__cpython-135104 | # Remove an unused local variable in Lib/code.py
In file `Lib/code.py` between lines 220-231 we have:
```python
try:
    sys.ps1
    delete_ps1_after = False
except AttributeError:
    sys.ps1 = ">>> "
    delete_ps1_after = True
try:
    _ps2 = sys.ps2
    delete_ps2_after = False
except AttributeError:
    sys.ps2 = "... "
    delete_ps2_after = True
```
As can be seen in the snippet, the existence of `sys.ps1` and `sys.ps2` is checked using try/except blocks, but the second check binds an unused local variable `_ps2`.
In order to enhance the code quality, I propose to unify the approach and remove `_ps2`, because it is never used.
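A sketch of the unified shape (hypothetical rewrite, not the actual patch; it simply drops the unused binding so both checks read the same):

```python
import sys

try:
    sys.ps1
    delete_ps1_after = False
except AttributeError:
    sys.ps1 = ">>> "
    delete_ps1_after = True
try:
    sys.ps2  # no '_ps2 = ...' binding needed; the attribute access is the check
    delete_ps2_after = False
except AttributeError:
    sys.ps2 = "... "
    delete_ps2_after = True

# In a non-interactive run neither prompt exists beforehand:
print(delete_ps1_after, delete_ps2_after)  # → True True
```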
<!-- gh-linked-prs -->
### Linked PRs
* gh-135104
<!-- /gh-linked-prs -->
| 8f778f7bb9a8ad80fc06570566ad4de541826178 | dba9de731b231ca0c079205f496d1e3d178b4fd3 |
python/cpython | python__cpython-135102 | # iOS testbed fails when the dir ~/Library/Developer/XCTestDevices is empty
# Bug report
### Bug description:
iOS testbed run fails when the dir ~/Library/Developer/XCTestDevices is empty.
As seen in https://github.com/pypa/cibuildwheel/pull/2443 .
The dir `~/Library/Developer/XCTestDevices` is empty on machines that are clean, e.g. CI images.
The traceback is:
```
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/private/var/folders/x8/34szswsn0zn1ktnx8k1v16qr0000gn/T/cibw-run-pztsyqge/cp313-ios_arm64_iphonesimulator/testbed/__main__.py", line 548, in <module>
main()
~~~~^^
File "/private/var/folders/x8/34szswsn0zn1ktnx8k1v16qr0000gn/T/cibw-run-pztsyqge/cp313-ios_arm64_iphonesimulator/testbed/__main__.py", line 530, in main
asyncio.run(
~~~~~~~~~~~^
run_testbed(
^^^^^^^^^^^^
...<3 lines>...
)
^
)
^
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/runners.py", line 195, in run
return runner.run(main)
~~~~~~~~~~^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/base_events.py", line 719, in run_until_complete
return future.result()
~~~~~~~~~~~~~^^
File "/private/var/folders/x8/34szswsn0zn1ktnx8k1v16qr0000gn/T/cibw-run-pztsyqge/cp313-ios_arm64_iphonesimulator/testbed/__main__.py", line 411, in run_testbed
simulator = await select_simulator_device()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/var/folders/x8/34szswsn0zn1ktnx8k1v16qr0000gn/T/cibw-run-pztsyqge/cp313-ios_arm64_iphonesimulator/testbed/__main__.py", line 129, in select_simulator_device
raw_json = await async_check_output(
^^^^^^^^^^^^^^^^^^^^^^^^^
"xcrun", "simctl", "--set", "testing", "list", "-j"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/private/var/folders/x8/34szswsn0zn1ktnx8k1v16qr0000gn/T/cibw-run-pztsyqge/cp313-ios_arm64_iphonesimulator/testbed/__main__.py", line 118, in async_check_output
raise subprocess.CalledProcessError(
...<4 lines>...
)
subprocess.CalledProcessError: Command '('xcrun', 'simctl', '--set', 'testing', 'list', '-j')' returned non-zero exit status 1.
```
On a machine with empty `~/Library/Developer/XCTestDevices` the above command returns:
```console
$ xcrun simctl --set testing list -j
Using Parallel Testing Device Clones Device Set: '/Users/runner/Library/Developer/XCTestDevices'
Provided set path does not exist: /Users/runner/Library/Developer/XCTestDevices
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-135102
* gh-135113
* gh-135114
<!-- /gh-linked-prs -->
| dba9de731b231ca0c079205f496d1e3d178b4fd3 | 1ffe913c2017b44804aca18befd45689df06c069 |
python/cpython | python__cpython-135100 | # `PyMutex` failure in `parking_lot.c` on Windows during interpreter shutdown
# Bug report
### Bug description:
```
Fatal Python error: _PySemaphore_PlatformWait: unexpected error from semaphore: 4294967295 (error: 6)
Python runtime state: finalizing (tstate=0x6aa6c460)
Thread 0x00001360 (most recent call first):
<no Python frame>
Windows fatal exception: code 0x80000003
Current thread 0x00001360 (most recent call first):
<no Python frame>
Current thread's C stack trace (most recent call first):
<cannot get C stack on this system>
```
`4294967295` is the `-1` return code from `WaitForMultipleObjects`. Error code 6 is [`ERROR_INVALID_HANDLE`](https://learn.microsoft.com/en-us/windows/win32/Debug/system-error-codes--0-499-#:~:text=ERROR_INVALID_HANDLE).
I'm pretty sure that the `_PyOS_SigintEvent()` handle is getting closed concurrently with the mutex wait.
https://github.com/python/cpython/blob/1ffe913c2017b44804aca18befd45689df06c069/Python/parking_lot.c#L118-L136
Seen in https://github.com/python/cpython/actions/runs/15423887404/job/43405853684 at the end of running `test_decorators`.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-135100
* gh-135116
<!-- /gh-linked-prs -->
| cc581f32bf5f15e9f2f89b830ec64ea25684d0cd | 8f778f7bb9a8ad80fc06570566ad4de541826178 |
python/cpython | python__cpython-135076 | # Unexpanded f-strings in Lib/test/support/__init__.py exceptions
# Bug report
### Bug description:
```python
❯ ruff check --target-version=py314 --preview --select RUF027 Lib/test/support/__init__.py
Lib/test/support/__init__.py:1087:26: RUF027 Possible f-string without an `f` prefix
|
1085 | memlimit = _parse_memlimit(limit)
1086 | if memlimit < _2G - 1:
1087 | raise ValueError('Memory limit {limit!r} too low to be useful')
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RUF027
1088 |
1089 | real_max_memuse = memlimit
|
= help: Add `f` prefix
Lib/test/support/__init__.py:2361:26: RUF027 Possible f-string without an `f` prefix
|
2359 | max_depth = 20_000
2360 | elif max_depth < 3:
2361 | raise ValueError("max_depth must be at least 3, got {max_depth}")
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RUF027
2362 | depth = get_recursion_depth()
2363 | depth = max(depth - 1, 1) # Ignore infinite_recursion() frame.
|
= help: Add `f` prefix
Found 2 errors.
```
Related to #135069
I'll submit a PR shortly.
### CPython versions tested on:
CPython main branch, 3.13, 3.14
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-135076
* gh-135129
* gh-135130
<!-- /gh-linked-prs -->
| bc00ce941e03347dade3faa8822f19836b5bbfe4 | 5b3865418ceb1448bfbf15cddf52c900cd5882a3 |
python/cpython | python__cpython-135071 | # encodings.idna: Unexpanded f-string in "Unsupported error handling" exception
# Bug report
### Bug description:
The exception string in `encodings.idna.IncrementalDecoder` is missing the 'f' prefix.
https://github.com/python/cpython/blob/3612d8f51741b11f36f8fb0494d79086bac9390a/Lib/encodings/idna.py#L319
The exception can be triggered with this snippet.
```python
from encodings.idna import IncrementalDecoder
decoder = IncrementalDecoder(errors='boom!')
decoder.decode(b'')
```
On main
```console
$ ./python test.py
Traceback (most recent call last):
File "/home/hollas/software/cpython/test.py", line 3, in <module>
decoder.decode(b'')
~~~~~~~~~~~~~~^^^^^
File "<frozen codecs>", line 325, in decode
File "/home/hollas/software/cpython/Lib/encodings/idna.py", line 319, in _buffer_decode
raise UnicodeError("Unsupported error handling: {errors}")
UnicodeError: Unsupported error handling: {errors}
```
The issue also exists on Python 3.13 and 3.14, but not on 3.12
```console
$ uvx python@3.12 test.py
Traceback (most recent call last):
File "/home/hollas/software/cpython/test.py", line 3, in <module>
decoder.decode(b'')
File "<frozen codecs>", line 322, in decode
File "/usr/lib64/python3.12/encodings/idna.py", line 264, in _buffer_decode
raise UnicodeError("Unsupported error handling "+errors)
UnicodeError: Unsupported error handling boom!
```
(I'll submit a PR shortly)
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-135071
* gh-136235
* gh-136236
<!-- /gh-linked-prs -->
| 8dc3383abea72ee3deafec60818aeb817d8fec09 | 8f8bdf251a5f79d15ac2b1a6d19860033bf50c79 |
python/cpython | python__cpython-135037 | # Multiple tarfile extraction filter bypasses (`filter="tar"`/`filter="data"`)
### Bug description:
Public issue for fixing CVE-2025-4517, CVE-2025-4330, CVE-2025-4138, and CVE-2024-12718. [See full advisory on security-announce](https://mail.python.org/archives/list/security-announce@python.org/thread/MAXIJJCUUMCL7ATZNDVEGGHUMQMUUKLG/).
[edit @encukou]: Also addresses CVE-2025-4435. Sorry for leaving that out of the commit messages.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-135037
* gh-135064
* gh-135065
* gh-135066
* gh-135068
* gh-135070
* gh-135084
* gh-135093
<!-- /gh-linked-prs -->
| 3612d8f51741b11f36f8fb0494d79086bac9390a | ec12559ebafca01ded22c9013de64abe535c838d |
python/cpython | python__cpython-135031 | # Possible MemoryError regression in Python 3.14 since alpha 6
# Bug report
### Bug description:
Running a simple piece of code:
```python
eval("(" * 200 + ")" * 200)
```
raised an error like this in Python 3.8:
```
>>> eval("(" * 200 + ")" * 200)
s_push: parser stack overflow
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
MemoryError
```
It was then fixed in Python 3.9 (by the new parser, I guess) and it's now back – since 3.14 alpha 6 the error for the same code is:
```
>>> eval("(" * 200 + ")" * 200)
Traceback (most recent call last):
File "<python-input-0>", line 1, in <module>
eval("(" * 200 + ")" * 200)
~~~~^^^^^^^^^^^^^^^^^^^^^^^
MemoryError: Parser stack overflowed - Python source too complex to parse
```
I did a bisection and found commit 014223649c33b2febbccfa221c2ab7f18a8c0847 by @markshannon, after which we see the same error as before.
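To investigate the threshold on a given interpreter, a small probe can walk the nesting depth upward (a sketch; `max_paren_depth` and the 300 cap are arbitrary choices, not part of the report):

```python
def max_paren_depth(limit=300):
    """Return the deepest parenthesis nesting compile() accepts,
    or `limit` if nothing up to that depth overflows the parser."""
    for depth in range(1, limit + 1):
        src = "(" * depth + ")" * depth
        try:
            compile(src, "<probe>", "eval")
        except (MemoryError, SyntaxError):
            # 3.14a6+ raises MemoryError here; other versions may hit
            # a SyntaxError limit instead.
            return depth - 1
    return limit

print(max_paren_depth())
```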
### CPython versions tested on:
3.14
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-135031
* gh-135059
<!-- /gh-linked-prs -->
| 6e80f11eb5eba360334b4ace105eb7d73394baf7 | b525e31b7fc50e7a498f8b9b16437cb7b9656f6f |
python/cpython | python__cpython-135006 | # Rewrite and cleanup HACL* `blake2module.c`
Some parts of `blake2module.c` have duplicated bits, and much of the file has lines exceeding 100 characters. I wouldn't bother about lines exceeding 80 characters, but here we're talking about lines that exceed my (laptop) screen.
When we switched from our own BLAKE2 implementation to the HACL* one, I mentioned cleaning this up, but I forgot about it.
<!-- gh-linked-prs -->
### Linked PRs
* gh-135006
<!-- /gh-linked-prs -->
| 3cb109796dd4b07625b21773bbc04697c65bdf20 | 83b94e856e9b55e3ad00e272ccd5e80eafa94443 |
python/cpython | python__cpython-135002 | # Explicitly specify the encoding parameter value of `calendar.HTMLCalendar` as 'utf-8'
# Feature or enhancement
### Proposal:
Currently, the `calendar.HTMLCalendar.formatyearpage` method is used to generate an HTML page for a calendar. Its `encoding` parameter defaults to `sys.getdefaultencoding()`, which now always returns "utf-8" ([PEP 686 – Making UTF-8 Mode default](https://peps.python.org/pep-0686/)). I propose changing this default to `'utf-8'` explicitly, which makes the behavior clearer.
In a similar vein, the `difflib.HtmlDiff` also outputs HTML with UTF-8 encoding by default.
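For illustration, passing the encoding explicitly already matches today's default, so the generated page is unchanged (a sketch; the year is arbitrary):

```python
import calendar

cal = calendar.HTMLCalendar()
# formatyearpage() returns bytes; the encoding appears in the XML
# declaration and the Content-Type meta tag of the generated page.
page = cal.formatyearpage(2025, encoding="utf-8")
print(page.splitlines()[0])
```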
<!-- gh-linked-prs -->
### Linked PRs
* gh-135002
<!-- /gh-linked-prs -->
| f90483e13ad48227767cd6d69bc4857843710a5c | aaad2e81cecae01d3e7e12b2bbd2e1a8533ff5d4 |
python/cpython | python__cpython-135188 | # os.lstat() supports dir_fd but is not in os.supports_dir_fd
# Bug report
### Bug description:
I had written a script using `os.fwalk()` and `os.lstat()`, which was working.
Then I added an `if os.lstat in os.supports_dir_fd` check and a fallback implementation and observed the fallback path being taken:
```python
import os, sys
from datetime import datetime
oldest = datetime.max
newest = datetime.min
oldest_file = None
newest_file = None
if os.lstat in os.supports_dir_fd:
for dir, _, files, dirfd in os.fwalk(sys.argv[1]):
for file in files:
mtime = os.lstat(file, dir_fd=dirfd).st_mtime
date = datetime.fromtimestamp(mtime)
if date < oldest:
oldest = date
oldest_file = os.path.join(dir, file)
if date > newest:
newest = date
newest_file = os.path.join(dir, file)
else:
print('not using dir_fd')
# for dir, _, files in os.walk(sys.argv[1], followlinks=False): ...
print('Newest:', newest.strftime('%Y-%m-%d %H-%M-%S'), newest_file)
print('Oldest:', oldest.strftime('%Y-%m-%d %H-%M-%S'), oldest_file)
```
The workaround is to check for os.stat instead (and use `os.stat(..., follow_symlinks=False)` for consistency).
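The workaround can be packaged as a tiny helper (a sketch; `lstat_at` is a hypothetical name, not an `os` API):

```python
import os

def lstat_at(name, dirfd):
    # os.lstat(name, dir_fd=dirfd) works on Linux, but os.lstat is
    # missing from os.supports_dir_fd; os.stat with
    # follow_symlinks=False is the documented equivalent and is listed
    # in both capability sets.
    return os.stat(name, dir_fd=dirfd, follow_symlinks=False)

# Feature check via os.stat instead of os.lstat:
use_dir_fd = (os.stat in os.supports_dir_fd
              and os.stat in os.supports_follow_symlinks)
```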
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-135188
* gh-135205
* gh-135206
<!-- /gh-linked-prs -->
| e004cf8fd5c006a7a1c60807a03066f4c43452e5 | 6ef06fad84244261c695ec337c7d2734277054db |
python/cpython | python__cpython-135021 | # Issue with `PyObject_DelAttr[String]` in stable ABI builds targeting older CPython versions
# Bug report
### Bug description:
To my knowledge, one can use newer Python version (e.g. Python 3.13) to build C extensions for _older_ Python versions (e.g. 3.12) when targeting the stable ABI. This is accomplished by setting `Py_LIMITED_API` to a suitable number like `0x030b0000`, which simply disables all features beyond this cutoff.
If this understanding is correct, then I would like to report a bug:
Python 3.13 added a new C API function for `PyObject_DelAttr()`. In contrast, Python 3.12 implemented this operation as a macro wrapping `PyObject_SetAttr()`:
```
#define PyObject_DelAttr(O, A) PyObject_SetAttr((O), (A), NULL)
```
This change is problematic: extensions built on Python 3.13 that target the 3.12 stable ABI will now call a function that does not exist on Python 3.12, and such extensions will fail to load with a linker error. The change to the new C API entry point should have been guarded with `#ifdef`s.
This issue also affects `PyObject_DelAttrString()`.
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-135021
* gh-135133
* gh-135134
* gh-135165
* gh-135178
* gh-135182
<!-- /gh-linked-prs -->
| c21113072cd1f0da83729f99d3576647db85d816 | 40c8be0008ecadb5d0dc9a017434b1133a3a6e06 |