| repo | instance_id | problem_statement | merge_commit | base_commit |
|---|---|---|---|---|
python/cpython | python__cpython-133184 | # iOS compiler stubs override environment-based minimum version specifications
# Bug report
### Bug description:
Python includes shims for compiler tools on iOS that match GNU conventions for compiler names (e.g., `arm64-apple-ios-clang`), both to avoid encoding a user-specific compiler location in sysconfig and to avoid issues with UNIX build tools that expect CC/CXX to be single arguments without spaces.
Using this approach, `arm64-apple-ios-clang` expands internally to `xcrun --sdk iphoneos clang -target arm64-apple-ios`.
Unfortunately, Apple has 3 ways to define a minimum iOS version, in order of priority:
1. the `IPHONEOS_DEPLOYMENT_TARGET=12.0` environment variable
2. the `-mios-version-min=12.0` command line argument
3. a version number encoded in a `-target arm64-apple-ios12.0-simulator` argument
If you specify *all three* options, (1) will be ignored, and (2) will raise a warning if the version it specifies doesn't match the version specified by (3).
The problem arises if you specify `-target arm64-apple-ios-simulator` (a target with no version), which is what the shim currently encodes.
If the `-target` doesn't specify a version, this is interpreted as "no minimum version". An explicitly provided `-mios-version-min` argument will override this minimum without warning, but any `IPHONEOS_DEPLOYMENT_TARGET` value will be *ignored*.
Thus, using the compiler shims in an environment where `IPHONEOS_DEPLOYMENT_TARGET` is defined will result in that value being *ignored* unless the minimum version is also passed as a `-mios-version-min` argument.
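A hypothetical sketch (not the actual CPython shim) of how a shim could fold the environment variable into the target triple so clang cannot silently drop it; the function name and the fixed `arm64`/`simulator` triple are assumptions for illustration:

```shell
# Hypothetical sketch: derive the -target triple from
# IPHONEOS_DEPLOYMENT_TARGET so the environment variable is never
# silently ignored by clang.  When the variable is unset, the triple
# falls back to the unversioned form the shim currently uses.
build_ios_target() {
    echo "arm64-apple-ios${IPHONEOS_DEPLOYMENT_TARGET}-simulator"
}
```

With `IPHONEOS_DEPLOYMENT_TARGET=12.0` this yields `arm64-apple-ios12.0-simulator`, the form (3) above, which is the highest-priority way to express the minimum version.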
### CPython versions tested on:
3.13
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-133184
* gh-133234
<!-- /gh-linked-prs -->
| 6e907c4d1f8583a3fc80ad70232981c589900378 | 811edcf9cda5fb09aa5189e88e693d35dee7a2d1 |
python/cpython | python__cpython-133187 | # Compilation failure with --enable-optimizations and --without-doc-strings.
# Bug report
### Bug description:
`--without-doc-strings` breaks compilation with `--enable-optimizations` because `PROFILE_TASK` has failures, presumably due to the missing docstrings. 3.12 does not have this problem because `PROFILE_TASK` is allowed to succeed with `|| true`.
Related: gh-110276
```shell
./configure --without-doc-strings --enable-optimizations
make -j32
...
Total duration: 25.6 sec
Total tests: run=8,926 failures=11 skipped=302
Total test files: run=43/43 failed=2 skipped=2
Result: FAILURE
make: *** [profile-run-stamp] Error 2
```
### CPython versions tested on:
3.13
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-133187
* gh-133207
<!-- /gh-linked-prs -->
| cc39b19f0fca8db0f881ecaf02f88d72d9f93776 | 8b26b23a9674a02563f28e4cfbef3d3e39876bfe |
python/cpython | python__cpython-133240 | # PyType_GetModuleByDef can return NULL without exception set
The [documentation](https://docs.python.org/3/c-api/type.html#c.PyType_GetModuleByDef) does not mention that this is possible, but if the object passed in is not a heap type it returns `NULL` without setting an exception:
```c
PyObject *
PyType_GetModuleByDef(PyTypeObject *type, PyModuleDef *def)
{
assert(PyType_Check(type));
if (!_PyType_HasFeature(type, Py_TPFLAGS_HEAPTYPE)) {
// type_ready_mro() ensures that no heap type is
// contained in a static type MRO.
return NULL;
}
...
}
```
CC @ericsnowcurrently @encukou .
<!-- gh-linked-prs -->
### Linked PRs
* gh-133240
<!-- /gh-linked-prs -->
| fa52f289a36f50d6d10e57d485e5a4f58261222b | 662dd294563ce86980c640ad67e3d460a72c9cb9 |
python/cpython | python__cpython-133170 | # Add a C API function to detect temporaries
# Feature or enhancement
### Proposal:
NumPy has an optimization to detect temporaries created via the NumPy C API (e.g. in NumPy internals) and elide them. This can lead to a significant performance improvement for some operations.
In https://github.com/numpy/numpy/issues/28681, @colesbury [proposed](https://github.com/numpy/numpy/issues/28681#issuecomment-2810661401) adding some code to handle the change to use stackrefs internally in CPython, which broke the NumPy temporary elision heuristics in 3.14.
We later added that code more or less verbatim to the NumPy `main` branch:
https://github.com/numpy/numpy/blob/d692fbccd98cb880812b32936e5f94fcfe55053f/numpy/_core/src/multiarray/temp_elide.c#L119-L152
This unblocks testing NumPy on the 3.14 beta but we should really have at least an unstable C API function we can call here rather than relying on CPython internals.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
https://github.com/python/cpython/issues/133140#issuecomment-2839652603
<!-- gh-linked-prs -->
### Linked PRs
* gh-133170
<!-- /gh-linked-prs -->
| f2379535fe2d2219b71653782d5e31defd9b5556 | 4701ff92d747002d04b67688c7a581b1952773ac |
python/cpython | python__cpython-134047 | # UBsan: Remove _Py_NO_SANITIZE_UNDEFINED
# Bug report
Split out from the #111178 monster-issue:
We use the macro `_Py_NO_SANITIZE_UNDEFINED` to disable the UB sanitizer in some hard-to-fix cases, so that we can get a stable, regression-monitoring checker sooner.
To fully fix UBsan failures, we should get rid of the macro.
At least outside test functions.
<!-- gh-linked-prs -->
### Linked PRs
* gh-134047
* gh-134048
* gh-135320
* gh-135334
* gh-134050
* gh-135346
<!-- /gh-linked-prs -->
| 0a160bf14c4848f50539e52e2de486c641d122a2 | 22e4a40d9089dde2f1578c4b320f27b20e2e125b |
python/cpython | python__cpython-133176 | # Line beginning `<tab>` gives completion instead of indentation in pdb multi-line input after `interact`
# Feature or enhancement
### Proposal:
A line beginning with `<tab>` in multi-line input behaves inconsistently before and after the `interact` command.
```
python -c 'import pdb; breakpoint()'
for i in range(10):
<tab> # indentation
interact
for i in range(10):
<tab> # completion
```
Ideally the indentation behavior would be the same after the `interact` command; that feels more natural and intuitive.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-133176
<!-- /gh-linked-prs -->
| 327f5ff9fa4291e66079c61c77b273cb953c302f | 0e21ed7c09c687d62d6bf054022e66bccd1fa2bc |
python/cpython | python__cpython-133368 | # test_external_inspection: test_async_global_awaited_by() fails on s390x Fedora Stable Refleaks 3.x
Example: https://buildbot.python.org/#/builders/1641/builds/199
```
FAIL: test_async_global_awaited_by (test.test_external_inspection.TestGetStackTrace.test_async_global_awaited_by)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-s390x.refleak/build/Lib/test/test_external_inspection.py", line 531, in test_async_global_awaited_by
self.assertEqual([[['echo_client_spam'], 'echo client spam', [[['main'], 'Task-1', []]]]], entries[-1][1])
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Lists differ: [[['echo_client_spam'], 'echo client spam', [[['main'], 'Task-1', []]]]] != []
First list contains 1 additional elements.
First extra element 0:
[['echo_client_spam'], 'echo client spam', [[['main'], 'Task-1', []]]]
- [[['echo_client_spam'], 'echo client spam', [[['main'], 'Task-1', []]]]]
+ []
```
cc @pablogsal
<!-- gh-linked-prs -->
### Linked PRs
* gh-133368
<!-- /gh-linked-prs -->
| 8eaaf1640232191319f83917ef72e7853af25681 | a36367520eb3a954c94c71a6b2b64d2542283e38 |
python/cpython | python__cpython-133144 | # Adding an unstable C API for unique references
# Feature or enhancement
### Proposal:
In #132070, there was a race brought up in free-threading through use of `Py_REFCNT(op) == 1`. The fix is to use `_PyObject_IsUniquelyReferenced`.
There were a couple additional comments about how we should handle this for the public API (for example, @godlygeek suggested implementing `Py_REFCNT` as `_PyObject_IsUniquelyReferenced(op) ? 1 : INT_MAX`). I think the best approach is to just expose an unstable API for this and point to it in `Py_REFCNT`'s documentation.
I'm not imagining anything complex:
```c
int
PyUnstable_Object_IsUniquelyReferenced(PyObject *op)
{
return _PyObject_IsUniquelyReferenced(op);
}
```
@encukou, do you mind if this skips the C API WG? I'm not sure there's much to discuss about the API, other than some light bikeshedding.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-133144
<!-- /gh-linked-prs -->
| b275b8f34210bae2fc1e0af23515df5efe911c8e | 0eeaa0ef8bf60fd3b1448a615b6b1662d558990e |
python/cpython | python__cpython-133145 | # Add curses.assume_default_colors()
# Feature or enhancement
This is a refinement of the `curses.use_default_colors()` function that also allows changing color pair 0.
<!-- gh-linked-prs -->
### Linked PRs
* gh-133145
<!-- /gh-linked-prs -->
| 7363e8d24d14abf651633865ea959702ebac73d3 | 3e256b9118eded25e6aca61e3939fd4e03b87082 |
python/cpython | python__cpython-133132 | # iPhone SE simulator no longer available by default in Xcode 16.4
# Bug report
### Bug description:
The iOS testbed attempts to select the "iPhone SE (3rd generation)" simulator as a default option if no simulator is explicitly specified.
However, with the release of the iPhone 16e, Xcode 16.3 no longer ships with an iPhone SE image by default. This causes the `make testios` target to fail, as well as any testbed usage that doesn't explicitly specify a simulator.
### CPython versions tested on:
3.13
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-133132
* gh-133173
<!-- /gh-linked-prs -->
| 42b0b0667e67ff444a03d0e7b217e77f3aae535d | c46635aa5a20fc1b4c5e85370fa0fa2303c47c14 |
python/cpython | python__cpython-133118 | # Enable type checking of `tomllib` stdlib folder
Following the example of https://github.com/python/cpython/pull/131509, I propose adding mypy type checking in CI for `tomllib`.
It is fully typed already; we just need to run the CI when something changes.
<!-- gh-linked-prs -->
### Linked PRs
* gh-133118
* gh-133192
* gh-133206
* gh-133343
<!-- /gh-linked-prs -->
| 5ea9010e8910cb97555c3aef4ed95cca93a74aab | fd0f5d0a5eba0cc8cce8bed085089b842f45e047 |
python/cpython | python__cpython-133103 | # Negative run-time reported by `subprocess.run`'s `TimeoutExpired` exception when setting `timeout=0`
# Bug report
### Bug description:
```python
import subprocess
subprocess.run(['echo', 'hi'], timeout = 0)
# subprocess.TimeoutExpired: Command '['echo', 'hi']' timed out after -0.00010189996100962162 seconds
# subprocess.TimeoutExpired: Command '['echo', 'hi']' timed out after -4.819990135729313e-05 seconds
```
It is quite unexpected that the measured run-time can be negative.
In general, it's unclear in the docs what behavior `timeout=0` should have. I would propose that it disable timeout control, making it equivalent to `timeout=None`.
But in any case, negative run-times are quite strange :)
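One way to keep the reported duration non-negative, regardless of how the `timeout=0` semantics question is resolved, is to clamp the measured elapsed time. This is only an illustration of the idea, not the actual fix from the linked PRs:

```python
import time

def report_elapsed(start):
    """Illustration only: measure elapsed time for a timeout report,
    clamping to zero so small clock/bookkeeping deltas can never
    surface as a negative duration in an error message."""
    elapsed = time.monotonic() - start
    return max(0.0, elapsed)
```

A negative raw delta (e.g. when the "end" timestamp is taken slightly before the stored "start" bookkeeping) would then be reported as `0.0` seconds instead of `-4.8e-05`.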
### CPython versions tested on:
3.12
### Operating systems tested on:
WSLv1+Ubuntu24.04
<!-- gh-linked-prs -->
### Linked PRs
* gh-133103
* gh-133418
<!-- /gh-linked-prs -->
| b64aa302d7bc09454ba8d5b19922ff6a4192dd96 | 78adb63ee198c94c6ce2a1634aa7ea1d47c011ad |
python/cpython | python__cpython-133080 | # Remove Py_C_RECURSION_LIMIT & PyThreadState.c_recursion_remaining
Both were added in 3.13, are undocumented, and don't make sense in 3.14 due to changes in the stack overflow detection machinery. (On current main they contain dummy values.)
SC exception for removal without deprecation: https://github.com/python/steering-council/issues/288
<!-- gh-linked-prs -->
### Linked PRs
* gh-133080
<!-- /gh-linked-prs -->
| 0c26dbd16e9dd71a52d3ebd43d692f0cd88a3a37 | 208d06fd515119af49f844c7781e1eb2be8a8add |
python/cpython | python__cpython-133074 | # Undefined behavior `NULL + 0` in `list_extend_{set,dict,dictitems}`
# Bug report
Found by @emmatyping.
### Bug description:
This affects `list_extend_{set,dict,dictitems}`, e.g.:
```c
static int
list_extend_set(PyListObject *self, PySetObject *other)
{
Py_ssize_t m = Py_SIZE(self);
Py_ssize_t n = PySet_GET_SIZE(other);
if (list_resize(self, m + n) < 0) {
return -1;
}
/* populate the end of self with iterable's items */
Py_ssize_t setpos = 0;
Py_hash_t hash;
PyObject *key;
PyObject **dest = self->ob_item + m; // UB if 'self' is the empty list!
while (_PySet_NextEntryRef((PyObject *)other, &setpos, &key, &hash)) {
FT_ATOMIC_STORE_PTR_RELEASE(*dest, key);
dest++;
}
Py_SET_SIZE(self, m + n);
return 0;
}
```
If a list object is empty, `self->ob_item` will be NULL. In particular, `list_resize` won't allocate and the size will be 0. However:
```c
PyObject **dest = self->ob_item + m;
```
becomes equivalent to `NULL + 0` which is UB. There is no runtime error for now because the loop that follows is not executed.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-133074
<!-- /gh-linked-prs -->
| a99bfaa53cbbb2ebd35bd94237a11bfaefe32665 | 4ebbfcf30e0e2d87ff6036d4d1de0f6f0ef7c46a |
python/cpython | python__cpython-133062 | # Show value of `UINT32_MAX` in HMAC error messages
# Bug report
### Bug description:
Following https://github.com/python/cpython/pull/133027#discussion_r2062730124 and https://github.com/python/cpython/pull/133027#discussion_r2062730751, we should avoid having the textual "UINT32_MAX" or any C constant in a user-facing error message. I however did this when I implemented one-shot HMAC.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-133062
<!-- /gh-linked-prs -->
| 1b7470f8cbff4bb9e58edd940a997a3647e285e4 | cd76eff26e5f700ffb715d9d096d2e28744d0594 |
python/cpython | python__cpython-133055 | # Always skip `pyrepl` tests that contain `"can't use pyrepl"`
# Bug report
Right now there's a pattern in https://github.com/python/cpython/blob/main/Lib/test/test_pyrepl/test_pyrepl.py
that looks like this:
```python
output, exit_code = self.run_repl(commands, env=env)
if "can't use pyrepl" in output:
self.skipTest("pyrepl not available")
```
We manually check the error message after the `run_repl` call and then skip the test.
This design is error-prone.
I propose doing this check in `run_repl` itself.
<!-- gh-linked-prs -->
### Linked PRs
* gh-133055
* gh-133095
<!-- /gh-linked-prs -->
| b739ec5ab78ed55367516de7a11e732cb3f1081d | 58567cc18c5b048e08307b5ba18a9934a395ca42 |
python/cpython | python__cpython-133084 | # PEP 649: class __annotate__ is shadowed by compiler-generated one
# Bug report
### Bug description:
I'm playing around with PEP 649, PEP 749 annotations and I found this surprising behaviour:
```python
class C:
x: str
def __annotate__(format):
if format != 1:
raise NotImplementedError()
return {'x': int}
assert C.__annotate__(1) == {'x': int} # This returns {'x': str} and fails
```
I'm not yet sure why I would write code with annotations and then also write an explicit `__annotate__`, but I would expect the explicit method to prevail over the compiler-generated one. If I explicitly set `C.__annotate__ = some_func` afterwards it works as I would expect.
Calling `dis` on the provided example shows both `__annotate__`s getting compiled, and one overwriting the other in the class.
Related to #119180.
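The workaround mentioned above (assigning `__annotate__` after class creation, which is not shadowed) can be demonstrated directly; this snippet works on any Python version because it only sets a class attribute:

```python
# Assigning __annotate__ after class creation is not shadowed by a
# compiler-generated one, unlike defining it inside the class body.
class C:
    x: str

def my_annotate(format):
    if format != 1:
        raise NotImplementedError
    return {'x': int}

C.__annotate__ = my_annotate
assert C.__annotate__(1) == {'x': int}  # works as expected
```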
### CPython versions tested on:
3.14, CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-133084
<!-- /gh-linked-prs -->
| 345fdce1d024f238c53bc355e90ec1c17e12ec20 | 245cd6c53278006fa34fd7799d32f0884eb7e75d |
python/cpython | python__cpython-133038 | # Deprecate `codecs.open()`
Discussion: https://discuss.python.org/t/deprecating-codecs-open/88135/
`codecs.open()` is used a lot because it was recommended in Python 2.
But `open()` (or `io.open()`) has been recommended since Python 3.
It is time to deprecate `codecs.open()`.
Since it is still widely used, we won't schedule its removal.
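The migration is mechanical: the built-in `open()` with an explicit `encoding` argument (which is the same function as `io.open()`) replaces `codecs.open()`:

```python
import codecs
import io
import os
import tempfile

# The modern replacement for codecs.open(): builtin open() with an
# explicit encoding.  io.open is the same function as the builtin.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write("héllo")

with codecs.open(path, "r", "utf-8") as old_style:   # Python 2 style
    legacy = old_style.read()
with open(path, "r", encoding="utf-8") as new_style:  # Python 3 style
    modern = new_style.read()

assert legacy == modern == "héllo"
assert open is io.open
```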
<!-- gh-linked-prs -->
### Linked PRs
* gh-133038
<!-- /gh-linked-prs -->
| 4e294f6feb3193854d23e0e8be487213a80b232f | 732d1b02417e91d6a4247879e290065287cc6b51 |
python/cpython | python__cpython-133034 | # `TypeIgnore` is not documented
# Documentation
In [the documentation of the ast module](https://docs.python.org/3.14/library/ast.html), the class `TypeIgnore` is listed in the abstract grammar, but there is no documentation entry for it on the page.
<!-- gh-linked-prs -->
### Linked PRs
* gh-133034
* gh-133078
<!-- /gh-linked-prs -->
| 4e04511cb9c176c32d6f3694f426750d710121cd | af3f6fcb7ecd3226b0067e0e1a9a6bfb0657a9ba |
python/cpython | python__cpython-133252 | # Improve the error message for invalid typecodes in `multiprocessing.{Array,Value}`
# Feature or enhancement
### Proposal:
More discussion: https://github.com/python/cpython/pull/132504#issuecomment-2802448414
This is the current message, which is not very clear (it comes from `ctypes.sizeof`):
```python
>>> from multiprocessing import Array
>>> Array('x', 1)
...
TypeError: this type has no size
```
I propose we change it to:
```python
>>> from multiprocessing import Array
>>> Array('x', 1)
TypeError: bad typecode (must be a ctypes type or one of c, b, B, u, h, H, i, I, l, L, q, Q, f or d)
```
This is modeled after the error message in `array.array` which is more informative:
```python
>>> array('x')
...
ValueError: bad typecode (must be b, B, u, w, h, H, i, I, l, L, q, Q, f or d)
```
(A `ValueError` seems like a more appropriate option for multiprocessing as well, but that'd be a breaking change)
Note that, `multiprocessing.Array` (and `Value`) does not support the 'w' typecode while `array.array` does.
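The more informative `array.array` message quoted above can be observed directly with a quick stdlib-only probe (the exact typecode list in the message varies by Python version, so only the prefix is checked):

```python
import array

# Reproduce the informative error message that array.array already
# produces for an invalid typecode.
message = ""
try:
    array.array('x')  # 'x' is not a valid typecode
except ValueError as exc:
    message = str(exc)

assert message.startswith("bad typecode")
print(message)
```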
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
https://github.com/python/cpython/pull/132504#issuecomment-2802448414
<!-- gh-linked-prs -->
### Linked PRs
* gh-133252
<!-- /gh-linked-prs -->
| f52de8a937e89a4d1cf314f12ee5e7bbaa79e7da | 2cd24ebfe9a14bd52cb4d411c126b6a2dac65ae0 |
python/cpython | python__cpython-133019 | # Test Asyncio Utils References futures.TimeoutError Which Does Not Exist
# Bug report
### Bug description:
https://github.com/python/cpython/blob/632524a5cbdd3e999992d0e9e68fe8b464ed16ec/Lib/test/test_asyncio/utils.py#L107C1-L107C37
`asyncio.futures.TimeoutError` does not exist AFAIK -- we can reproduce this fairly easily with the following one-liner:
```python
from asyncio.futures import TimeoutError
```
Is this a bug? If so, I can fix this.
### CPython versions tested on:
3.14
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-133019
* gh-133023
<!-- /gh-linked-prs -->
| 8d6d7386a35b4a6fdd7d599f2184780bb83cc306 | 5e96e4fca80a8cd25da6b469b25f8f5a514de8be |
python/cpython | python__cpython-133010 | # UAF: `xml.etree.ElementTree.Element.__deepcopy__` when concurrent mutations happen
# Crash report
### What happened?
Reproducer:
```py
import xml.etree.ElementTree as ET
from copy import deepcopy
class Evil(ET.Element):
def __deepcopy__(self, memo):
root.clear()
return self
root = ET.Element('a')
root.append(Evil('x'))
deepcopy(root)
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
### Output from running 'python -VV' on the command line:
Python 3.14.0a7+ (heads/main:7f02ded29fb, Apr 26 2025, 14:29:01) [GCC 7.5.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-133010
* gh-133805
* gh-133806
<!-- /gh-linked-prs -->
| 116a9f9b3775c904c98e390d896200e1641498aa | d13d5fdf610a294a6c3dc125e0856fb7fdd41e49 |
python/cpython | python__cpython-133014 | # test_remote_exec fails on Android with PermissionError
Example: https://buildbot.python.org/#/builders/1591/builds/1914
```
ERROR: test_breakpoints (test.test_remote_pdb.PdbConnectTestCase.test_breakpoints)
Test setting and hitting breakpoints.
----------------------------------------------------------------------
Traceback (most recent call last):
File "/data/user/0/org.python.testbed/files/python/lib/python3.14/test/test_remote_pdb.py", line 431, in test_breakpoints
process, client_file = self._connect_and_get_client_file()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/data/user/0/org.python.testbed/files/python/lib/python3.14/test/test_remote_pdb.py", line 324, in _connect_and_get_client_file
process = subprocess.Popen(
[sys.executable, self.script_path],
...<2 lines>...
text=True
)
File "/data/user/0/org.python.testbed/files/python/lib/python3.14/subprocess.py", line 1038, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pass_fds, cwd, env,
^^^^^^^^^^^^^^^^^^^
...<5 lines>...
gid, gids, uid, umask,
^^^^^^^^^^^^^^^^^^^^^^
start_new_session, process_group)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/user/0/org.python.testbed/files/python/lib/python3.14/subprocess.py", line 1967, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
PermissionError: [Errno 13] Permission denied: ''
```
cc @pablogsal @gaogaotiantian
<!-- gh-linked-prs -->
### Linked PRs
* gh-133014
<!-- /gh-linked-prs -->
| 0eb0e70ca073cc3eb26611963301ca8258217505 | 314f4b9716c2a3f49f7834d1e12bb2ee6c24a588 |
python/cpython | python__cpython-133007 | # tarfile: Support `preset` argument to `tarfile.open(mode="w|xz")`
# Feature or enhancement
### Proposal:
Currently, `tarfile.open()` doesn't support specifying a compression level / preset when using `mode="w|xz"`. Technically this is documented (by not listing `w|xz` among the modes supporting either the `compresslevel` or `preset` argument), but it's surprising: I assumed `w|xz` took `compresslevel` instead of `preset`, since the latter raised a `TypeError`, and was then surprised that the default compression level was used. It's also quite limiting that you can't set the preset while using stream mode. On top of that, there does not seem to be any technical limitation preventing us from supporting it.
I'm working on a pull request to add that option.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-133007
<!-- /gh-linked-prs -->
| 019ee49d50b6466f2f18035c812fa87d20c24a46 | 146b981f764cc8975910066096dc2f6cb33beec6 |
python/cpython | python__cpython-132997 | # Upgrade bundled pip to 25.1.1
# Feature or enhancement
### Proposal:
`ensurepip`'s bundled version of pip gets updated to the current latest 25.1.1.
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-132997
* gh-133308
<!-- /gh-linked-prs -->
| a512905e156bc09a20b171686ac129e66c13f26a | ddac7ac59a7dfa4437562b6e705e64865c3b1e9a |
python/cpython | python__cpython-132999 | # Expose C constant `HASHLIB_GIL_MINSIZE`
# Feature or enhancement
### Proposal:
In `hashlib.h`, we have a `HASHLIB_GIL_MINSIZE` constant that we could expose as `hashlib.GIL_MINSIZE`. For now it would be a read-only constant, since it is shared across multiple modules (`_hashlib` and the HACL*-based primitives).
Since `_hashlib` may be missing while `_md5` is not, the constant should be exposed in every module that shares it. That way we always have access to it from whichever module is available, and we can also change its per-module behavior in the future.
Note: this is typically useful in tests where for now it's always hardcoded.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-132999
<!-- /gh-linked-prs -->
| 3695ba93d54f82d9aaa3e88a246596f69a8c948f | 019ee49d50b6466f2f18035c812fa87d20c24a46 |
python/cpython | python__cpython-132998 | # Add `socket.IP_FREEBIND` constant
# Feature or enhancement
### Proposal:
The socket options exposed by CPython's socket module do not include `IP_FREEBIND`, meaning the option number has to be hardcoded. Examples of this in the wild can be found using GitHub's code search.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-132998
<!-- /gh-linked-prs -->
| 314f4b9716c2a3f49f7834d1e12bb2ee6c24a588 | bd2e5f044c9cb08b8725ab45b05de0115d014bbe |
python/cpython | python__cpython-132988 | # Support __index__() for unsigned integer converters in PyArg_Parse and Argument Clinic
The `__index__()` special method was introduced to make other integer-like types which are not `int` subclasses to be accepted as integer arguments.
It is supported by all `PyArg_Parse` format units for integers, except `k` (long sized bitfield) and `K` (long long sized bitfield). Argument Clinic has the same behavior for `unsigned_long(bitwise=True)` and `unsigned_long_long(bitwise=True)`. Argument Clinic also has non-bitwise unsigned integer converters; none of them support `__index__()`.
Note that making `PyLong_AsUnsignedLong()` and `PyLong_AsUnsignedLongLong()` support `__index__()` is potentially unsafe, but the higher-level code that uses them can be changed.
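For context, this is what `__index__()` enables at the Python level: an integer-like type that is not an `int` subclass being accepted where an integer is required. `operator.index()` is the Python-level spelling of the conversion most format units perform:

```python
import operator

class MyHandle:
    """An integer-like type that is not an int subclass."""
    def __init__(self, value):
        self._value = value
    def __index__(self):
        return self._value

# operator.index() applies the same __index__() conversion that most
# PyArg_Parse integer format units perform at the C level.
assert operator.index(MyHandle(42)) == 42

# Such objects are accepted anywhere a real index is required,
# e.g. sequence subscripting:
assert [10, 20, 30][MyHandle(1)] == 20
```

Functions whose arguments go through the `k`/`K` format units (or the non-bitwise unsigned converters) currently reject such objects, which is the inconsistency this issue addresses.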
<!-- gh-linked-prs -->
### Linked PRs
* gh-132988
* gh-133011
* gh-133093
* gh-133094
* gh-133096
* gh-133097
* gh-133098
* gh-133099
* gh-133100
<!-- /gh-linked-prs -->
| 632524a5cbdd3e999992d0e9e68fe8b464ed16ec | e714ead7a2895799b7f2cbded086378d92625a3a |
python/cpython | python__cpython-133018 | # Implement PEP 784 - Adding Zstandard to the Python standard library
# Feature or enhancement
This is a tracking issue for implementing [PEP 784](https://peps.python.org/pep-0784/). See the PEP text for more details.
Since the diff is significant (~10k lines) I wanted to split up the PRs a bit.
### Implementation Plan:
- [x] Add `compression` module just re-exporting existing compression modules. Move the `_compression` module.
- [x] Add `_zstd` native module with Unix build config
- [x] Add Windows build config for `_zstd` ~~(blocked on adding `libzstd` to cpython-source-deps) and SBOM config.~~
- [x] Add `zstd` Python module with tests
- [x] add NEWS/What's New section
- [x] Add documentation for `zstd`
- [x] Decide on whether or not to check parameter types for options
- [x] Verify style conformance of C and Python code
- [x] Python
- [x] C
- [x] Refactor `_train_dict` and `_finalize_dict` to share common code
- [x] Fix https://github.com/python/cpython/issues/133885
- [x] Improve error message when `options` value can't be converted to int
<!-- gh-linked-prs -->
### Linked PRs
* gh-133018
* gh-133027
* gh-133063
* gh-133076
* gh-133086
* gh-133185
* gh-133282
* gh-133365
* gh-133366
* gh-133479
* gh-133495
* gh-133502
* gh-133535
* gh-133547
* gh-133550
* gh-133565
* gh-133629
* gh-133670
* gh-133674
* gh-133694
* gh-133695
* gh-133723
* gh-133736
* gh-133756
* gh-133757
* gh-133762
* gh-133775
* gh-133784
* gh-133785
* gh-133786
* gh-133788
* gh-133791
* gh-133792
* gh-133793
* gh-133799
* gh-133854
* gh-133856
* gh-133857
* gh-133859
* gh-133860
* gh-133911
* gh-133915
* gh-133921
* gh-133924
* gh-133947
* gh-133950
* gh-133962
* gh-133974
* gh-134001
* gh-134230
* gh-134305
* gh-134425
* gh-134432
* gh-134442
* gh-134459
* gh-134463
* gh-134601
* gh-134602
* gh-134605
* gh-134609
* gh-134723
* gh-134838
* gh-134930
* gh-134998
* gh-136617
* gh-137052
* gh-137320
* gh-137321
* gh-137343
* gh-137360
<!-- /gh-linked-prs -->
| 20be6ba61ac8a0a5d6242701c4186579cfa653f0 | 6d53b752831c453da115dd4ce54a0d121d9990cd |
python/cpython | python__cpython-133223 | # Remote PDB can't interrupt an infinite loop in an evaluated command
# Bug report
### Bug description:
As @gaogaotiantian pointed out in https://github.com/python/cpython/pull/132451#discussion_r2055068762 PDB's new remote attaching feature can enter an uninterruptible infinite loop if you type `while True: pass` at a PDB prompt. We're able to interrupt the script itself, when you 'cont'inue past a breakpoint, but aren't able to interrupt a statement evaluated interactively at a PDB prompt.
I have a few different ideas for how this could be fixed, though they all have different tradeoffs.
Options:
1. the client can send a SIGINT signal to the remote.
* + Pro: it can interrupt IO, as long as `PyErr_CheckSignals()` is being called by the main thread
* - Con: it will not work on Windows
* - Con: it will kill the process if the SIGINT handler has been unset (and PDB does mess with the signal handler - I'm not sure this is just a theoretical problem)
2. we can raise a `KeyboardInterrupt` from an injected script
* + Pro: It will work on Windows
* - Con: We need to change `sys.remote_exec()` to allow `KeyboardInterrupt` to escape
* + Pro: A looping injected script can be killed now, though!
* - Con: It cannot interrupt IO
* - Con: The `KeyboardInterrupt` will get raised even in places where signals were blocked, potentially leading to a crash where something that should never get an exception got one
3. we can make PDB stop if `set_trace()` is called while it's evaluating a user-supplied statement
* + Pro: it will work on Windows
* - Con: It cannot interrupt IO
* - Con: it will lead to recursion of PDB's trace function if the script is interrupted, and I don't know how well PDB will handle that
4. the PDB server can start a thread that listens to a socket and calls `signal.raise_signal(signal.SIGINT)` every time it gets a message on the socket
* + Pro: it can interrupt IO in the same cases as 1 can
* + Pro: it works in "true remote" scenarios where the client and server are on different hosts (which isn't currently a feature, but would be a nice future feature)
* + Pro: it works on Windows
* - Con: more complexity, more failure modes
I'm currently leaning towards (4) - it's the most complete fix and works on the most platforms and the most scenarios (interrupting both CPU-bound and IO-bound stuff). I plan to prototype this soon.
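A minimal sketch of what option 4's server side could look like; the function name, wire protocol (one byte per interrupt), and lack of authentication are all simplifying assumptions, not the planned implementation:

```python
import signal
import socket
import threading

def start_interrupt_listener():
    """Sketch of option 4: a daemon thread that raises SIGINT (whose
    Python-level handler always runs in the main thread) whenever a
    byte arrives on a local socket.  Returns the port to connect to.
    Error handling, reconnection, and authentication are omitted."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(1)

    def worker():
        conn, _ = server.accept()
        with conn:
            while conn.recv(1):
                signal.raise_signal(signal.SIGINT)

    threading.Thread(target=worker, daemon=True).start()
    return server.getsockname()[1]
```

Because CPython runs Python signal handlers in the main thread, this interrupts a CPU-bound loop there, and `signal.raise_signal` works on Windows too, which is the main advantage over sending a SIGINT from the client process.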
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-133223
<!-- /gh-linked-prs -->
| 9434709edf960307431cedd442b22c5b28d03230 | 24ebb9ccfdf78be09253e8a78a906279a1b3e21e |
python/cpython | python__cpython-133067 | # shutil.which doesn't work in Docker container
# Bug report
### Bug description:
```python
import os
import shutil

head_proc = "test.py"  # hypothetical: name of the program being looked up

target_fp = shutil.which(head_proc)
if target_fp is not None:
    perm = os.stat(target_fp)
    # This checks the permission bits as an extra check.
    # In a Docker container "which" was not returning the
    # executable correctly when head_proc was not in the
    # current working directory. This gets the user
    # permission value (rwx) and the remainder of division
    # by 2 (odd or even). If even, the executable bit is
    # not set and the remainder is 0. bool(0) is False;
    # bool(1) is True.
    executable = bool(int(oct(perm.st_mode)[-3]) % 2)
    if not executable:
        target_fp = None
print(target_fp)
```
This only occurred in a Docker container, but was not a problem outside a container on an Ubuntu Linux box.
When head_proc is the name of a file in the current working directory (e.g. test.py) that is not executable, shutil.which correctly returns None. However, when the current working directory is not where the file resides (e.g. subdir/test.py or /path/to/test.py), shutil.which incorrectly returns the filename and path, indicating that the file is executable when it is not.
os.access also returns an incorrect value when mode is os.R_OK | os.X_OK and the current working directory is not where the file exists. If os.access is called on a file in the current working directory in a Docker container, it works as advertised. The code above demonstrates a workaround using os.stat that fixes it for now until this can be fixed.
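For readers puzzling over the `oct(...)[-3] % 2` trick above, an equivalent but more explicit owner-execute check using the `stat` module's symbolic constants might look like this (`is_user_executable` is an illustrative name, not part of the report):

```python
import os
import stat

def is_user_executable(path):
    # Same result as bool(int(oct(st_mode)[-3]) % 2): test whether the
    # owner-execute bit (S_IXUSR) is set in the file's mode.
    return bool(os.stat(path).st_mode & stat.S_IXUSR)
```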
### CPython versions tested on:
3.11
### Operating systems tested on:
Other
<!-- gh-linked-prs -->
### Linked PRs
* gh-133067
* gh-133803
* gh-133804
<!-- /gh-linked-prs -->
| d13d5fdf610a294a6c3dc125e0856fb7fdd41e49 | 832058274dd99e30321a5f9a3773038cd0424c11 |
python/cpython | python__cpython-132951 | # test_remote_pdb fails on FreeBSD: Remote debugging is not supported on this platform
Example: https://buildbot.python.org/#/builders/1223/builds/5984
```
ERROR: test_keyboard_interrupt (test.test_remote_pdb.PdbConnectTestCase.test_keyboard_interrupt)
Test that sending keyboard interrupt breaks into pdb.
----------------------------------------------------------------------
Traceback (most recent call last):
File "/buildbot/buildarea/3.x.ware-freebsd/build/Lib/test/test_remote_pdb.py", line 520, in test_keyboard_interrupt
self._send_interrupt(process.pid)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
File "/buildbot/buildarea/3.x.ware-freebsd/build/Lib/test/test_remote_pdb.py", line 366, in _send_interrupt
sys.remote_exec(pid, interrupt_script)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Remote debugging is not supported on this platform
```
cc @pablogsal
<!-- gh-linked-prs -->
### Linked PRs
* gh-132951
* gh-132959
* gh-132965
<!-- /gh-linked-prs -->
| 947c4f19d969578242ccc454ecc4b04c51408b03 | 2a28b21a517775120a7a720adc29cf85111e8bf4 |
python/cpython | python__cpython-133032 | # test_opcache fails randomly (failure or crash)
On a Free Threaded build:
```
$ ./configure --with-pydebug --disable-gil
$ make
$ ./python -m test test_opcache --forever -v
```
Error 1 (easy to reproduce):
```
FAIL: test_load_global_module (test.test_opcache.TestRacesDoNotCrash.test_load_global_module)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/vstinner/python/main/Lib/test/test_opcache.py", line 1048, in test_load_global_module
self.assert_races_do_not_crash(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
opname, get_items, read, write, check_items=True
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/vstinner/python/main/Lib/test/test_opcache.py", line 590, in assert_races_do_not_crash
self.assert_specialized(item, opname)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "/home/vstinner/python/main/Lib/test/test_opcache.py", line 23, in assert_specialized
self.assertIn(opname, opnames)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
AssertionError: 'LOAD_GLOBAL_MODULE' not found in {'LOAD_GLOBAL', 'RESUME_CHECK', 'RETURN_VALUE'}
```
Error 2 (crash, harder to reproduce):
```
test_load_attr_method_with_values (test.test_opcache.TestRacesDoNotCrash.test_load_attr_method_with_values) ...
Fatal Python error: Segmentation fault
<Cannot show all threads while the GIL is disabled>
Stack (most recent call first):
File "/home/vstinner/python/main/Lib/test/test_opcache.py", line 913 in write
File "/home/vstinner/python/main/Lib/threading.py", line 1021 in run
File "/home/vstinner/python/main/Lib/threading.py", line 1079 in _bootstrap_inner
File "/home/vstinner/python/main/Lib/threading.py", line 1041 in _bootstrap
Current thread's C stack trace (most recent call first):
Binary file "./python", at _Py_DumpStack+0x31 [0x6715c7]
Binary file "./python" [0x684547]
Binary file "./python" [0x6847bd]
Binary file "/lib64/libc.so.6", at +0x19df0 [0x7f95d056bdf0]
Binary file "./python", at _Py_Dealloc+0x18 [0x4e7a28]
Binary file "./python", at _Py_MergeZeroLocalRefcount+0x58 [0x4e7b3e]
Binary file "./python" [0x50f410]
Binary file "./python" [0x51db0a]
Binary file "./python", at _PyObject_GenericSetAttrWithDict+0xa3 [0x4e9e77]
Binary file "./python", at PyObject_GenericSetAttr+0xe [0x4ea0dd]
Binary file "./python", at PyObject_SetAttr+0x60 [0x4e8f5f]
Binary file "./python", at PyObject_DelAttr+0xe [0x4e92fb]
Binary file "./python", at _PyEval_EvalFrameDefault+0xed7b [0x5ce860]
(...)
<truncated rest of calls>
Extension modules: _testinternalcapi (total: 1)
```
cc @colesbury @nascheme
<!-- gh-linked-prs -->
### Linked PRs
* gh-133032
* gh-133114
<!-- /gh-linked-prs -->
| 31d1342de9489f95384dbc748130c2ae6f092e84 | fe462f5a9122e1c2641b5369cbb88c4a5e822816 |
python/cpython | python__cpython-132931 | # Implement PEP 773
# Feature or enhancement
For tracking work (assuming the PEP is still accepted today as planned - just getting things in place before I go away for the weekend).
- [x] Mark existing installer as deprecated in installer
- [x] Mark existing installer as deprecated in docs
- [x] Add `--preset-pymanager` support to PC/layout script
- [x] Add deprecation warning to legacy `py.exe` when used with commands
- [x] Update using/windows docs
<!-- gh-linked-prs -->
### Linked PRs
* gh-132931
* gh-133091
* gh-133119
* gh-133120
* gh-133154
* gh-133246
<!-- /gh-linked-prs -->
| e20ca6d1b006674be23d16083f273e8a7b8f77b6 | 11f457cf41beede182d7387080f35c73f8f4a46f |
python/cpython | python__cpython-132923 | # test_peg_generator fails when using -Werror: dep_util is Deprecated
Example: https://buildbot.python.org/#/builders/146/builds/11260
```
ERROR: test_advanced_left_recursive (test.test_peg_generator.test_c_parser.TestCParser.test_advanced_left_recursive)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/vstinner/python/main/Lib/test/test_peg_generator/test_c_parser.py", line 255, in test_advanced_left_recursive
self.run_test(grammar_source, test_source)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vstinner/python/main/Lib/test/test_peg_generator/test_c_parser.py", line 134, in run_test
self.build_extension(grammar_source)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/home/vstinner/python/main/Lib/test/test_peg_generator/test_c_parser.py", line 131, in build_extension
generate_parser_c_extension(grammar, Path('.'), library_dir=self.library_dir)
~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vstinner/python/main/Tools/peg_generator/pegen/testutil.py", line 107, in generate_parser_c_extension
compile_c_extension(
~~~~~~~~~~~~~~~~~~~^
str(source),
^^^^^^^^^^^^
...<3 lines>...
library_dir=library_dir,
^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/vstinner/python/main/Tools/peg_generator/pegen/build.py", line 98, in compile_c_extension
from setuptools._distutils.dep_util import newer_group
File "/home/vstinner/python/main/build/test_python_worker_615517æ/tempcwd/venv/lib/python3.14t/site-packages/setuptools/_distutils/dep_util.py", line 9, in __getattr__
warnings.warn(
~~~~~~~~~~~~~^
"dep_util is Deprecated. Use functions from setuptools instead.",
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
DeprecationWarning,
^^^^^^^^^^^^^^^^^^^
stacklevel=2,
^^^^^^^^^^^^^
)
^
DeprecationWarning: dep_util is Deprecated. Use functions from setuptools instead.
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-132923
* gh-132926
* gh-133004
* gh-133522
<!-- /gh-linked-prs -->
| 1a70f66ea856de1b1b0ca47baf9ee8ba6799ae18 | 79ba56433e5ced7740866d1112b0cead86f627f6 |
python/cpython | python__cpython-133399 | # GC performance regression in free threaded build
# Bug report
### Bug description:
I've identified a significant performance regression when using Python's free-threaded mode with shared list appends. In my test case, simply appending to a shared list causes a 10-15x performance decrease compared to normal Python operation.
Test Case:
```python
import itertools
import time

def performance_test(n_options=5, n_items=5, iterations=50):
    list = []

    def expensive_operation():
        # Create lists of tuples
        data = []
        for _ in range(n_options):
            data.append([(f"a{i}", f"b{i}") for i in range(n_items)])
        # Generate all combinations and create result tuples
        results = []
        for combo in itertools.product(*data):
            result = tuple((x[0], x[1], f"name_{i}") for i, x in enumerate(combo))
            results.append(result)
        # Commenting the following line solves the performance regression in free-threaded mode
        list.append(results)
        return results

    start = time.time()
    for _ in range(iterations):
        result = expensive_operation()
    duration = time.time() - start

    print(f"n_options={n_options}, n_items={n_items}, iterations={iterations}")
    print(f"Time: {duration:.4f}s, Combinations: {len(result)}")
    return duration

if __name__ == "__main__":
    print("Python Performance Regression Test")
    print("-" * 40)
    performance_test()
```
Results:
- Standard Python3.13: 0.1290s
- Free-threaded Python3.13t: 2.1643s
- Free-threaded Python 3.14.0a7: 2.1923s
- Free-threaded Python3.13t with list.append commented out: 0.1332s
The regression appears to be caused by contention on the per-list locks and reference count fields when appending to a shared list in free-threaded mode.
### CPython versions tested on:
3.14
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-133399
* gh-133464
* gh-133508
* gh-133544
* gh-133718
* gh-134692
* gh-134802
<!-- /gh-linked-prs -->
| 5c245ffce71b5a23e0022bb5d1eaf645fe96ddbb | 8e08ac9f32d89bf387c75bb6d0710a7b59026b5b |
python/cpython | python__cpython-132919 | # Detect buffer overflow in fcntl.fcntl() and fcntl.ioctl()
`fcntl()` and `ioctl()` take an argument which can be a pointer to a buffer of unspecified length, depending on the operation. They can also write into that buffer, depending on the operation. A temporary buffer of size 1024 is used, so the chance of directly overflowing the bytes-like object provided by the user is small, but if the buffer is smaller than the operation requires, the user will get truncated data in the best case, and in the worst case it will cause C stack corruption.
We cannot prevent this, unless we limit the set of supported operations to a small allow-list. That is not practical, because `fcntl()` and `ioctl()` exist precisely to support operations not explicitly supported by Python. But we can detect a buffer overflow and raise an exception. It may be too late if the stack or memory is already corrupted, but it is better than silently ignoring the error.
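For context, this is what a correctly sized caller-provided buffer looks like — a small sketch using the standard `TIOCGWINSZ` operation, which fills an 8-byte `struct winsize`; the try/except guard and variable names are illustrative:

```python
import fcntl
import struct
import sys
import termios

# TIOCGWINSZ asks the tty driver to fill a struct winsize: four unsigned
# shorts (rows, cols, xpixel, ypixel), i.e. exactly 8 bytes.  Passing a
# buffer of exactly this size is what keeps the kernel's write in bounds.
WINSIZE_FMT = "HHHH"
buf = struct.pack(WINSIZE_FMT, 0, 0, 0, 0)
assert struct.calcsize(WINSIZE_FMT) == len(buf) == 8

try:
    raw = fcntl.ioctl(sys.stdout.fileno(), termios.TIOCGWINSZ, buf)
    rows, cols, _, _ = struct.unpack(WINSIZE_FMT, raw)
except (OSError, ValueError):
    pass  # stdout is not a terminal (e.g. when piped)
```

If the operation had been given a buffer smaller than the struct it writes, the mismatch is exactly what this issue proposes to detect.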
<!-- gh-linked-prs -->
### Linked PRs
* gh-132919
* gh-133254
<!-- /gh-linked-prs -->
| c2eaeee3dc3306ca486b0377b07b1a957584b691 | 0f84f6b334e4340798fb2ec4e1ca21ad838db339 |
python/cpython | python__cpython-132914 | # test_remote_pdb hangs randomly
Example on "Hypothesis tests on Ubuntu": https://github.com/python/cpython/actions/runs/14660752238/job/41144501229?pr=132906
```
(...)
0:46:29 load avg: 0.00 running (1): test_remote_pdb (43 min 3 sec)
0:46:59 load avg: 0.00 running (1): test_remote_pdb (43 min 33 sec)
0:47:29 load avg: 0.00 running (1): test_remote_pdb (44 min 3 sec)
0:47:59 load avg: 0.00 running (1): test_remote_pdb (44 min 33 sec)
0:48:29 load avg: 0.00 running (1): test_remote_pdb (45 min 3 sec)
0:48:59 load avg: 0.00 running (1): test_remote_pdb (45 min 33 sec)
```
Sadly, the "Hypothesis tests on Ubuntu" job has no timeout :-(
<!-- gh-linked-prs -->
### Linked PRs
* gh-132914
* gh-132920
* gh-132924
* gh-132929
* gh-132937
* gh-132939
* gh-132949
<!-- /gh-linked-prs -->
| eb2e430b88afa93e7bfc05f4346e8336c2c31b48 | e8cf3a1a641d6fa0bfa84a4f8363ff1e42abda30 |
python/cpython | python__cpython-132911 | # Possible (benign) overflow for 'K' format code in `do_mkvalue`
# Bug report
### Bug description:
This is probably not an issue but here are the known "temporary" overflows:
```c
// format = 'K'
return PyLong_FromUnsignedLongLong((long long)va_arg(*p_va, unsigned long long));
```
Note that `va_arg(*p_va, unsigned long long)` will be converted into an unsigned long long, and then into a long long, and then back to an unsigned long long. So if we were to have an overflow here, it shouldn't really matter. Indeed, take `v = (1ULL << 63) + 2`. We have:
```c
unsigned long long v2 = va_arg(*p_va, unsigned long long);
// v2 = 9223372036854775810
long long v3 = (long long)v2;
// v3 = -9223372036854775806
PyObject *v4 = PyLong_FromUnsignedLongLong(v3);
// v4 = 9223372036854775810 as a PyObject, since v3 was cast back to an unsigned long long
```
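The benign round-trip can also be modelled in pure Python (a sketch of the two C casts, assuming the usual two's-complement 64-bit long long):

```python
def to_signed64(u):
    # Models the (long long) cast of an unsigned long long value.
    u &= (1 << 64) - 1
    return u - (1 << 64) if u >= (1 << 63) else u

def to_unsigned64(s):
    # Models the cast back inside PyLong_FromUnsignedLongLong.
    return s & ((1 << 64) - 1)

v = (1 << 63) + 2                       # 9223372036854775810, > LLONG_MAX
assert to_signed64(v) == -9223372036854775806
assert to_unsigned64(to_signed64(v)) == v   # round-trip preserves the value
```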
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-132911
* gh-132932
<!-- /gh-linked-prs -->
| 3fa024dec32e2ff86baf3dd7e14a0b314855327c | de6482eda3a46cc9c9a03fb9ba57295ab99b4722 |
python/cpython | python__cpython-132935 | # Have math.isnormal() and, perhaps, math.issubnormal()?
# Feature or enhancement
### Proposal:
Of course, these functions can be emulated in pure Python.
On the other hand, people can reasonably expect these classifiers, since "The [math](https://docs.python.org/3.14/library/math.html#module-math) module consists mostly of thin wrappers around the platform C math library functions." `isnormal()` has been in libm since C99, and `issubnormal()` since C23.
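A pure-Python emulation might look like this — a sketch based on the C99 definitions; the function names mirror the proposed API but are otherwise assumptions:

```python
import math
import sys

def isnormal(x):
    # Normal: finite, nonzero, and at least the smallest normal
    # double (sys.float_info.min, i.e. DBL_MIN).
    x = float(x)
    return math.isfinite(x) and abs(x) >= sys.float_info.min

def issubnormal(x):
    # Subnormal: nonzero but with magnitude below DBL_MIN.
    x = float(x)
    return 0.0 < abs(x) < sys.float_info.min
```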
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-132935
* gh-136039
<!-- /gh-linked-prs -->
| 5f61cde80a9b33c8e118b1c009fe2aaa4bb87356 | 128195e12eb6d5b9542558453df7045dd7aa1e15 |
python/cpython | python__cpython-137118 | # REPL: AttributeError: module `__mp_main__` has no attribute `is_prime` in `ProcessPoolExecutor` example
# Bug report
### Bug description:
python version:3.12.9
An error occurred when running it as a standalone script.
The following error is reported when running the sample code from the Python documentation:
```
Traceback (most recent call last):
  File "C:\Users\wen\Desktop\test.py", line 32, in <module>
    main()
  File "C:\Users\wen\Desktop\test.py", line 28, in main
    for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
  File "C:\Miniconda3\Lib\concurrent\futures\process.py", line 636, in _chain_from_iterable_of_lists
    for element in iterable:
  File "C:\Miniconda3\Lib\concurrent\futures\_base.py", line 619, in result_iterator
    yield _result_or_cancel(fs.pop())
  File "C:\Miniconda3\Lib\concurrent\futures\_base.py", line 317, in _result_or_cancel
    return fut.result(timeout)
  File "C:\Miniconda3\Lib\concurrent\futures\_base.py", line 456, in result
    return self.__get_result()
  File "C:\Miniconda3\Lib\concurrent\futures\_base.py", line 401, in __get_result
    raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
```
```
import concurrent.futures
import math

PRIMES = [
    112272535095293,
    112582705942171,
    112272535095293,
    115280095190773,
    115797848077099,
    1099726899285419]

def is_prime(n):
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    sqrt_n = int(math.floor(math.sqrt(n)))
    for i in range(3, sqrt_n + 1, 2):
        if n % i == 0:
            return False
    return True

def main():
    with concurrent.futures.ProcessPoolExecutor() as executor:
        for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
            print('%d is prime: %s' % (number, prime))

if __name__ == '__main__':
    main()
```
### CPython versions tested on:
3.12
### Operating systems tested on:
Windows 10
<!-- gh-linked-prs -->
### Linked PRs
* gh-137118
* gh-137154
* gh-137155
<!-- /gh-linked-prs -->
| 4e40f2bea7edfa5ba7e2e0e6159d9da9dfe4aa97 | 6784ef7da7cbf1a944fd0685630ced54e4a0066c |
python/cpython | python__cpython-132895 | # Improve accuracy of NormalDist.cdf
### Proposal:
Replace the `1 + erf(s)` computation with `erfc(-s)` as suggested in this [StackOverflow discussion](https://stackoverflow.com/questions/37891569) and hinted in this [John Cook blog post](https://www.johndcook.com/blog/2010/06/07/). The core idea is to exploit the identity, `1 + erf(x) == erfc(-x)`, to eliminate the addition step thus avoiding loss of precision.
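A standalone sketch of the proposed change (a hypothetical helper, not the actual `NormalDist` patch):

```python
from math import erfc, sqrt

def cdf_erfc(x, mu=0.0, sigma=1.0):
    # Uses the identity 1 + erf(s) == erfc(-s).  Computing erfc directly
    # avoids the lossy "1 + erf(s)" addition when erf(s) is close to -1,
    # i.e. when x is far below the mean.
    return 0.5 * erfc((mu - x) / (sigma * sqrt(2.0)))
```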
### Empirical analysis
Here is some code for empirical analysis. For 10,000 random values in each range bin, it shows the maximum difference between the two approaches as measured in ULPs. It also shows how often each technique beats or agrees with the other, as compared to a reference implementation using `mpmath`:
```python
import math
from mpmath import mp
from collections import Counter
from random import uniform
from itertools import pairwise

mp.dps = 50

def best(x):
    'Which is better? erf, erfc, or are they the same?'
    ref = 1 + mp.erf(x)
    e = 1 + math.erf(x)
    c = math.erfc(-x)
    de = abs(e - ref)
    dc = abs(c - ref)
    return 'erf' if de < dc else 'erfc' if dc < de else '='

def err(x):
    'Difference in ULPs'
    e = 1 + math.erf(x)
    c = math.erfc(-x)
    return abs(round((e - c) / math.ulp(c)))

def frange(lo, hi, steps=10):
    step = (hi - lo) / steps
    return [lo + i * step for i in range(steps + 1)]

for lo, hi in pairwise(frange(-5, 5, steps=20)):
    xarr = [uniform(lo, hi) for i in range(10_000)]
    winners = Counter(map(best, xarr)).most_common()
    max_err = max(map(err, xarr))
    print(f'{lo:-5.2f} to {hi:-5.2f}: {max_err:12d} ulp', winners)
```
On macOS with clang-1600.0.26.6, this outputs:
```
-5.00 to -4.50: 411273532899 ulp [('erfc', 10000)]
-4.50 to -4.00: 3211997254 ulp [('erfc', 10000)]
-4.00 to -3.50: 25164209 ulp [('erfc', 10000)]
-3.50 to -3.00: 784219 ulp [('erfc', 9999), ('=', 1)]
-3.00 to -2.50: 24563 ulp [('erfc', 9996), ('erf', 3), ('=', 1)]
-2.50 to -2.00: 1535 ulp [('erfc', 9921), ('erf', 45), ('=', 34)]
-2.00 to -1.50: 95 ulp [('erfc', 9490), ('erf', 298), ('=', 212)]
-1.50 to -1.00: 11 ulp [('erfc', 7540), ('=', 1246), ('erf', 1214)]
-1.00 to -0.50: 2 ulp [('erfc', 4211), ('=', 4039), ('erf', 1750)]
-0.50 to 0.00: 1 ulp [('=', 9764), ('erfc', 155), ('erf', 81)]
0.00 to 0.50: 1 ulp [('=', 9228), ('erf', 698), ('erfc', 74)]
0.50 to 1.00: 1 ulp [('=', 9553), ('erfc', 364), ('erf', 83)]
1.00 to 1.50: 1 ulp [('=', 8708), ('erfc', 1156), ('erf', 136)]
1.50 to 2.00: 1 ulp [('=', 8792), ('erfc', 1167), ('erf', 41)]
2.00 to 2.50: 1 ulp [('=', 8722), ('erfc', 1271), ('erf', 7)]
2.50 to 3.00: 1 ulp [('=', 8780), ('erfc', 1220)]
3.00 to 3.50: 1 ulp [('=', 8728), ('erfc', 1272)]
3.50 to 4.00: 1 ulp [('=', 8730), ('erfc', 1270)]
4.00 to 4.50: 1 ulp [('=', 8742), ('erfc', 1258)]
4.50 to 5.00: 1 ulp [('=', 8756), ('erfc', 1244)]
```
The results show massive improvement for negative inputs. For positive inputs, the difference is no more than 1 ulp and `erfc` wins in every bucket except for `0.00 to 0.50`.
@tim-one ran this on a Windows build (which uses a different math library) and found that "on Windows too there was no bin in which erf won more often than erfc".
<!-- gh-linked-prs -->
### Linked PRs
* gh-132895
* gh-133106
<!-- /gh-linked-prs -->
| 63da5cc1504f066b31374027f637b4b021445d6b | b1fc8b69ec4c29026cd8786fc5da0c498c7dcd57 |
python/cpython | python__cpython-133208 | # Socket file descriptor races in GIL-enabled build
# Bug report
In the free threading build, but not in the default GIL-enabled build, reads and writes of `sock_fd` use relaxed atomics.
This can lead to data races in the GIL-enabled build, because the `sock_fd` field is read while the GIL is released.
https://github.com/python/cpython/blob/e1c09fff054ebcb90e72bba25ef7332bcabec92b/Modules/socketmodule.c#L571-L587
https://github.com/python/cpython/actions/runs/14630615998/job/41052077715?pr=131174
Thread sanitizer:
```
WARNING: ThreadSanitizer: data race (pid=10596)
Read of size 4 at 0x7feeb9bc4470 by thread T32:
#0 get_sock_fd /home/runner/work/cpython/cpython/./Modules/socketmodule.c:603:15 (_socket.cpython-314d-x86_64-linux-gnu.so+0x147a2) (BuildId: 5fd9421785689189a09a4eaff953e770708561c7)
#1 sock_send_impl /home/runner/work/cpython/cpython/./Modules/socketmodule.c:4568:24 (_socket.cpython-314d-x86_64-linux-gnu.so+0x147a2)
#2 sock_call_ex /home/runner/work/cpython/cpython/./Modules/socketmodule.c:1013:19 (_socket.cpython-314d-x86_64-linux-gnu.so+0x120e0) (BuildId: 5fd9421785689189a09a4eaff953e770708561c7)
#3 sock_call /home/runner/work/cpython/cpython/./Modules/socketmodule.c:1065:12 (_socket.cpython-314d-x86_64-linux-gnu.so+0x120e0)
#4 sock_send /home/runner/work/cpython/cpython/./Modules/socketmodule.c:4594:9 (_socket.cpython-314d-x86_64-linux-gnu.so+0xf18c) (BuildId: 5fd9421785689189a09a4eaff953e770708561c7)
#5 method_vectorcall_VARARGS /home/runner/work/cpython/cpython/Objects/descrobject.c:325:24 (python+0x265e8f) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#6 _PyObject_VectorcallTstate /home/runner/work/cpython/cpython/./Include/internal/pycore_call.h:169:11 (python+0x24db5b) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#7 PyObject_Vectorcall /home/runner/work/cpython/cpython/Objects/call.c:327:12 (python+0x24f340) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#8 _PyEval_EvalFrameDefault /home/runner/work/cpython/cpython/Python/generated_cases.c.h:3886:35 (python+0x47f974) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#9 _PyEval_EvalFrame /home/runner/work/cpython/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x46ab79) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#10 _PyEval_Vector /home/runner/work/cpython/cpython/Python/ceval.c:1967:12 (python+0x46ab79)
#11 _PyFunction_Vectorcall /home/runner/work/cpython/cpython/Objects/call.c (python+0x24f82c) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#12 _PyObject_VectorcallTstate /home/runner/work/cpython/cpython/./Include/internal/pycore_call.h:169:11 (python+0x2550cb) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#13 method_vectorcall /home/runner/work/cpython/cpython/Objects/classobject.c:94:18 (python+0x2537c1) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#14 _PyVectorcall_Call /home/runner/work/cpython/cpython/Objects/call.c:273:16 (python+0x24f247) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#15 _PyObject_Call /home/runner/work/cpython/cpython/Objects/call.c:348:16 (python+0x24f458) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#16 PyObject_Call /home/runner/work/cpython/cpython/Objects/call.c:373:12 (python+0x24f657) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#17 _PyEval_EvalFrameDefault /home/runner/work/cpython/cpython/Python/generated_cases.c.h:2475:32 (python+0x477cac) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#18 _PyEval_EvalFrame /home/runner/work/cpython/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x46ab79) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#19 _PyEval_Vector /home/runner/work/cpython/cpython/Python/ceval.c:1967:12 (python+0x46ab79)
#20 _PyFunction_Vectorcall /home/runner/work/cpython/cpython/Objects/call.c (python+0x24f82c) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#21 _PyObject_VectorcallTstate /home/runner/work/cpython/cpython/./Include/internal/pycore_call.h:169:11 (python+0x2550cb) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#22 method_vectorcall /home/runner/work/cpython/cpython/Objects/classobject.c:94:18 (python+0x2537c1) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#23 _PyVectorcall_Call /home/runner/work/cpython/cpython/Objects/call.c:273:16 (python+0x24f247) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#24 _PyObject_Call /home/runner/work/cpython/cpython/Objects/call.c:348:16 (python+0x24f458) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#25 PyObject_Call /home/runner/work/cpython/cpython/Objects/call.c:373:12 (python+0x24f657) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#26 _PyEval_EvalFrameDefault /home/runner/work/cpython/cpython/Python/generated_cases.c.h:2475:32 (python+0x477cac) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#27 _PyEval_EvalFrame /home/runner/work/cpython/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x46ab79) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#28 _PyEval_Vector /home/runner/work/cpython/cpython/Python/ceval.c:1967:12 (python+0x46ab79)
#29 _PyFunction_Vectorcall /home/runner/work/cpython/cpython/Objects/call.c (python+0x24f82c) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#30 _PyObject_VectorcallTstate /home/runner/work/cpython/cpython/./Include/internal/pycore_call.h:169:11 (python+0x2550cb) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#31 method_vectorcall /home/runner/work/cpython/cpython/Objects/classobject.c:72:20 (python+0x253745) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#32 _PyObject_VectorcallTstate /home/runner/work/cpython/cpython/./Include/internal/pycore_call.h:169:11 (python+0x4e42d9) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#33 context_run /home/runner/work/cpython/cpython/Python/context.c:728:29 (python+0x4e42d9)
#34 _PyEval_EvalFrameDefault /home/runner/work/cpython/cpython/Python/generated_cases.c.h:3551:35 (python+0x47dd02) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#35 _PyEval_EvalFrame /home/runner/work/cpython/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x46ab79) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#40 call_method /home/runner/work/cpython/cpython/Objects/typeobject.c:3006:19 (python+0x362dc3) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#41 slot_tp_call /home/runner/work/cpython/cpython/Objects/typeobject.c:10268:12 (python+0x362dc3)
#42 _PyObject_MakeTpCall /home/runner/work/cpython/cpython/Objects/call.c:242:18 (python+0x24e2ba) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#43 _PyObject_VectorcallTstate /home/runner/work/cpython/cpython/./Include/internal/pycore_call.h:167:16 (python+0x24dbbd) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#44 PyObject_Vectorcall /home/runner/work/cpython/cpython/Objects/call.c:327:12 (python+0x24f340) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#45 _PyEval_EvalFrameDefault /home/runner/work/cpython/cpython/Python/generated_cases.c.h:3886:35 (python+0x47f974) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#46 _PyEval_EvalFrame /home/runner/work/cpython/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x46ab79) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#47 _PyEval_Vector /home/runner/work/cpython/cpython/Python/ceval.c:1967:12 (python+0x46ab79)
#48 _PyFunction_Vectorcall /home/runner/work/cpython/cpython/Objects/call.c (python+0x24f82c) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#49 _PyObject_VectorcallTstate /home/runner/work/cpython/cpython/./Include/internal/pycore_call.h:169:11 (python+0x2550cb) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#50 method_vectorcall /home/runner/work/cpython/cpython/Objects/classobject.c:94:18 (python+0x2537c1) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#51 _PyVectorcall_Call /home/runner/work/cpython/cpython/Objects/call.c:273:16 (python+0x24f247) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#52 _PyObject_Call /home/runner/work/cpython/cpython/Objects/call.c:348:16 (python+0x24f458) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#53 PyObject_Call /home/runner/work/cpython/cpython/Objects/call.c:373:12 (python+0x24f657) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#54 _PyEval_EvalFrameDefault /home/runner/work/cpython/cpython/Python/generated_cases.c.h:2475:32 (python+0x477cac) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#55 _PyEval_EvalFrame /home/runner/work/cpython/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x46ab79) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#56 _PyEval_Vector /home/runner/work/cpython/cpython/Python/ceval.c:1967:12 (python+0x46ab79)
#57 _PyFunction_Vectorcall /home/runner/work/cpython/cpython/Objects/call.c (python+0x24f82c) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#58 _PyObject_VectorcallDictTstate /home/runner/work/cpython/cpython/Objects/call.c:135:15 (python+0x24de89) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#59 _PyObject_Call_Prepend /home/runner/work/cpython/cpython/Objects/call.c:504:24 (python+0x24fc4d) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#60 call_method /home/runner/work/cpython/cpython/Objects/typeobject.c:3006:19 (python+0x362dc3) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#61 slot_tp_call /home/runner/work/cpython/cpython/Objects/typeobject.c:10268:12 (python+0x362dc3)
#62 _PyObject_MakeTpCall /home/runner/work/cpython/cpython/Objects/call.c:242:18 (python+0x24e2ba) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#63 _PyObject_VectorcallTstate /home/runner/work/cpython/cpython/./Include/internal/pycore_call.h:167:16 (python+0x24dbbd) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#64 PyObject_Vectorcall /home/runner/work/cpython/cpython/Objects/call.c:327:12 (python+0x24f340) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#65 _PyEval_EvalFrameDefault /home/runner/work/cpython/cpython/Python/generated_cases.c.h:3886:35 (python+0x47f974) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#66 _PyEval_EvalFrame /home/runner/work/cpython/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x46ab79) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#67 _PyEval_Vector /home/runner/work/cpython/cpython/Python/ceval.c:1967:12 (python+0x46ab79)
#68 _PyFunction_Vectorcall /home/runner/work/cpython/cpython/Objects/call.c (python+0x24f82c) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#69 _PyObject_VectorcallTstate /home/runner/work/cpython/cpython/./Include/internal/pycore_call.h:169:11 (python+0x2550cb) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#70 method_vectorcall /home/runner/work/cpython/cpython/Objects/classobject.c:94:18 (python+0x2537c1) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#71 _PyVectorcall_Call /home/runner/work/cpython/cpython/Objects/call.c:273:16 (python+0x24f247) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#72 _PyObject_Call /home/runner/work/cpython/cpython/Objects/call.c:348:16 (python+0x24f458) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#73 PyObject_Call /home/runner/work/cpython/cpython/Objects/call.c:373:12 (python+0x24f657) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#74 _PyEval_EvalFrameDefault /home/runner/work/cpython/cpython/Python/generated_cases.c.h:2475:32 (python+0x477cac) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#75 _PyEval_EvalFrame /home/runner/work/cpython/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x46ab79) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#76 _PyEval_Vector /home/runner/work/cpython/cpython/Python/ceval.c:1967:12 (python+0x46ab79)
#77 _PyFunction_Vectorcall /home/runner/work/cpython/cpython/Objects/call.c (python+0x24f82c) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#78 _PyObject_VectorcallDictTstate /home/runner/work/cpython/cpython/Objects/call.c:135:15 (python+0x24de89) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#79 _PyObject_Call_Prepend /home/runner/work/cpython/cpython/Objects/call.c:504:24 (python+0x24fc4d) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#80 call_method /home/runner/work/cpython/cpython/Objects/typeobject.c:3006:19 (python+0x362dc3) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#81 slot_tp_call /home/runner/work/cpython/cpython/Objects/typeobject.c:10268:12 (python+0x362dc3)
#82 _PyObject_MakeTpCall /home/runner/work/cpython/cpython/Objects/call.c:242:18 (python+0x24e2ba) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#83 _PyObject_VectorcallTstate /home/runner/work/cpython/cpython/./Include/internal/pycore_call.h:167:16 (python+0x24dbbd) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#84 PyObject_Vectorcall /home/runner/work/cpython/cpython/Objects/call.c:327:12 (python+0x24f340) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#85 _PyEval_EvalFrameDefault /home/runner/work/cpython/cpython/Python/generated_cases.c.h:1451:35 (python+0x47265b) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#86 _PyEval_EvalFrame /home/runner/work/cpython/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x46ab79) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#87 _PyEval_Vector /home/runner/work/cpython/cpython/Python/ceval.c:1967:12 (python+0x46ab79)
#88 PyEval_EvalCode /home/runner/work/cpython/cpython/Python/ceval.c:870:21 (python+0x46a727) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#89 builtin_exec_impl /home/runner/work/cpython/cpython/Python/bltinmodule.c:1158:17 (python+0x463291) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#90 builtin_exec /home/runner/work/cpython/cpython/Python/clinic/bltinmodule.c.h:568:20 (python+0x463291)
#91 cfunction_vectorcall_FASTCALL_KEYWORDS /home/runner/work/cpython/cpython/Objects/methodobject.c:470:24 (python+0x2ef6c7) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#92 _PyObject_VectorcallTstate /home/runner/work/cpython/cpython/./Include/internal/pycore_call.h:169:11 (python+0x24db5b) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#93 PyObject_Vectorcall /home/runner/work/cpython/cpython/Objects/call.c:327:12 (python+0x24f340) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#94 _PyEval_EvalFrameDefault /home/runner/work/cpython/cpython/Python/generated_cases.c.h:1451:35 (python+0x47265b) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#95 _PyEval_EvalFrame /home/runner/work/cpython/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x46ab79) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#96 _PyEval_Vector /home/runner/work/cpython/cpython/Python/ceval.c:1967:12 (python+0x46ab79)
#97 _PyFunction_Vectorcall /home/runner/work/cpython/cpython/Objects/call.c (python+0x24f82c) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#98 _PyVectorcall_Call /home/runner/work/cpython/cpython/Objects/call.c:273:16 (python+0x24f247) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#99 _PyObject_Call /home/runner/work/cpython/cpython/Objects/call.c:348:16 (python+0x24f458) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#100 PyObject_Call /home/runner/work/cpython/cpython/Objects/call.c:373:12 (python+0x24f657) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#101 pymain_run_module /home/runner/work/cpython/cpython/Modules/main.c:345:14 (python+0x5d44b7) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#102 pymain_run_python /home/runner/work/cpython/cpython/Modules/main.c (python+0x5d3285) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#103 Py_RunMain /home/runner/work/cpython/cpython/Modules/main.c:767:5 (python+0x5d3285)
#104 pymain_main /home/runner/work/cpython/cpython/Modules/main.c:797:12 (python+0x5d4369) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#105 Py_BytesMain /home/runner/work/cpython/cpython/Modules/main.c:821:12 (python+0x5d43e9) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
#106 main /home/runner/work/cpython/cpython/./Programs/python.c:15:12 (python+0x168de0) (BuildId: aaa34bd06c4565b8343ea0751a4b0118212c23b3)
SUMMARY: ThreadSanitizer: data race /home/runner/work/cpython/cpython/./Modules/socketmodule.c:603:15 in get_sock_fd
==================
ThreadSanitizer: reported 1 warnings
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-133208
* gh-133683
<!-- /gh-linked-prs -->
| 2d82ab761ab8051440e486ca68355514f3df42aa | 7d129f99ab8e7ef9382e5e5c7bb4e51ecf74e894 |
python/cpython | python__cpython-132883 | # [3.14 regression] Cannot copy Union containing objects that do not implement `__or__`
# Bug report
### Bug description:
In 3.13 it was possible to copy a Union containing any object:
```
>>> copy.copy(typing.Union[b"x", b"y"])
typing.Union[b'x', b'y']
```
But in main this fails:
```
>>> copy.copy(typing.Union[b"x", b"y"])
Traceback (most recent call last):
File "<python-input-2>", line 1, in <module>
copy.copy(typing.Union[b"x", b"y"])
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jelle/py/cpython/Lib/copy.py", line 100, in copy
return _reconstruct(x, None, *rv)
File "/Users/jelle/py/cpython/Lib/copy.py", line 234, in _reconstruct
y = func(*args)
TypeError: unsupported operand type(s) for |: 'bytes' and 'bytes'
>>>
```
This doesn't affect any types supported by the type system (all of which support `|`), but may affect users who put non-standard objects inside a Union.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-132883
<!-- /gh-linked-prs -->
| e1c09fff054ebcb90e72bba25ef7332bcabec92b | 9f5994b94cb6b8526bcb8fa29a99656dc403e25e |
python/cpython | python__cpython-133135 | # math.ldexp gives incorrect results on Windows
# Bug report
### Bug description:
```python
>>> import math
>>> math.ldexp(6993274598585239, -1126)
5e-324
>>>
```
The correct result would be 1e-323. This is obviously a bug in the Windows ldexp implementation (it works fine on Linux). But it would be good if it could be fixed on the CPython side with a workaround.
math.ldexp is used by mpmath to round from multiprecision to float, leading to incorrect rounding on Windows.
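For reference, a correctly rounded result can be cross-checked in pure Python without relying on the platform libm: `Fraction`-to-`float` conversion goes through exact integer division, which CPython rounds correctly even for subnormals. This is a verification sketch, not a proposed fix:

```python
# Correctly rounded m * 2**e via exact rational arithmetic.
from fractions import Fraction

def ldexp_checked(m, e):
    """Compute m * 2**e with correct rounding, independent of libm."""
    return float(Fraction(m) * Fraction(2) ** e)

print(ldexp_checked(6993274598585239, -1126))  # 1e-323, not 5e-324
```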
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-133135
* gh-134684
* gh-134685
<!-- /gh-linked-prs -->
| cf8941c60356acdd00055e5583a2d64761c34af4 | 0e3bc962c6462f836751e35ef35fa837fd952550 |
python/cpython | python__cpython-132872 | # Use _Alignof to query alignments in the struct module
# Feature or enhancement
### Proposal:
The struct module uses pre-C11 hacks to determine alignment for types:
https://github.com/python/cpython/blob/580888927c8eafbf3d4c6be9144684117f0a96ca/Modules/_struct.c#L79-L106
Now we can use the `_Alignof` operator, as a C11-compatible compiler is a build requirement.
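A Python-level sketch of what the pre-C11 hack computes: the alignment of a type `T` is taken as the offset of a `T` member placed right after a `char`. `ctypes` models the same C struct layout, so the two approaches should agree with `_Alignof` on the build platform:

```python
# Compare the offsetof-after-char trick with ctypes' own alignment query.
import ctypes

def old_style_align(ctype):
    class Pad(ctypes.Structure):
        _fields_ = [("c", ctypes.c_char), ("x", ctype)]
    return Pad.x.offset  # offset of `x` == alignment of ctype

for t in (ctypes.c_short, ctypes.c_int, ctypes.c_double):
    assert old_style_align(t) == ctypes.alignment(t)
```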
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-132872
<!-- /gh-linked-prs -->
| ecd03739f87889bb2f173e0a476d26b468c776c9 | c292f7f56311b305c673415651c919a7633d36cc |
python/cpython | python__cpython-132860 | # sys.remote_exec() scripts run in the `__main__` namespace
# Bug report
### Bug description:
The scripts injected by `sys.remote_exec` run in the context of the `__main__` module, meaning that they unintentionally overwrite variables used in the main script. For instance, given a `loop_forever.py` containing:
```py
import os
print(os.getpid())
x = 1
while x == 1:
pass
print(f"{x=}")
```
and an `injected.py` containing:
```py
x = 42
```
Using `sys.remote_exec()` to inject `injected.py` into an interpreter running `loop_forever.py` results in `x=42` being printed out and the script exiting.
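A minimal sketch of the intended isolation (using plain `exec` with a hypothetical namespace, not the actual `sys.remote_exec` machinery): running the injected code in a fresh namespace leaves the target's `__main__` globals untouched:

```python
# Executing injected code in its own namespace instead of __main__.
x = 1                       # stand-in for loop_forever.py's variable
injected = "x = 42"

fresh_ns = {"__name__": "__injected__"}
exec(injected, fresh_ns)

assert x == 1               # the "main script" variable is unchanged
assert fresh_ns["x"] == 42  # the injected script saw its own namespace
```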
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-132860
<!-- /gh-linked-prs -->
| a94c7528b596e9ec234f12ebeeb45fc731412b18 | 402dba29281c1e1092ff32875c91f23bacabb677 |
python/cpython | python__cpython-132828 | # "unhashable type" is a beginner-unfriendly error message
# Feature or enhancement
### Proposal:
Every time I'm teaching dicts and sets to beginners, someone tries to use lists or other mutable data structures as dict keys or set members. The error message in that case is unhelpful, because it describes the problem in very technical terms (when you're just starting you have no idea what a hash function is). It could be changed to also state what operation is impossible as a result:
```python
>>> s = set()
>>> s.add({'pages': 12, 'grade': 'A'})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Cannot use 'dict' as a set element (unhashable type).
>>> d = {}
>>> l = [1, 2, 3]
>>> d[l] = 12
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Cannot use 'list' as a dict key (unhashable type).
```
That way, the problem is stated much more in terms of what the programmer is actually trying to do.
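For context, the fix usually shown to beginners alongside the error is to convert the mutable value to an immutable, hashable counterpart before using it as a set element or dict key:

```python
# Hashable stand-ins for mutable values.
s = set()
s.add(frozenset({"pages": 12, "grade": "A"}.items()))  # frozenset is hashable

d = {}
d[tuple([1, 2, 3])] = 12                               # tuple instead of list
```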
### Has this already been discussed elsewhere?
No response given
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-132828
* gh-132847
<!-- /gh-linked-prs -->
| 426449d9834855fcf8c150889157af8c39526b81 | b2e666f30aa66b3f5e7ccbcf4e5f21e2d89e39a8 |
python/cpython | python__cpython-135606 | # test__opcode fails with missing 'jump_backward' in specialization stats
# Bug report
### Bug description:
When running the `test__opcode` test with a build that uses `--enable-pystats`, there is a mismatch between the expected specialized opcodes and the actual ones. The test is expecting `jump_backward` in the specialized opcodes list, but it's not present in the actual stats.
### Configuration
```sh
./configure --with-pydebug --enable-pystats
```
### Test Output
```sh
╰─$ ./python -m test test__opcode 82595ms
Using random seed: 2739258338
0:00:00 load avg: 0.97 Run 1 test sequentially in a single process
0:00:00 load avg: 0.97 [1/1] test__opcode
test test__opcode failed -- Traceback (most recent call last):
File "/home/arf/Desktop/cpython/Lib/test/test__opcode.py", line 131, in test_specialization_stats
self.assertCountEqual(stats.keys(), specialized_opcodes)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Element counts were not equal:
First has 0, Second has 1: 'jump_backward'
0:00:00 load avg: 0.97 [1/1/1] test__opcode failed (1 failure)
== Tests result: FAILURE ==
1 test failed:
test__opcode
Total duration: 52 ms
Total tests: run=7 failures=1
Total test files: run=1/1 failed=1
Result: FAILURE
```
### Environment
- OS: Arch Linux (Linux 6.14.2-arch1-1)
### CPython versions tested on:
CPython main branch, 3.14
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-135606
* gh-135612
<!-- /gh-linked-prs -->
| a9e66a7c506680263b39bc8c150ddc5e72213c45 | acc20a83f4d93682e91c4992c785eaf7e3d0c69c |
python/cpython | python__cpython-133209 | # `csv.writer` with `QUOTE_NONE` still requires non-empty `quotechar` and `escapechar`
# Bug report
### Bug description:
I'd like to output all quotes verbatim, without any escaping/quoting, but this fails:
```python
import csv
w = csv.writer(open('test.tsv', 'w'), delimiter = '\t', quoting = csv.QUOTE_NONE)
w.writerow(('"hello"', '"world"'))
# _csv.Error: need to escape, but no escapechar set
```
When I set `escapechar=''` or `quotechar=''`, it also fails:
```python
import csv
w = csv.writer(open('test.tsv', 'w'), delimiter = '\t', quoting = csv.QUOTE_NONE, escapechar = '')
w.writerow(('"hello"', '"world"'))
# TypeError: "escapechar" must be a 1-character string
import csv
w = csv.writer(open('test.tsv', 'w'), delimiter = '\t', quoting = csv.QUOTE_NONE, quotechar = '')
w.writerow(('"hello"', '"world"'))
# TypeError: "quotechar" must be a 1-character string
```
I would suggest that under `QUOTE_NONE`, escaping of quotes should not be performed at all, or at least that an empty `escapechar` / `quotechar` be allowed
---
The workaround / hack I found:
```python
import csv
w = csv.writer(open('test.tsv', 'w'), delimiter = '\t', quoting = csv.QUOTE_NONE, quotechar = '\t')
w.writerow(('"hello"', '"world"'))
```
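An alternative sketch that sidesteps csv's quoting machinery entirely: for tab-separated output whose fields are known not to contain tabs or newlines, joining the fields manually writes all quotes verbatim:

```python
# Manual TSV output: no quoting, no escaping, quotes preserved as-is.
rows = [('"hello"', '"world"')]
lines = ["\t".join(row) for row in rows]
out = "\n".join(lines) + "\n"
# out == '"hello"\t"world"\n'
```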
### CPython versions tested on:
3.12
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-133209
* gh-133241
* gh-135050
* gh-136113
* gh-136114
<!-- /gh-linked-prs -->
| 536a5ff15357a7d616c8226a642443e699f30244 | 980a56843bf631ea80c1486a367d41031dec6a7e |
python/cpython | python__cpython-132799 | # Schedule removal of PyUnicode_AsDecoded/Encoded functions
- PyUnicode_AsDecodedObject
- PyUnicode_AsDecodedUnicode
- PyUnicode_AsEncodedUnicode
- PyUnicode_AsEncodedObject
Were deprecated in 3.6 in https://github.com/python/cpython/commit/0093907f0ed88c6aa3561cc4328154ae907bc976 by @serhiy-storchaka
They have no documentation.
Will schedule them for removal in 3.15 as suggested by @vstinner in https://github.com/python/cpython/issues/46236#issuecomment-2821199571
<!-- gh-linked-prs -->
### Linked PRs
* gh-132799
<!-- /gh-linked-prs -->
| f6fb498c9759bc4ab615761639c5147d2d870c56 | 8783cec9b67b3860bda9496611044b6f310c6761 |
python/cpython | python__cpython-132782 | # NotShareableError Should Inherit from TypeError
# Bug report
### Bug description:
Shareability is a feature of types, not values, so `NotShareableError` should be a subclass of `TypeError`, not `ValueError`.
### CPython versions tested on:
3.14, CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-132782
* gh-132973
* gh-132989
<!-- /gh-linked-prs -->
| ca12a744abd02d0d36adfb1444c1ba31623d617d | 8a4d4f37abb9fa639fdc5d7003c4067904cdcc6b |
python/cpython | python__cpython-132780 | # Tools/build/generate_global_strings.py Doesn't Handle Duplicates Correctly
# Bug report
### Bug description:
I'll have a fix up shortly.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-132780
<!-- /gh-linked-prs -->
| 9be364568835e467199ccd65bbcd786f9c8171dc | 09b624b80f54e1f97812981cfff9fa374bd5360f |
python/cpython | python__cpython-132779 | # memoryview Cross-Interpreter Data Has Some Minor Issues
# Bug report
### Bug description:
* dealloc is incomplete
* errors aren't handled quite right
* buffer not cleaned up if xidata never used
* some minor cleanup is needed
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-132779
* gh-132821
* gh-132960
<!-- /gh-linked-prs -->
| b5bf8c80a921679b23548453565f6fd1f79901f2 | a4ea80d52394bafffb2257abbe815c7ffdb003a3 |
python/cpython | python__cpython-132974 | # Improvements to test.support.interpreters.Interpreter
`Interpreter.call()` should be able to support arbitrary callables, full args, and return values.
<!-- gh-linked-prs -->
### Linked PRs
* gh-132974
* gh-132977
* gh-132978
* gh-132979
* gh-132981
* gh-133101
* gh-133107
* gh-133108
* gh-133128
* gh-133221
* gh-133265
* gh-133472
* gh-133474
* gh-133475
* gh-133480
* gh-133481
* gh-133482
* gh-133483
* gh-133484
* gh-133497
* gh-133528
* gh-133625
* gh-133676
* gh-133955
* gh-134418
* gh-134439
* gh-134440
* gh-134441
* gh-134452
* gh-134465
* gh-134507
* gh-134511
* gh-134515
* gh-134522
* gh-134530
* gh-134532
* gh-134599
* gh-134735
* gh-134736
* gh-134758
* gh-134794
* gh-134900
* gh-134901
* gh-134933
* gh-135369
* gh-135492
* gh-135595
* gh-135638
* gh-137686
<!-- /gh-linked-prs -->
| 6f0432599297635492597e3766259390e8331c62 | b739ec5ab78ed55367516de7a11e732cb3f1081d |
python/cpython | python__cpython-132770 | # Static analysis reveals that `tok_mode->last_expr_buffer[i] != '\0' && i < input_length` is not safe
# Bug report
This code can be improved:
https://github.com/python/cpython/blob/132b6bc98f47a4d897dead8635b5a50a0baee485/Parser/lexer/lexer.c#L143
It would be safer to first check the value of `i` and then try to access the `[i]` index. This way it is harder to get a read out of bounds.
Found by PVS-Studio in https://habr.com/ru/companies/pvs-studio/articles/902048/
<!-- gh-linked-prs -->
### Linked PRs
* gh-132770
* gh-132788
<!-- /gh-linked-prs -->
| ea8ec95cfadbf58a11ef8e41341254d982a1a479 | 8516343d3aeb12e5871ecd1701649fd1fcf46fc0 |
python/cpython | python__cpython-133627 | # dict_set_fromkeys() calculates size of dictionary improperly
# Bug report
### Bug description:
The function dict_set_fromkeys() in the file dictobject.c adds elements of an iterable to an existing dictionary. The size of the expanded dictionary is estimated as PySet_GET_SIZE(iterable) and the size of the existing dictionary is not considered.
This is illogical.
A more reasonable estimate is "mp->ma_used + PySet_GET_SIZE(iterable)". What turns this into a bug is the fact that a large dictionary plus a small iterable can cause the function to loop forever.
The following code is adapted from the official tests, file test_dict.py. Official binaries for Python 3.13.3 compiled with MSVC will loop forever when running this modified test (the official test has range(10)). Other platforms may be affected as well.
```python
class baddict3(dict):
def __new__(cls):
return d
d = {i : i for i in range(17)}
res = d.copy()
res.update(a=None, b=None, c=None)
print(res)
q = baddict3.fromkeys({"a", "b", "c"})
print(q)
```
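The intended semantics, sketched on an unaffected build (mirroring the test_dict.py case above): `fromkeys()` may receive a pre-populated dict back from `__new__`, so the size estimate has to cover the existing entries plus the iterable:

```python
# fromkeys() on a subclass whose __new__ returns an existing, non-empty dict:
# the result must hold the old entries plus the new keys.
existing = {i: i for i in range(17)}

class KeepDict(dict):
    def __new__(cls):
        return existing        # fromkeys() populates this existing dict

result = KeepDict.fromkeys({"a", "b", "c"})
assert result is existing
assert len(result) == 17 + 3   # old entries + new keys (values default to None)
```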
A minor error is in the function calculate_log2_keysize(minsize), which has three branches for calculating its result. However, when minsize < 10 the results of the three branches can differ. Since _BitScanReverse64() is Windows-only, other platforms might be unaffected.
A final unrelated note. PEP 11 – CPython platform support states that aarch64-unknown-linux-gnu and x86_64-unknown-linux-gnu are Tier 1 AND Tier 2.
### CPython versions tested on:
3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-133627
* gh-133685
* gh-133686
<!-- /gh-linked-prs -->
| 421ba589d02b53131f793889d221ef3b1f1410a4 | 2d82ab761ab8051440e486ca68355514f3df42aa |
python/cpython | python__cpython-132759 | # Build fails with --enable-pystats and --with-tail-call-interp due to undeclared lastopcode variable
# Bug report
### Bug description:
When attempting to build CPython with both the `--enable-pystats` and `--with-tail-call-interp` options,
the build fails with errors about an undeclared identifier `lastopcode`.
### Configuration
```sh
./configure CC=clang LD=clang --with-tail-call-interp --enable-pystats
```
### Build Output
```sh
In file included from Python/ceval.c:976:
Python/generated_cases.c.h:28:13: error: use of undeclared identifier 'lastopcode'
28 | INSTRUCTION_STATS(BINARY_OP);
| ^
Python/ceval_macros.h:66:48: note: expanded from macro 'INSTRUCTION_STATS'
66 | if (_Py_stats) _Py_stats->opcode_stats[lastopcode].pair_count[op]++; \
| ^
In file included from Python/ceval.c:976:
Python/generated_cases.c.h:28:13: error: use of undeclared identifier 'lastopcode'
Python/ceval_macros.h:67:9: note: expanded from macro 'INSTRUCTION_STATS'
67 | lastopcode = op; \
| ^
In file included from Python/ceval.c:976:
Python/generated_cases.c.h:93:13: error: use of undeclared identifier 'lastopcode'
93 | INSTRUCTION_STATS(BINARY_OP_ADD_FLOAT);
| ^
Python/ceval_macros.h:66:48: note: expanded from macro 'INSTRUCTION_STATS'
66 | if (_Py_stats) _Py_stats->opcode_stats[lastopcode].pair_count[op]++; \
| ^
In file included from Python/ceval.c:976:
Python/generated_cases.c.h:93:13: error: use of undeclared identifier 'lastopcode'
Python/ceval_macros.h:67:9: note: expanded from macro 'INSTRUCTION_STATS'
67 | lastopcode = op; \
| ^
In file included from Python/ceval.c:976:
Python/generated_cases.c.h:151:13: error: use of undeclared identifier 'lastopcode'
151 | INSTRUCTION_STATS(BINARY_OP_ADD_INT);
| ^
Python/ceval_macros.h:66:48: note: expanded from macro 'INSTRUCTION_STATS'
66 | if (_Py_stats) _Py_stats->opcode_stats[lastopcode].pair_count[op]++; \
| ^
In file included from Python/ceval.c:976:
Python/generated_cases.c.h:151:13: error: use of undeclared identifier 'lastopcode'
Python/ceval_macros.h:67:9: note: expanded from macro 'INSTRUCTION_STATS'
67 | lastopcode = op; \
| ^
In file included from Python/ceval.c:976:
Python/generated_cases.c.h:211:13: error: use of undeclared identifier 'lastopcode'
211 | INSTRUCTION_STATS(BINARY_OP_ADD_UNICODE);
| ^
Python/ceval_macros.h:66:48: note: expanded from macro 'INSTRUCTION_STATS'
66 | if (_Py_stats) _Py_stats->opcode_stats[lastopcode].pair_count[op]++; \
| ^
In file included from Python/ceval.c:976:
Python/generated_cases.c.h:211:13: error: use of undeclared identifier 'lastopcode'
Python/ceval_macros.h:67:9: note: expanded from macro 'INSTRUCTION_STATS'
67 | lastopcode = op; \
| ^
In file included from Python/ceval.c:976:
Python/generated_cases.c.h:271:13: error: use of undeclared identifier 'lastopcode'
271 | INSTRUCTION_STATS(BINARY_OP_EXTEND);
| ^
Python/ceval_macros.h:66:48: note: expanded from macro 'INSTRUCTION_STATS'
66 | if (_Py_stats) _Py_stats->opcode_stats[lastopcode].pair_count[op]++; \
| ^
In file included from Python/ceval.c:976:
Python/generated_cases.c.h:271:13: error: use of undeclared identifier 'lastopcode'
Python/ceval_macros.h:67:9: note: expanded from macro 'INSTRUCTION_STATS'
67 | lastopcode = op; \
| ^
In file included from Python/ceval.c:976:
Python/generated_cases.c.h:335:13: error: use of undeclared identifier 'lastopcode'
335 | INSTRUCTION_STATS(BINARY_OP_INPLACE_ADD_UNICODE);
| ^
Python/ceval_macros.h:66:48: note: expanded from macro 'INSTRUCTION_STATS'
66 | if (_Py_stats) _Py_stats->opcode_stats[lastopcode].pair_count[op]++; \
| ^
In file included from Python/ceval.c:976:
Python/generated_cases.c.h:335:13: error: use of undeclared identifier 'lastopcode'
Python/ceval_macros.h:67:9: note: expanded from macro 'INSTRUCTION_STATS'
67 | lastopcode = op; \
| ^
In file included from Python/ceval.c:976:
Python/generated_cases.c.h:418:13: error: use of undeclared identifier 'lastopcode'
418 | INSTRUCTION_STATS(BINARY_OP_MULTIPLY_FLOAT);
| ^
Python/ceval_macros.h:66:48: note: expanded from macro 'INSTRUCTION_STATS'
66 | if (_Py_stats) _Py_stats->opcode_stats[lastopcode].pair_count[op]++; \
| ^
In file included from Python/ceval.c:976:
Python/generated_cases.c.h:418:13: error: use of undeclared identifier 'lastopcode'
Python/ceval_macros.h:67:9: note: expanded from macro 'INSTRUCTION_STATS'
67 | lastopcode = op; \
| ^
In file included from Python/ceval.c:976:
Python/generated_cases.c.h:476:13: error: use of undeclared identifier 'lastopcode'
476 | INSTRUCTION_STATS(BINARY_OP_MULTIPLY_INT);
| ^
Python/ceval_macros.h:66:48: note: expanded from macro 'INSTRUCTION_STATS'
66 | if (_Py_stats) _Py_stats->opcode_stats[lastopcode].pair_count[op]++; \
| ^
In file included from Python/ceval.c:976:
Python/generated_cases.c.h:476:13: error: use of undeclared identifier 'lastopcode'
Python/ceval_macros.h:67:9: note: expanded from macro 'INSTRUCTION_STATS'
67 | lastopcode = op; \
| ^
In file included from Python/ceval.c:976:
Python/generated_cases.c.h:536:13: error: use of undeclared identifier 'lastopcode'
536 | INSTRUCTION_STATS(BINARY_OP_SUBSCR_DICT);
| ^
Python/ceval_macros.h:66:48: note: expanded from macro 'INSTRUCTION_STATS'
66 | if (_Py_stats) _Py_stats->opcode_stats[lastopcode].pair_count[op]++; \
| ^
In file included from Python/ceval.c:976:
Python/generated_cases.c.h:536:13: error: use of undeclared identifier 'lastopcode'
Python/ceval_macros.h:67:9: note: expanded from macro 'INSTRUCTION_STATS'
67 | lastopcode = op; \
| ^
In file included from Python/ceval.c:976:
Python/generated_cases.c.h:602:13: error: use of undeclared identifier 'lastopcode'
602 | INSTRUCTION_STATS(BINARY_OP_SUBSCR_GETITEM);
| ^
Python/ceval_macros.h:66:48: note: expanded from macro 'INSTRUCTION_STATS'
66 | if (_Py_stats) _Py_stats->opcode_stats[lastopcode].pair_count[op]++; \
| ^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
clang -c -fno-strict-overflow -Wsign-compare -Wunreachable-code -DNDEBUG -g -O3 -Wall -std=c11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I./Include/internal/mimalloc -I. -I./Include -DPy_BUILD_CORE -o Python/context.o Python/context.c
20 errors generated.
make: *** [Makefile:3221: Python/ceval.o] Error 1
make: *** Waiting for unfinished jobs....
```
### Environment
- OS: Arch Linux (Linux 6.14.2-arch1-1)
- Compiler: clang version 19.1.7
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-132759
<!-- /gh-linked-prs -->
| 6430c634da4332550744fe8f50b12c927b8382f6 | de9deb7ca7120fbb5cbbb53044ce91087065e723 |
python/cpython | python__cpython-132772 | # SIGSEGV with method descriptors called without a second argument
# Crash report
### What happened?
Since Python 3.12, the following code triggers a segmentation fault:
```python
import _io, sys; _io._TextIOBase.detach.__get__(sys.stderr)
```
### CPython versions tested on:
3.9, 3.11, 3.12, 3.13, 3.14
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-132772
* gh-132786
<!-- /gh-linked-prs -->
| fa70bf85931eff62cb24fb2f5b7e86c1dcf642d0 | c8e0b6e6849fc3d93c33050b43fe866391625157 |
python/cpython | python__cpython-132756 | # Rewrite the fcntl module
The current code of the `fcntl` module is a bit of a mess. This was caused partly by the nature of the underlying C API (`fcntl()` and `ioctl()` take an argument as a C int or a pointer to a structure, and the type of the argument, as well as the size of the structure, is not explicitly specified, but depends on the opcode and is platform dependent), and partly by decades of incremental changes.
At Python level, `fcntl()` and `ioctl()` take an optional argument which can be an integer, a string, or a mutable or immutable bytes-like object. The result can be an integer or a bytes object. The input mutable bytes-like object can be modified in-place in `ioctl()`. Issues, differences and inconsistencies between `fcntl()` and `ioctl()`:
* `ioctl()` first tries to interpret the argument as a mutable bytes-like object, then as a string or immutable bytes-like object, then as an integer. `fcntl()` does the same except for trying a mutable bytes-like object. Raising and silencing exceptions is not particularly efficient, and silencing arbitrary exceptions is not good.
* `fcntl()` and `ioctl()` accept a string argument. This looks like an unintentional side effect in Python 2 which was not fixed in Python 3. I do not know of any operation code which takes a string argument -- they take binary structures. Even if there is an operation which takes a char array, most likely it should be encoded using the filesystem encoding instead of UTF-8, so `bytes` is preferable to `str`. The string argument should be deprecated.
* `ioctl()` makes the temporary buffer null-terminated, but `fcntl()` does not.
* `ioctl()` has the `mutate_arg` parameter, but `fcntl()` does not, even if some operations mutate argument.
* `ioctl()` copies the content of the mutable bytes-like object to a temporary buffer if `mutate_arg` is true and it fits in a buffer, and then copies it back. It was needed when the parsing code used "w#", because the object could be resized after releasing the GIL. And it was wrong, because copying back will not work after resizing. But now "w*" is used ("w#" no longer supported), so no copying to temporary buffer is needed.
* `ioctl()` does not release the GIL when calling the C `ioctl()` if the mutable bytes-like object does not fit in the temporary buffer. It could, for the reasons described above.
* `ioctl()` accepts the opcode as ~`unsigned int`~ `unsigned long`, while `fcntl()` accepts only `int`. `fcntl()` accepts the integer argument as `unsigned int`, `ioctl()` accepts only `int`. Using an unsigned format makes the code more error-prone (#132629 should mitigate this), but makes it more usable if some opcode or value constants in C have the MSB set. I think that both functions should accept `unsigned int` (`unsigned long`?) for both arguments.
* `fcntl()` automatically retries on EINTR, `ioctl()` raises an OSError.
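A small demonstration of the argument-type paths described above, using `FIONREAD` (bytes pending on a descriptor; this sketch assumes a Linux-like platform where `termios.FIONREAD` works on pipes). An immutable bytes argument makes `ioctl()` return a new bytes result; a mutable bytearray is, with the default `mutate_flag`, modified in place and the integer return of the C call comes back:

```python
# Exercise ioctl() with both immutable and mutable buffer arguments.
import fcntl
import os
import struct
import termios

r, w = os.pipe()
os.write(w, b"hello")

# Immutable bytes: ioctl() returns a new bytes object with the result.
res = fcntl.ioctl(r, termios.FIONREAD, struct.pack("i", 0))
pending = struct.unpack("i", res)[0]            # 5 bytes pending

# Mutable bytearray: modified in place; ioctl() returns the C call's int.
buf = bytearray(struct.pack("i", 0))
rc = fcntl.ioctl(r, termios.FIONREAD, buf)      # mutate_flag defaults to True
pending2 = struct.unpack("i", bytes(buf))[0]    # 5 bytes pending

os.close(r)
os.close(w)
```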
<!-- gh-linked-prs -->
### Linked PRs
* gh-132756
* gh-132764
* gh-132765
* gh-132768
* gh-132791
* gh-132832
* gh-133066
* gh-133070
* gh-133104
<!-- /gh-linked-prs -->
| a04390b4dad071195f834db347aa686292811051 | 78cfee6f0920ac914ed179c013f61c53ede16fa9 |
python/cpython | python__cpython-132738 | # cProfile cannot run code that pickles objects defined in __main__
# Bug report
### Bug description:
```python
import pickle
import sys
from dataclasses import dataclass, field
@dataclass
class State:
x: list[int] = field(default_factory=list)
print(pickle.dumps(State([0])))
```
The code above will run successfully on its own, but will fail if run with `cProfile`, with a message that `__main__.State` could not be found.
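A workaround sketch (my suggestion, not from the report): profile from inside the script with explicit namespaces via `cProfile.runctx`, so `__main__` keeps its real globals and pickle can resolve module-level classes by qualified name. A picklable dict stands in for the dataclass here:

```python
# Profile a pickle round-trip with caller-supplied namespaces.
import cProfile
import pickle

def work():
    data = {"x": [0]}            # picklable stand-in for the dataclass
    return pickle.loads(pickle.dumps(data))

ns = {"work": work, "out": []}
cProfile.runctx("out.append(work())", ns, ns)   # prints profile stats
result = ns["out"][0]
```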
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-132738
<!-- /gh-linked-prs -->
| c7a7aa9a57c25ef2666f7dbf62edab882747af6b | e1c09fff054ebcb90e72bba25ef7332bcabec92b |
python/cpython | python__cpython-132735 | # Add constants for Bluetooth socket support
Bluetooth socket support was fixed and many new features were added over the last month. But while the code changed, the constants needed to use it were not always added. The proposed PR adds a lot of constants and updates the documentation.
<!-- gh-linked-prs -->
### Linked PRs
* gh-132735
* gh-132829
<!-- /gh-linked-prs -->
| e84624450dc0494271119018c699372245d724d9 | 05d0559db04743aa47d485d53437edbb31d8e967 |
python/cpython | python__cpython-133035 | # Improve `sysconfig` CLI
### Proposal:
Currently, the `sysconfig` module's CLI is fairly hidden and inaccessible. I think there are two main issues with the module CLI.
1. There is no "Command Line Interface" section of the [docs](https://docs.python.org/dev/library/sysconfig.html) (only have [example](https://docs.python.org/dev/library/sysconfig.html#sysconfig-cli) for `python -m sysconfig` output) like there is with some other modules e.g. [random](https://docs.python.org/3/library/random.html#command-line-usage). (It is, however, listed under the [Modules command-line interface section](https://docs.python.org/dev/library/cmdline.html))
2. There is no help section when running the sysconfig CLI; the CLI takes the argument `--generate-posix-vars`, which I could only find by looking at the source in [`./Lib/sysconfig/__main__.py`](https://github.com/python/cpython/blob/main/Lib/sysconfig/__main__.py).
Inspired by #131524
I suggest **1.** adding a full-fledged `CommandLineTest` class to the tests (currently there is only `test_main`, which only checks that the output is present), **2.** updating the documentation so that it matches other command-line interfaces, and **3.** using `argparse` as a quality CLI tool
P.S.: `sysconfig` CLI was added in January 2010 for Python 3.2 (https://github.com/python/cpython/commit/edacea30e457e151611a9fe560120cedb3bdc527)
I'm happy to submit a PR if needed.
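A minimal `argparse` sketch of what point 3 proposes (the parser shape is my assumption; only `--generate-posix-vars` is taken from the current `Lib/sysconfig/__main__.py`):

```python
# Hypothetical argparse-based front end for `python -m sysconfig`.
import argparse

def make_parser():
    parser = argparse.ArgumentParser(
        prog="python -m sysconfig",
        description="Show Python build-time configuration variables.",
    )
    parser.add_argument(
        "--generate-posix-vars", action="store_true",
        help="generate the _sysconfigdata module (build-time use)",
    )
    return parser

args = make_parser().parse_args(["--generate-posix-vars"])
```

With this, `python -m sysconfig --help` would document the flag instead of leaving it discoverable only from the source.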
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-133035
* gh-133088
<!-- /gh-linked-prs -->
| 0f84f6b334e4340798fb2ec4e1ca21ad838db339 | ee9102a53565f694c37ca1bcc6bb1239db7207d9 |
python/cpython | python__cpython-132774 | # AMD64 FreeBSD14/15 3.x multiprocessing test failures
# Bug report
### Bug description:
I don't know which one is the faulty commit so I'm tagging everyone involved in the recent failures of this build bot.
The build bot is failing since https://github.com/python/cpython/commit/0c356c865a6d3806724f54d6d463b2e5289f6afa (cc @skirpichev) but there were warnings since https://github.com/python/cpython/commit/954b2cf031fb84ff3386251d5c45281f47229003 (cc @bswck @sobolevn) for the same tests (I doubt Sergey's commit is actually the faulty one as it shouldn't have touched the interpreter's code itself)
Before that, similar failures appeared since https://github.com/python/cpython/commit/c66ffcf8e3ab889a30aae43520aa29c167344bd3 (cc @Yhg1s) and before then, it was failing for a different reason but it was fixed in https://github.com/python/cpython/commit/102f825c5112cbe6985edc0971822b07bd778135.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-132774
* gh-132842
* gh-132846
<!-- /gh-linked-prs -->
| c8e0b6e6849fc3d93c33050b43fe866391625157 | 01317bb449612ea1dbbf36e439437909abd79a45 |
python/cpython | python__cpython-132801 | # Segfault in `union_repr` from `list_repr_impl` in free-threaded build
# Crash report
### What happened?
Calling `repr` in many threads on a list containing a large `typing.Union` segfaults in a free-threaded build:
```python
import abc
import builtins
import collections.abc
import itertools
import types
import typing
from functools import reduce
from operator import or_
abc_types = [cls for cls in abc.__dict__.values() if isinstance(cls, type)]
builtins_types = [cls for cls in builtins.__dict__.values() if isinstance(cls, type)]
collections_abc_types = [cls for cls in collections.abc.__dict__.values() if isinstance(cls, type)]
collections_types = [cls for cls in collections.__dict__.values() if isinstance(cls, type)]
itertools_types = [cls for cls in itertools.__dict__.values() if isinstance(cls, type)]
types_types = [cls for cls in types.__dict__.values() if isinstance(cls, type)]
typing_types = [cls for cls in typing.__dict__.values() if isinstance(cls, type)]
all_types = (abc_types + builtins_types + collections_abc_types + collections_types + itertools_types
+ types_types + typing_types)
all_types = [t for t in all_types if not issubclass(t, BaseException)]
BIG_UNION = reduce(or_, all_types, int)
from threading import Thread
from time import sleep
for x in range(100):
union_list = [int | BIG_UNION] * 17
def stress_list():
for x in range(3):
try:
union_list.pop()
except Exception:
pass
repr(union_list)
sleep(0.006)
union_list.__getitem__(None, None)
alive = []
for x in range(25):
alive.append(Thread(target=stress_list, args=()))
for t in alive:
t.start()
```
Example segfault backtrace:
```
Thread 60 "Thread-59 (stre" received signal SIGSEGV, Segmentation fault.
0x0000555555d21751 in _Py_TYPE (ob=<unknown at remote 0xdddddddddddddddd>) at ./Include/object.h:270
270 return ob->ob_type;
#0 0x0000555555d21751 in _Py_TYPE (ob=<unknown at remote 0xdddddddddddddddd>) at ./Include/object.h:270
#1 union_repr (self=<optimized out>) at Objects/unionobject.c:296
#2 0x0000555555b8949a in PyObject_Repr (v=<unknown at remote 0x7fffb48464b0>) at Objects/object.c:776
#3 0x0000555555cc06a1 in PyUnicodeWriter_WriteRepr (writer=writer@entry=0x7fff660907f0,
obj=<unknown at remote 0x207c>) at Objects/unicodeobject.c:13951
#4 0x0000555555aeba03 in list_repr_impl (v=0x7fffb4a8dc90) at Objects/listobject.c:606
#5 list_repr (self=[]) at Objects/listobject.c:633
#6 0x0000555555b8949a in PyObject_Repr (v=[]) at Objects/object.c:776
#7 0x0000555555e07abf in _PyEval_EvalFrameDefault (tstate=<optimized out>, frame=<optimized out>,
throwflag=<optimized out>) at Python/generated_cases.c.h:2306
#8 0x0000555555ddcf53 in _PyEval_EvalFrame (tstate=0x529000325210, frame=0x529000384328, throwflag=0)
at ./Include/internal/pycore_ceval.h:119
#9 _PyEval_Vector (tstate=0x529000325210, func=0x7fffb4828dd0, locals=0x0, args=<optimized out>,
argcount=1, kwnames=0x0) at Python/ceval.c:1917
#10 0x0000555555a4f42b in _PyObject_VectorcallTstate (tstate=0x529000325210,
callable=<function at remote 0x7fffb4828dd0>, args=0x207c, nargsf=3, nargsf@entry=1,
kwnames=<unknown at remote 0x7fff66090450>, kwnames@entry=0x0) at ./Include/internal/pycore_call.h:169
#11 0x0000555555a4ccbf in method_vectorcall (method=<optimized out>, args=<optimized out>,
nargsf=<optimized out>, kwnames=<optimized out>) at Objects/classobject.c:72
#12 0x0000555555e8005b in _PyObject_VectorcallTstate (tstate=0x529000325210,
callable=<method at remote 0x7fff660c0eb0>, args=0x7fff93df06d8, nargsf=0, kwnames=0x0)
at ./Include/internal/pycore_call.h:169
#13 context_run (self=<_contextvars.Context at remote 0x7fffb4a8ebf0>, args=<optimized out>,
nargs=<optimized out>, kwnames=0x0) at Python/context.c:728
#14 0x0000555555e0a3e7 in _PyEval_EvalFrameDefault (tstate=<optimized out>, frame=<optimized out>,
throwflag=<optimized out>) at Python/generated_cases.c.h:3551
```
Once it aborted with message:
```
python: Objects/unionobject.c:296: PyObject *union_repr(PyObject *): Assertion `PyTuple_Check(alias->args)' failed.
```
Found using [fusil](https://github.com/devdanzin/fusil) by @vstinner.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a7+ experimental free-threading build (heads/main:741c6386b86, Apr 18 2025, 15:04:45) [Clang 19.1.7 (++20250114103320+cd708029e0b2-1~exp1~20250114103432.75)]
<!-- gh-linked-prs -->
### Linked PRs
* gh-132801
* gh-132802
* gh-132809
* gh-132811
* gh-132839
* gh-132899
<!-- /gh-linked-prs -->
| a4ea80d52394bafffb2257abbe815c7ffdb003a3 | 722c501dba2db391012aa1530144ecdfbddec9e8 |
python/cpython | python__cpython-132901 | # uuid.getnode() is not tied to MAC address when using `libuuid`
# Bug report
### Bug description:
According to the docs, [uuid.getnode()](https://docs.python.org/3/library/uuid.html#uuid.getnode) is meant to return a number based on the MAC address of the network interface. However, if Python is built with `libuuid`, this does not match the observed behavior. Instead, `getnode()` often produces a random number.
Versions of Python obtained through python.org, `brew`, and `pyenv` don't appear to display this bug. But versions obtained through `uv` and `python-build-standalone` do.
The key difference seems to be which branch of this `try` block inside `uuid.py` is executed:
```python
# Import optional C extension at toplevel, to help disabling it when testing
try:
import _uuid
_generate_time_safe = getattr(_uuid, "generate_time_safe", None)
_UuidCreate = getattr(_uuid, "UuidCreate", None)
_has_uuid_generate_time_safe = _uuid.has_uuid_generate_time_safe
except ImportError:
_uuid = None
_generate_time_safe = None
_UuidCreate = None
_has_uuid_generate_time_safe = None
```
When the top branch executes, `getnode()` produces a random number.
When the bottom branch executes, `getnode()` produces a number tied to the MAC address of the network interface.
### Steps to reproduce:
#### Case 1: working as intended
Using a version of Python compiled with these flags...
```
HAVE_UUID_CREATE = "0"
HAVE_UUID_ENC_BE = "0"
HAVE_UUID_GENERATE_TIME_SAFE = "1"
HAVE_UUID_H = "1"
HAVE_UUID_UUID_H = "1"
```
...we get this behavior:
```console
$ python -c "import uuid; print(uuid.getnode())"
some number X
$ python -c "import uuid; print(uuid.getnode())"
the same number X
```
#### Case 2: buggy behavior
Using a version of Python compiled with these flags...
```
HAVE_UUID_CREATE = "0"
HAVE_UUID_ENC_BE = "0"
HAVE_UUID_GENERATE_TIME_SAFE = "0"
HAVE_UUID_H = "0"
HAVE_UUID_UUID_H = "1"
```
...we get this behavior:
```console
$ python -c "import uuid; print(uuid.getnode())"
some number X
$ python -c "import uuid; print(uuid.getnode())"
some other number Y!!!
```
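One way to detect the buggy case without recompiling: per the `uuid` documentation, when `getnode()` cannot obtain a hardware address it falls back to a random 48-bit number with the multicast bit (the least significant bit of the first octet, i.e. bit 40 of the value) set to 1. A real MAC address normally has that bit clear. A hedged check (helper name is mine):

```python
import uuid


def node_is_random(node: int) -> bool:
    # The multicast bit is the least significant bit of the first octet
    # of the 48-bit node, i.e. bit 40 of the integer value. uuid.getnode()
    # sets it when it falls back to a random number.
    return bool((node >> 40) & 1)


print(node_is_random(uuid.getnode()))
```

If this prints `True`, the returned node is the documented random fallback rather than a MAC-derived value.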
### CPython versions tested on:
3.13
### Operating systems tested on:
macOS, Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-132901
* gh-134697
* gh-134704 [**to be merged in August 2025**]
* gh-134705
* gh-134707
* gh-134709 [included in gh-134704]
<!-- /gh-linked-prs -->
| 9eb84d83e00070cec3cfe78f1d0c7a7a0fbef30f | cb8045e86c4fadfd847d614193f2b38ec03933b8 |
python/cpython | python__cpython-132690 | # data race in PyMember_GetOne with _Py_T_OBJECT
When running the ctypes tests with parallel threads, ThreadSanitizer reports the following data race:
```console
WARNING: ThreadSanitizer: data race (pid=73865)
Read of size 8 at 0x7fadbc570760 by thread T1391:
#0 PyMember_GetOne /home/realkumaraditya/cpython/Python/structmember.c:88:13 (python+0x4deee7) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#1 member_get /home/realkumaraditya/cpython/Objects/descrobject.c:180:12 (python+0x207c4a) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#2 _PyObject_GenericGetAttrWithDict /home/realkumaraditya/cpython/Objects/object.c (python+0x2a8da2) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#3 PyObject_GenericGetAttr /home/realkumaraditya/cpython/Objects/object.c:1792:12 (python+0x2a8732) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#4 PyObject_GetAttr /home/realkumaraditya/cpython/Objects/object.c:1296:18 (python+0x2a70e7) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#5 _PyEval_EvalFrameDefault /home/realkumaraditya/cpython/Python/generated_cases.c.h:7710:30 (python+0x4111a8) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#6 _PyEval_EvalFrame /home/realkumaraditya/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x3f80ef) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#7 _PyEval_Vector /home/realkumaraditya/cpython/Python/ceval.c:1917:12 (python+0x3f80ef)
#8 PyEval_EvalCode /home/realkumaraditya/cpython/Python/ceval.c:829:21 (python+0x3f80ef)
#9 builtin_exec_impl /home/realkumaraditya/cpython/Python/bltinmodule.c:1158:17 (python+0x3f1937) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#10 builtin_exec /home/realkumaraditya/cpython/Python/clinic/bltinmodule.c.h:568:20 (python+0x3f1937)
#11 _PyEval_EvalFrameDefault /home/realkumaraditya/cpython/Python/generated_cases.c.h:2179:35 (python+0x3fff63) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#12 _PyEval_EvalFrame /home/realkumaraditya/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x3f85b0) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#13 _PyEval_Vector /home/realkumaraditya/cpython/Python/ceval.c:1917:12 (python+0x3f85b0)
#14 _PyFunction_Vectorcall /home/realkumaraditya/cpython/Objects/call.c (python+0x1f1a8f) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#15 _PyObject_VectorcallTstate /home/realkumaraditya/cpython/./Include/internal/pycore_call.h:169:11 (python+0x1f6460) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#16 method_vectorcall /home/realkumaraditya/cpython/Objects/classobject.c:94:18 (python+0x1f6460)
#17 _PyVectorcall_Call /home/realkumaraditya/cpython/Objects/call.c:273:16 (python+0x1f171f) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#18 _PyObject_Call /home/realkumaraditya/cpython/Objects/call.c:348:16 (python+0x1f171f)
#19 PyObject_Call /home/realkumaraditya/cpython/Objects/call.c:373:12 (python+0x1f1785) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#20 _PyEval_EvalFrameDefault /home/realkumaraditya/cpython/Python/generated_cases.c.h:2448:32 (python+0x400a72) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#21 _PyEval_EvalFrame /home/realkumaraditya/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x3f85b0) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#22 _PyEval_Vector /home/realkumaraditya/cpython/Python/ceval.c:1917:12 (python+0x3f85b0)
#23 _PyFunction_Vectorcall /home/realkumaraditya/cpython/Objects/call.c (python+0x1f1a8f) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#24 _PyObject_VectorcallTstate /home/realkumaraditya/cpython/./Include/internal/pycore_call.h:169:11 (python+0x1f63af) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#25 method_vectorcall /home/realkumaraditya/cpython/Objects/classobject.c:72:20 (python+0x1f63af)
#26 _PyObject_VectorcallTstate /home/realkumaraditya/cpython/./Include/internal/pycore_call.h:169:11 (python+0x450517) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#27 context_run /home/realkumaraditya/cpython/Python/context.c:728:29 (python+0x450517)
#28 _PyEval_EvalFrameDefault /home/realkumaraditya/cpython/Python/generated_cases.c.h:3507:35 (python+0x404c09) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#29 _PyEval_EvalFrame /home/realkumaraditya/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x3f85b0) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#30 _PyEval_Vector /home/realkumaraditya/cpython/Python/ceval.c:1917:12 (python+0x3f85b0)
#31 _PyFunction_Vectorcall /home/realkumaraditya/cpython/Objects/call.c (python+0x1f1a8f) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#32 _PyObject_VectorcallTstate /home/realkumaraditya/cpython/./Include/internal/pycore_call.h:169:11 (python+0x1f63af) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#33 method_vectorcall /home/realkumaraditya/cpython/Objects/classobject.c:72:20 (python+0x1f63af)
#34 _PyVectorcall_Call /home/realkumaraditya/cpython/Objects/call.c:273:16 (python+0x1f171f) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#35 _PyObject_Call /home/realkumaraditya/cpython/Objects/call.c:348:16 (python+0x1f171f)
#36 PyObject_Call /home/realkumaraditya/cpython/Objects/call.c:373:12 (python+0x1f1785) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#37 thread_run /home/realkumaraditya/cpython/./Modules/_threadmodule.c:353:21 (python+0x59f302) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#38 pythread_wrapper /home/realkumaraditya/cpython/Python/thread_pthread.h:242:5 (python+0x4f8be7) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
Previous write of size 8 at 0x7fadbc570760 by thread T1392:
#0 PyCData_GetContainer /home/realkumaraditya/cpython/./Modules/_ctypes/_ctypes.c:2840:29 (_ctypes.cpython-314t-x86_64-linux-gnu.so+0xbce8) (BuildId: add85487867eb2c9394b8c302c674d3f18d2bee0)
#1 KeepRef_lock_held /home/realkumaraditya/cpython/./Modules/_ctypes/_ctypes.c:2900:10 (_ctypes.cpython-314t-x86_64-linux-gnu.so+0x98f6) (BuildId: add85487867eb2c9394b8c302c674d3f18d2bee0)
#2 KeepRef /home/realkumaraditya/cpython/./Modules/_ctypes/_ctypes.c:2943:11 (_ctypes.cpython-314t-x86_64-linux-gnu.so+0x98f6)
#3 PyCData_set /home/realkumaraditya/cpython/./Modules/_ctypes/_ctypes.c:3505:12 (_ctypes.cpython-314t-x86_64-linux-gnu.so+0x17fdc) (BuildId: add85487867eb2c9394b8c302c674d3f18d2bee0)
#4 Array_ass_item_lock_held /home/realkumaraditya/cpython/./Modules/_ctypes/_ctypes.c:5094:12 (_ctypes.cpython-314t-x86_64-linux-gnu.so+0x17fdc)
#5 Array_ass_subscript_lock_held /home/realkumaraditya/cpython/./Modules/_ctypes/_ctypes.c:5127:16 (_ctypes.cpython-314t-x86_64-linux-gnu.so+0x179eb) (BuildId: add85487867eb2c9394b8c302c674d3f18d2bee0)
#6 Array_ass_subscript /home/realkumaraditya/cpython/./Modules/_ctypes/_ctypes.c:5171:14 (_ctypes.cpython-314t-x86_64-linux-gnu.so+0x179eb)
#7 PyObject_SetItem /home/realkumaraditya/cpython/Objects/abstract.c:235:19 (python+0x1b9598) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#8 _PyEval_EvalFrameDefault /home/realkumaraditya/cpython/Python/generated_cases.c.h:11172:27 (python+0x41b90a) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#9 _PyEval_EvalFrame /home/realkumaraditya/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x3f80ef) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#10 _PyEval_Vector /home/realkumaraditya/cpython/Python/ceval.c:1917:12 (python+0x3f80ef)
#11 PyEval_EvalCode /home/realkumaraditya/cpython/Python/ceval.c:829:21 (python+0x3f80ef)
#12 builtin_exec_impl /home/realkumaraditya/cpython/Python/bltinmodule.c:1158:17 (python+0x3f1937) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#13 builtin_exec /home/realkumaraditya/cpython/Python/clinic/bltinmodule.c.h:568:20 (python+0x3f1937)
#14 _PyEval_EvalFrameDefault /home/realkumaraditya/cpython/Python/generated_cases.c.h:2179:35 (python+0x3fff63) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#15 _PyEval_EvalFrame /home/realkumaraditya/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x3f85b0) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#16 _PyEval_Vector /home/realkumaraditya/cpython/Python/ceval.c:1917:12 (python+0x3f85b0)
#17 _PyFunction_Vectorcall /home/realkumaraditya/cpython/Objects/call.c (python+0x1f1a8f) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#18 _PyObject_VectorcallTstate /home/realkumaraditya/cpython/./Include/internal/pycore_call.h:169:11 (python+0x1f6460) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#19 method_vectorcall /home/realkumaraditya/cpython/Objects/classobject.c:94:18 (python+0x1f6460)
#20 _PyVectorcall_Call /home/realkumaraditya/cpython/Objects/call.c:273:16 (python+0x1f171f) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#21 _PyObject_Call /home/realkumaraditya/cpython/Objects/call.c:348:16 (python+0x1f171f)
#22 PyObject_Call /home/realkumaraditya/cpython/Objects/call.c:373:12 (python+0x1f1785) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#23 _PyEval_EvalFrameDefault /home/realkumaraditya/cpython/Python/generated_cases.c.h:2448:32 (python+0x400a72) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#24 _PyEval_EvalFrame /home/realkumaraditya/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x3f85b0) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#25 _PyEval_Vector /home/realkumaraditya/cpython/Python/ceval.c:1917:12 (python+0x3f85b0)
#26 _PyFunction_Vectorcall /home/realkumaraditya/cpython/Objects/call.c (python+0x1f1a8f) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#27 _PyObject_VectorcallTstate /home/realkumaraditya/cpython/./Include/internal/pycore_call.h:169:11 (python+0x1f63af) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#28 method_vectorcall /home/realkumaraditya/cpython/Objects/classobject.c:72:20 (python+0x1f63af)
#29 _PyObject_VectorcallTstate /home/realkumaraditya/cpython/./Include/internal/pycore_call.h:169:11 (python+0x450517) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#30 context_run /home/realkumaraditya/cpython/Python/context.c:728:29 (python+0x450517)
#31 _PyEval_EvalFrameDefault /home/realkumaraditya/cpython/Python/generated_cases.c.h:3507:35 (python+0x404c09) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#32 _PyEval_EvalFrame /home/realkumaraditya/cpython/./Include/internal/pycore_ceval.h:119:16 (python+0x3f85b0) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#33 _PyEval_Vector /home/realkumaraditya/cpython/Python/ceval.c:1917:12 (python+0x3f85b0)
#34 _PyFunction_Vectorcall /home/realkumaraditya/cpython/Objects/call.c (python+0x1f1a8f) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#35 _PyObject_VectorcallTstate /home/realkumaraditya/cpython/./Include/internal/pycore_call.h:169:11 (python+0x1f63af) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#36 method_vectorcall /home/realkumaraditya/cpython/Objects/classobject.c:72:20 (python+0x1f63af)
#37 _PyVectorcall_Call /home/realkumaraditya/cpython/Objects/call.c:273:16 (python+0x1f171f) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#38 _PyObject_Call /home/realkumaraditya/cpython/Objects/call.c:348:16 (python+0x1f171f)
#39 PyObject_Call /home/realkumaraditya/cpython/Objects/call.c:373:12 (python+0x1f1785) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#40 thread_run /home/realkumaraditya/cpython/./Modules/_threadmodule.c:353:21 (python+0x59f302) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
#41 pythread_wrapper /home/realkumaraditya/cpython/Python/thread_pthread.h:242:5 (python+0x4f8be7) (BuildId: f96651439547347e4e969de5af0d9edea510d7e3)
SUMMARY: ThreadSanitizer: data race /home/realkumaraditya/cpython/Python/structmember.c:88:13 in PyMember_GetOne
==================
```
The data race occurs because, when `_Py_T_OBJECT` is used, the field is currently read non-atomically and without a critical section. It needs to be loaded atomically, as is done for `Py_T_OBJECT_EX`.
<!-- gh-linked-prs -->
### Linked PRs
* gh-132690
<!-- /gh-linked-prs -->
| 7fd708b727fe19403726da6cb912b81768a96946 | e77d6784e754d9d656238bb5691534857598c000 |
python/cpython | python__cpython-132790 | # Enum `_missing_` function changes `__contains__` behaviour
# Bug report
### Bug description:
I observed different behavior for Enum `__contains__` between 3.13.2 and 3.13.3 when a `_missing_` method is implemented:
```python
from enum import Enum
class Color(Enum):
RED = 1
@classmethod
def _missing_(cls, value):
return cls.RED
# 3.13.2
>>> "blue" in Color
False
# 3.13.3
>>> "blue" in Color
True
```
IMO `_missing_` should only influence the `__call__` method:
```python
# both 3.13.2 and 3.13.3
>>> Color("blue")
<Color.RED: 1>
```
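Until this is resolved, a hedged workaround to get the strict 3.13.2-style membership test regardless of `_missing_` is to compare against the declared member values directly (the helper name is mine):

```python
from enum import Enum


class Color(Enum):
    RED = 1

    @classmethod
    def _missing_(cls, value):
        return cls.RED


def is_member_value(enum_cls, value):
    # Strict membership check that ignores _missing_: only the values
    # declared on the enum count as members.
    return any(member.value == value for member in enum_cls)


print(is_member_value(Color, 1))       # True
print(is_member_value(Color, "blue"))  # False, even though Color("blue") succeeds
```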
### CPython versions tested on:
3.13
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-132790
* gh-132896
<!-- /gh-linked-prs -->
| 22bc953aa9be3039629dd1315f856d2522619412 | 63da5cc1504f066b31374027f637b4b021445d6b |
python/cpython | python__cpython-132679 | # Add --prioritize to regrtest
From the added help:
```
--prioritize TEST1,TEST2,...
select these tests first, even if the order is randomized.
--prioritize is used to influence the order of selected tests, such that
the tests listed as an argument are executed first. This is especially
useful when combined with -j and -r to pin the longest-running tests
to start at the beginning of a test run. Pass --prioritize=test_a,test_b
to make test_a run first, followed by test_b, and then the other tests.
If test_a wasn't selected for execution by regular means, --prioritize will
not make it execute.
```
This is useful in general, but in particular for bigmem tests on buildbots where `test_bigmem`, `test_lzma`, and `test_bz2` can take over 30 minutes each. Guaranteeing that the test run starts with those shortens the overall runtime of tests, which is otherwise pretty unpredictable due to `-r`.
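The described semantics (prioritized tests move to the front of the run, but only if they were selected by regular means, while the rest keep their randomized order) can be sketched as follows. This is an illustration, not the actual regrtest implementation:

```python
import random


def order_tests(selected, prioritized, seed=None):
    # Prioritized tests (in the order given) go first, but only if they
    # were already selected; everything else keeps a randomized order.
    rest = [t for t in selected if t not in prioritized]
    random.Random(seed).shuffle(rest)
    return [t for t in prioritized if t in selected] + rest


print(order_tests(["test_os", "test_bigmem", "test_lzma"],
                  ["test_bigmem", "test_lzma", "test_absent"],
                  seed=0))
# ['test_bigmem', 'test_lzma', 'test_os'] -- test_absent was not selected,
# so --prioritize does not make it execute.
```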
<!-- gh-linked-prs -->
### Linked PRs
* gh-132679
<!-- /gh-linked-prs -->
| a594008d9e6c4d37ff6fd698395cd318d7ec3300 | d134bd272f90bfc2dfb55b126d4552c996251fc1 |
python/cpython | python__cpython-132675 | # Compiler warnings on free-threaded build for `_hashopenssl.c`
# Bug report
### Bug description:
I get the following warnings since https://github.com/python/cpython/pull/128886, which was backported to 3.13.
```text
./Modules/_hashopenssl.c:416:69: warning: passing 'const EVP_MD *' (aka 'const struct evp_md_st *') to parameter of type 'void *' discards qualifiers [-Wincompatible-pointer-types-discards-qualifiers]
416 | other_digest = _Py_atomic_exchange_ptr(&entry->evp, digest);
| ^~~~~~
./Include/cpython/pyatomic_gcc.h:194:42: note: passing argument to parameter 'value' here
194 | _Py_atomic_exchange_ptr(void *obj, void *value)
| ^
./Modules/_hashopenssl.c:428:80: warning: passing 'const EVP_MD *' (aka 'const struct evp_md_st *') to parameter of type 'void *' discards qualifiers [-Wincompatible-pointer-types-discards-qualifiers]
428 | other_digest = _Py_atomic_exchange_ptr(&entry->evp_nosecurity, digest);
| ^~~~~~
./Include/cpython/pyatomic_gcc.h:194:42: note: passing argument to parameter 'value' here
194 | _Py_atomic_exchange_ptr(void *obj, void *value)
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-132675
* gh-132677
<!-- /gh-linked-prs -->
| 2df0f8804701cc17674e5b4e90499e9fac71d0e1 | 379352620ce4d77f7248939a4cf211db48fdd241 |
python/cpython | python__cpython-132676 | # `_align_ = 0` segfaults when used with an empty field list (`_fields_ = []`)
# Crash report
### What happened?
Tested on the latest `ubuntu:25.04` docker image.
```python
from ctypes import Structure
class MyStructure(Structure):
_align_ = 0
_fields_ = []
```
Crashes with `Floating point exception` on CPython 3.13.
I'm attaching both gdb core dump and backtrace: [dump.zip](https://github.com/user-attachments/files/19811098/dump.zip)
The problem seems to be caused by [_ctypes/stgdict.c:573](https://github.com/python/cpython/blob/7d4c00e7c50f9702be7319e0dc1557d925fbd555/Modules/_ctypes/stgdict.c#L573C29-L573C40):
`aligned_size = ((size + total_align - 1) / total_align) * total_align;`
Also tested on CPython 3.14, which doesn't segfault but fails an assert in [ctypes/_layout.py:19](https://github.com/python/cpython/blob/77b2c933cab4f38a8ce1f7633b96ba213566d306/Lib/ctypes/_layout.py#L19):
`assert multiple > 0`
called by [ctypes/_layout.py:314](https://github.com/python/cpython/blob/77b2c933cab4f38a8ce1f7633b96ba213566d306/Lib/ctypes/_layout.py#L314):
`aligned_size = round_up(total_size, align)`
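Both code paths compute a rounded-up size with the alignment as divisor, so a zero alignment divides by zero. For illustration, a guarded version of that computation that rejects a non-positive alignment up front (a sketch, not the proposed fix):

```python
def round_up(size: int, align: int) -> int:
    # Round size up to the nearest multiple of align; a non-positive
    # alignment raises instead of dividing by zero or asserting.
    if align <= 0:
        raise ValueError(f"alignment must be positive, got {align}")
    return ((size + align - 1) // align) * align


print(round_up(10, 8))  # 16
```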
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.13.3 (main, Apr 8 2025, 19:55:40) [GCC 14.2.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-132676
* gh-132695
<!-- /gh-linked-prs -->
| 678b8e165646e72a2f39f017ad237210d3d481ef | 40ae88988c61638ee8625e5c0ee73606ede307bb |
python/cpython | python__cpython-132669 | # Outdated description in `library/dis.rst`
# Documentation
In the [`POP_JUMP_IF_NOT_NONE` documentation](https://docs.python.org/3/library/dis.html#opcode-POP_JUMP_IF_NOT_NONE) and the `POP_JUMP_IF_NONE` documentation, the part "This opcode is a pseudo-instruction, ..." is now incorrect, as mentioned in the immediately following note.
<!-- gh-linked-prs -->
### Linked PRs
* gh-132669
* gh-132680
<!-- /gh-linked-prs -->
| 7e2672cfcf993e957c9966a88931fe6571affd24 | 2df0f8804701cc17674e5b4e90499e9fac71d0e1 |
python/cpython | python__cpython-132650 | # PC/layout script forcibly disables --include-tcltk on ARM64
# Bug report
This was important back before Tcl/Tk would build on ARM64, but we should allow including it now.
<!-- gh-linked-prs -->
### Linked PRs
* gh-132650
* gh-132656
<!-- /gh-linked-prs -->
| b87189deae7cdd65083da60cf3ba6e5bba117663 | cf59bc3ae7d34060e55542b6df6786aa2d9a457c |
python/cpython | python__cpython-132653 | # Possible data race between specialize_attr_loadclassattr and ensure_nonmanaged_dict under free-threading
# Bug report
### Bug description:
I built the main branch and observed the following races under free-threading in CPython 3.14 (Python 3.14.0a7+ experimental free-threading build (heads/main:e42bda94411, Apr 17 2025, 14:08:39) [Clang 18.1.3 (1ubuntu1)])
<details>
<summary>
Race
</summary>
```
==================
WARNING: ThreadSanitizer: data race (pid=921410)
Read of size 8 at 0x7fffd22b01b8 by thread T13 (mutexes: read M0):
#0 specialize_attr_loadclassattr /project/cpython/Python/specialize.c:1644:30 (python3.14+0x4ddf68) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
#1 do_specialize_instance_load_attr /project/cpython/Python/specialize.c:1163:21 (python3.14+0x4d9fa1) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
#2 specialize_instance_load_attr /project/cpython/Python/specialize.c:1340:18 (python3.14+0x4d9fa1)
#3 _Py_Specialize_LoadAttr /project/cpython/Python/specialize.c:1368:16 (python3.14+0x4d9fa1)
#4 _PyEval_EvalFrameDefault /project/cpython/Python/generated_cases.c.h:7674:21 (python3.14+0x410c34) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
#5 _PyEval_EvalFrame /project/cpython/./Include/internal/pycore_ceval.h:119:16 (python3.14+0x3f80f0) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
#6 _PyEval_Vector /project/cpython/Python/ceval.c:1917:12 (python3.14+0x3f80f0)
...
Previous atomic write of size 8 at 0x7fffd22b01b8 by thread T8 (mutexes: read M0):
#0 _Py_atomic_store_ptr_release /project/cpython/./Include/cpython/pyatomic_gcc.h:565:3 (python3.14+0x283dbd) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
#1 ensure_nonmanaged_dict /project/cpython/Objects/dictobject.c:7496:9 (python3.14+0x283dbd)
#2 _PyObjectDict_SetItem /project/cpython/Objects/dictobject.c:7532:12 (python3.14+0x283e8e) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
#3 _PyObject_GenericSetAttrWithDict /project/cpython/Objects/object.c:1872:19 (python3.14+0x2aa914) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
#4 PyObject_GenericSetAttr /project/cpython/Objects/object.c:1900:12 (python3.14+0x2ab1d7) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
#5 PyObject_SetAttr /project/cpython/Objects/object.c:1450:15 (python3.14+0x2a7ada) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
#6 _PyEval_EvalFrameDefault /project/cpython/Python/generated_cases.c.h:10653:27 (python3.14+0x4197db) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
#7 _PyEval_EvalFrame /project/cpython/./Include/internal/pycore_ceval.h:119:16 (python3.14+0x3f80f0) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
#8 _PyEval_Vector /project/cpython/Python/ceval.c:1917:12 (python3.14+0x3f80f0)
```
</details>
Full report: https://gist.github.com/vfdev-5/18a83532a8589b02cd01ac21342727f6
cc @hawkinsp
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-132653
<!-- /gh-linked-prs -->
| f3d877a27abca355f9d05decf3e2ce0874983288 | 80295a8f9b624c8d962b1df6bc9b89049e11bcf5 |
python/cpython | python__cpython-133825 | # Document how to format a `timedelta` in human-readable form
### Bug description:
```python
from datetime import datetime, timezone  # imports added for completeness
from zoneinfo import ZoneInfo

dt_utc = datetime.now(timezone.utc).replace(microsecond=1)
dt_utc_str = dt_utc.strftime("%Y-%m-%d %H:%M:%S")
print(f"timenow : dt_utc_str : {dt_utc_str}")
if self.timezone:
print(f"timenow : timezone : {self.timezone}")
# dt_event = dt_utc.replace(tzinfo=ZoneInfo(self.timezone))
dt_event = dt_utc.astimezone(ZoneInfo(self.timezone))
dt_event_str = dt_event.strftime("%Y-%m-%d %H:%M:%S")
print(f"timenow : dt_event_str : {dt_event_str}")
tz_offset_ = dt_event.utcoffset()
tz_offset_str = str(tz_offset_)
print(f"timenow : tzoffstr : {tz_offset_str}")
print(
f"timenow : tz_offset_ : {tz_offset_} | type : {type(tz_offset_)}"
)
# error : invalid literal ... -1 day, 20:00:00 [workaround]
# parse timezone string to decimal hours
parts = [p for p in tz_offset_str.split(",") if p]
days = int(parts[0].split()[0]) if "day" in parts[0] else 0
h, m, s = map(int, parts[-1].strip().split(":"))
# convert to decimal
self.tz_offset = days * 24 + h + m / 60 + s / 3600
print(f"timenow : tz_offset : {self.tz_offset}")
```
- terminal : expected / correct
```
timenow : dt_utc_str : 2025-04-17 15:46:10
timenow : timezone : Europe/Vienna
timenow : dt_event_str : 2025-04-17 17:46:10
timenow : tzoffstr : 2:00:00
timenow : tz_offset_ : 2:00:00 | type : <class 'datetime.timedelta'>
timenow : tz_offset : 2.0
```
- terminal : unexpected / wrong
```
timenow : dt_utc_str : 2025-04-17 15:46:36
timenow : timezone : America/Lima
timenow : dt_event_str : 2025-04-17 10:46:36
timenow : tzoffstr : -1 day, 19:00:00
timenow : tz_offset_ : -1 day, 19:00:00 | type : <class 'datetime.timedelta'>
timenow : tz_offset : -5.0
```
It seems timezones at western longitudes return a '-1 day, hh:mm:ss' offset.
Tested with the USA, Peru, Chile, Venezuela: all western (negative) longitudes return '-1 day...'.
With Slovenia, Austria, China, Australia it works as expected; no 'day' appears in the result.
The workaround I am using (bottom of the Python code above) works properly: tz_offset is correct.
Basically it is -24 (hours) + hh:mm:ss (as a decimal number).
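The `-1 day, hh:mm:ss` display is the documented normalized form of a negative `timedelta` (only `days` may be negative). Rather than parsing the string, `total_seconds()` gives the decimal offset directly and already accounts for the negative day (a sketch; the helper name is mine):

```python
from datetime import timedelta


def offset_hours(delta: timedelta) -> float:
    # total_seconds() handles the normalized negative-days form,
    # so no string parsing of str(timedelta) is needed.
    return delta.total_seconds() / 3600


print(offset_hours(timedelta(hours=2)))            # 2.0
print(offset_hours(timedelta(days=-1, hours=19)))  # -5.0
```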
os : Linux Mint 22 (latest, greatest)
python : 3.11.12 in a virtual env (Python 3.11.12 (main, Apr 9 2025, 08:55:55) [GCC 13.3.0] on linux)
guest OS (with the dev venv) is running in a virtual machine on an identical host (Mint 22)
have fun
### CPython versions tested on:
3.11
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-133825
* gh-133836
* gh-133837
<!-- /gh-linked-prs -->
| efcc42ba70fb09333a2be16401da731662e2984b | 76c0b01bc401c3e976011bbc69cec56dbebe0ad5 |
python/cpython | python__cpython-133787 | # Data race between compare_generic and insert_combined_dict under free-threading
# Bug report
### Bug description:
I built the main branch and observed the following races under free-threading in CPython 3.14 (Python 3.14.0a7+ experimental free-threading build (heads/main:e42bda94411, Apr 17 2025, 14:08:39) [Clang 18.1.3 (1ubuntu1)])
<details>
<summary>
Race 1
</summary>
```
WARNING: ThreadSanitizer: data race (pid=33872)
Read of size 8 at 0x7fffc22d7ad8 by thread T5 (mutexes: read M0):
#0 compare_generic /project/cpython/Objects/dictobject.c:1107:13 (python3.14+0x26edbc) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
#1 do_lookup /project/cpython/Objects/dictobject.c:1013:23 (python3.14+0x26edbc)
#2 dictkeys_generic_lookup /project/cpython/Objects/dictobject.c:1132:12 (python3.14+0x26edbc)
#3 _Py_dict_lookup /project/cpython/Objects/dictobject.c:1298:14 (python3.14+0x26edbc)
#4 _PyDict_GetItemRef_KnownHash_LockHeld /project/cpython/Objects/dictobject.c:2330:21 (python3.14+0x272a14) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
...
Previous atomic write of size 8 at 0x7fffc22d7ad8 by thread T6 (mutexes: read M0):
#0 _Py_atomic_store_ptr_release /project/cpython/./Include/cpython/pyatomic_gcc.h:565:3 (python3.14+0x284d66) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
#1 insert_combined_dict /project/cpython/Objects/dictobject.c:1748:9 (python3.14+0x284d66)
#2 insertdict /project/cpython/Objects/dictobject.c:1854:13 (python3.14+0x274254) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
#3 _PyDict_SetItem_KnownHash_LockHeld /project/cpython/Objects/dictobject.c:2656:12 (python3.14+0x273be9) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
#4 _PyDict_SetItem_KnownHash /project/cpython/Objects/dictobject.c:2673:11 (python3.14+0x274635) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
...
```
</details>
<details>
<summary>
Race 2
</summary>
```
WARNING: ThreadSanitizer: data race (pid=33872)
Read of size 8 at 0x7fffc22d7ad0 by thread T5 (mutexes: read M0):
#0 compare_generic /project/cpython/Objects/dictobject.c:1110:13 (python3.14+0x26edd5) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
#1 do_lookup /project/cpython/Objects/dictobject.c:1013:23 (python3.14+0x26edd5)
#2 dictkeys_generic_lookup /project/cpython/Objects/dictobject.c:1132:12 (python3.14+0x26edd5)
#3 _Py_dict_lookup /project/cpython/Objects/dictobject.c:1298:14 (python3.14+0x26edd5)
#4 _PyDict_GetItemRef_KnownHash_LockHeld /project/cpython/Objects/dictobject.c:2330:21 (python3.14+0x272a14) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
...
Previous atomic write of size 8 at 0x7fffc22d7ad0 by thread T6 (mutexes: read M0):
#0 insert_combined_dict /project/cpython/Objects/dictobject.c (python3.14+0x284d8c) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
#1 insertdict /project/cpython/Objects/dictobject.c:1854:13 (python3.14+0x274254) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
#2 _PyDict_SetItem_KnownHash_LockHeld /project/cpython/Objects/dictobject.c:2656:12 (python3.14+0x273be9) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
#3 _PyDict_SetItem_KnownHash /project/cpython/Objects/dictobject.c:2673:11 (python3.14+0x274635) (BuildId: d491981d76fd0b67bcb5e01a978d07da019800b8)
...
```
</details>
Full report: https://gist.github.com/vfdev-5/cbb9189043737d023b755191b62951cf
cc @hawkinsp
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-133787
* gh-133979
<!-- /gh-linked-prs -->
| 9ad0c7b0f14c5fcda6bfae6692c88abb95502d38 | 35f47d05893e012e9f2b145b934c1d8c61d2bb7d |
python/cpython | python__cpython-132640 | # Add PyLong_AsNativeBytes and PyLong_FromNativeBytes to the stable ABI
# Feature or enhancement
As was originally planned. These have been available since 3.13, and while they are most prominently used in Cython, they have been broadly validated as safe enough for addition to the stable ABI.
<!-- gh-linked-prs -->
### Linked PRs
* gh-132640
<!-- /gh-linked-prs -->
| 09b624b80f54e1f97812981cfff9fa374bd5360f | 70b322d3138311df53bc9fc57f174c8e9bdc2ab5 |
python/cpython | python__cpython-134815 | # `dict.update()` mutation check too broad
# Bug report
The [`dict.update()`](https://docs.python.org/3/library/stdtypes.html#dict.update) modification check can be erroneously triggered by modifications to *different* dictionaries that happen to [share the underlying keys objects](https://peps.python.org/pep-0412/).
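The shared-keys mechanism from PEP 412 means that the `__dict__` namespaces of instances of the same class draw on a single keys object. A rough way to see that both instance dicts are of the key-sharing kind (the exact sizes are CPython implementation details and may vary):

```python
import sys

class Foo:
    pass

a, b = Foo(), Foo()
a.x = 1
b.x = 2

# Both instance dicts reference the keys object owned by the class,
# so their reported sizes match.
print(sys.getsizeof(a.__dict__) == sys.getsizeof(b.__dict__))  # True
```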
For example, in the following program, the creation and modification of the `f2` object can lead to an incorrect `RuntimeError` being raised in the main thread, even though it operates on distinct dictionaries.
```python
import time
import threading
b = threading.Barrier(2)
class Foo:
pass
class MyStr(str):
def __hash__(self):
return super().__hash__()
def __eq__(self, other):
time.sleep(0.1)
return super().__eq__(other)
def thread():
b.wait()
time.sleep(0.05)
f2 = Foo()
f2.a = "a"
f2.b = "b"
f2.c = "c"
def main():
t1 = threading.Thread(target=thread)
t1.start()
b.wait()
f1 = Foo()
f1.a = "a"
f1.b = "b"
x = {}
x[MyStr("a")] = MyStr("a")
x.update(f1.__dict__)
t1.join()
if __name__ == "__main__":
main()
```
```
Traceback (most recent call last):
File "/home/sgross/cpython/bad.py", line 44, in <module>
main()
~~~~^^
File "/home/sgross/cpython/bad.py", line 39, in main
x.update(f1.__dict__)
~~~~~~~~^^^^^^^^^^^^^
RuntimeError: dict mutated during update
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-134815
* gh-135581
* gh-135582
<!-- /gh-linked-prs -->
| d8994b0a77cc9821772d05db00a6ab23382fa17d | 4c15505071498439407483004721d0369f110229 |
python/cpython | python__cpython-132609 | # A sample code for `ast.While` should be colored correctly like the other examples.
# Documentation
A sample code for [ast.While](https://docs.python.org/3/library/ast.html#ast.While) currently looks like this:

Currently, all of the sample code is displayed in gray, with no syntax highlighting for the `print` calls or the string literal.
The sample code should be colored correctly like the other examples, such as the one for [ast.For](https://docs.python.org/3/library/ast.html#ast.For):

# Amendment proposal
I don't know the `reStructuredText` syntax well, but I find it odd that the prompt has only two `>` characters instead of three:
https://github.com/python/cpython/blob/f9578dc31fec95e0bf33ba2214554592182202f8/Doc/library/ast.rst?plain=1#L1188
I think that if we make the prompt three `>` characters, like in the other sample code, it will be colored correctly:
```diff
- >> print(ast.dump(ast.parse("""
+ >>> print(ast.dump(ast.parse("""
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-132609
* gh-132612
<!-- /gh-linked-prs -->
| f5512a2498d0d99197f4f12b37d004fdf6deec85 | 25717ff4bfcd5621eddb9439c690da8fa206ea10 |
python/cpython | python__cpython-132586 | # The new `multiprocessing.[R]Lock.locked()` method fails.
# Bug report
### Bug description:
Maybe I didn't quite understand what this feature does, but I think there's a bug when using the `locked()` method with a `multiprocessing.[R]Lock`.
Here is an example:
```python
import multiprocessing as mp
def acq(lock, event):
lock.acquire()
print(f'Acq: {lock = }')
print(f'Acq: {lock.locked() = }')
event.set()
def main():
lock = mp.Lock()
event = mp.Event()
p = mp.Process(target=acq, args=(lock, event))
p.start()
event.wait()
print(f'Main: {lock = }')
print(f'Main: {lock.locked() = }')
if __name__ == "__main__":
mp.freeze_support()
main()
```
output is:
```
Acq: lock = <Lock(owner=Process-1)>
Acq: lock.locked() = True
Main: lock = <Lock(owner=SomeOtherProcess)>
Main: lock.locked() = False
```
In the `locked` method, the call to `self._semlock._count() != 0` is not appropriate. The internal `count` attribute is really used with `multiprocessing.RLock` to count the number of reentrant calls to `acquire` made by the current thread.
With `multiprocessing.Lock`, this `count` is set to 1 when the lock is acquired (only once).
In any case, only other threads can observe this value, not other processes sharing the `[R]Lock`.
IMO the test should be replaced with `self._semlock._is_zero()`, and the example above should also be added as a unit test.
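For reference, the difference between the two internal probes can be seen within a single process. This relies on private `_semlock` internals, so treat it as a sketch rather than supported API:

```python
import multiprocessing as mp

lock = mp.Lock()
lock.acquire()
# _count() tracks acquisitions made by the current process/thread only,
# while _is_zero() inspects the shared semaphore value itself.
print(lock._semlock._count())    # 1
print(lock._semlock._is_zero())  # True (the semaphore is held)
lock.release()
print(lock._semlock._count())    # 0
print(lock._semlock._is_zero())  # False
```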
<!-- gh-linked-prs -->
### Linked issue/PR
* gh-115942
* gh-115944
<!-- /gh-linked-prs -->
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
macOS
<!-- gh-linked-prs -->
### Linked PRs
* gh-132586
<!-- /gh-linked-prs -->
| 15c75d7a8b86d9f76982f70a00b69b403458694f | 0c356c865a6d3806724f54d6d463b2e5289f6afa |
python/cpython | python__cpython-132663 | # Mysterious /home/buildbot/.debug/ directory is filling the disk of AArch64 Fedora buildbots
On two AArch64 Fedora buildbot workers, there is a mysterious `/home/buildbot/.debug/` directory containing more than 70 GB of random files. I don't know what created these files, but they are filling the disk.
It may be related to **SystemTap**. cc @stratakis
Example of content:
```
./home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.refleak/build/python/fbfc55b13cd2fb6f973ac5d4bfc38b68aa929f4a/elf
./home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.refleak/build/python/fbfc55b13cd2fb6f973ac5d4bfc38b68aa929f4a/probes
./home/buildbot/buildarea/3.13.cstratak-fedora-stable-aarch64.refleak/build/python/208beaba1427bf4f121c032812f921674d83b21f/elf
./home/buildbot/buildarea/3.13.cstratak-fedora-stable-aarch64.refleak/build/python/208beaba1427bf4f121c032812f921674d83b21f/probes
./[vdso]/86276518cf2d3106081e1ff7639ae9f7cd5aa490/vdso
./[vdso]/86276518cf2d3106081e1ff7639ae9f7cd5aa490/probes
./usr/lib/ld-linux-aarch64.so.1/c11d21d5efb3dd09d66a0ba3ffc35d8b7ab98d34/elf
./usr/lib/ld-linux-aarch64.so.1/c11d21d5efb3dd09d66a0ba3ffc35d8b7ab98d34/probes
./usr/lib64/libc.so.6/0b114a16c11a30fd7f69460c844769304ca9ebaf/elf
./usr/lib64/libc.so.6/0b114a16c11a30fd7f69460c844769304ca9ebaf/probes
./[kernel.kallsyms]/114df0fd4cb802c7a9b4223c4a7983ab97859116/kallsyms
./tmp/jitted-3349028-1.so/5c050d46136b7270551a62635decd0bb00000000/elf
./tmp/jitted-3349028-1.so/5c050d46136b7270551a62635decd0bb00000000/probes
./tmp/jitted-3349028-10.so/034c1d21c2ab0972df3fe9c93e2df4db00000000/elf
./tmp/jitted-3349028-10.so/034c1d21c2ab0972df3fe9c93e2df4db00000000/probes
./tmp/jitted-3349028-100.so/621026082183074121bc8cfb3a02dbee00000000/elf
./tmp/jitted-3349028-100.so/621026082183074121bc8cfb3a02dbee00000000/probes
./tmp/jitted-3349028-101.so/8706dac7faf88b026822b7f57d6af08e00000000/elf
./tmp/jitted-3349028-101.so/8706dac7faf88b026822b7f57d6af08e00000000/probes
./tmp/jitted-3349028-102.so/c02b3413eafc593a51e198ffbe1c784000000000/elf
./tmp/jitted-3349028-102.so/c02b3413eafc593a51e198ffbe1c784000000000/probes
./tmp/jitted-3349028-103.so/c3899fdebda5e7c6b4fd9a68b181201500000000/elf
./tmp/jitted-3349028-103.so/c3899fdebda5e7c6b4fd9a68b181201500000000/probes
```
There are 4 different file names:
```
root@localhost:/home/buildbot# find .debug -type f|sed -e 's!.*/\(.*\)!\1!g'|sort -u
elf
kallsyms
probes
vdso
```
`elf` and `vdso` are AArch64 ELF files.
`probes` and `kallsyms` are text files.
`kallsyms` seems to be a copy of `/proc/kallsyms`.
Example of `probes` file:
```
root@localhost:/home/buildbot/.debug# cat ./usr/lib/ld-linux-aarch64.so.1/c11d21d5efb3dd09d66a0ba3ffc35d8b7ab98d34/probes
%sdt_rtld:unmap_start=unmap_start
p:sdt_rtld/unmap_start /usr/lib/ld-linux-aarch64.so.1:0x1bd4 arg1=%x22:s64 arg2=%x19:u64
%sdt_rtld:unmap_complete=unmap_complete
p:sdt_rtld/unmap_complete /usr/lib/ld-linux-aarch64.so.1:0x1e98 arg1=%x22:s64 arg2=%x19:u64
%sdt_rtld:map_start=map_start
p:sdt_rtld/map_start /usr/lib/ld-linux-aarch64.so.1:0x6970 arg1=%x23:s64 arg2=%x20:u64
%sdt_rtld:reloc_start=reloc_start
p:sdt_rtld/reloc_start /usr/lib/ld-linux-aarch64.so.1:0xa124 arg3=%x1:u64
%sdt_rtld:map_complete=map_complete
p:sdt_rtld/map_complete /usr/lib/ld-linux-aarch64.so.1:0xa9ec arg3=%x23:u64 arg4=%x19:u64
%sdt_rtld:reloc_complete=reloc_complete
p:sdt_rtld/reloc_complete /usr/lib/ld-linux-aarch64.so.1:0xadb8 arg3=%x23:u64 arg4=%x19:u64
%sdt_rtld:init_start=init_start
p:sdt_rtld/init_start /usr/lib/ld-linux-aarch64.so.1:0x18104 arg2=%x23:u64
%sdt_rtld:init_complete=init_complete
p:sdt_rtld/init_complete /usr/lib/ld-linux-aarch64.so.1:0x18548 arg2=%x19:u64
%sdt_rtld:lll_lock_wait_private=lll_lock_wait_private
p:sdt_rtld/lll_lock_wait_private /usr/lib/ld-linux-aarch64.so.1:0x1c830 arg1=%x19:u64
%sdt_rtld:lll_lock_wait=lll_lock_wait
p:sdt_rtld/lll_lock_wait /usr/lib/ld-linux-aarch64.so.1:0x1c8b4 arg1=%x19:u64
%sdt_rtld:setjmp=setjmp
p:sdt_rtld/setjmp /usr/lib/ld-linux-aarch64.so.1:0x1cb28 arg1=%x0:u64 arg2=%x1:s32
%sdt_rtld:longjmp=longjmp
p:sdt_rtld/longjmp /usr/lib/ld-linux-aarch64.so.1:0x1cba8 arg1=%x0:u64 arg2=%x1:s32
%sdt_rtld:longjmp_target=longjmp_target
p:sdt_rtld/longjmp_target /usr/lib/ld-linux-aarch64.so.1:0x1cbd0 arg1=%x0:u64 arg2=%x1:s32
```
<!-- gh-linked-prs -->
### Linked PRs
* gh-132663
* gh-132681
* gh-132718
<!-- /gh-linked-prs -->
| e01e5829020e517eb68a47da4dd65926a9d144de | 7e2672cfcf993e957c9966a88931fe6571affd24 |
python/cpython | python__cpython-132616 | # Segfault/abort from calling `BytesIO` `unshare_buffer` in threads on a free-threaded build
# Crash report
### What happened?
This seems to be almost the same issue as https://github.com/python/cpython/issues/111174, but for free-threaded builds.
In a free-threaded build it's possible to segfault (rare on a debug build) or abort (common on a debug build) the interpreter with the following code (which might not be minimal; I'll try to reduce it further later today):
```python
from io import BytesIO
from threading import Thread
from time import sleep
def call_getbuffer(obj: BytesIO) -> None:
obj.getvalue()
obj.getbuffer()
obj.getbuffer()
sleep(0.001)
obj.getbuffer()
obj.getbuffer()
obj.getbuffer()
sleep(0.006)
obj.getbuffer()
obj.getvalue()
for x in range(100):
alive = []
obj = BytesIO()
for x in range(50):
alive.append(Thread(target=call_getbuffer, args=(obj,)))
alive.append(Thread(target=call_getbuffer, args=(obj,)))
alive.append(Thread(target=call_getbuffer, args=(obj,)))
alive.append(Thread(target=obj.__exit__, args=(None, None, None)))
alive.append(Thread(target=call_getbuffer, args=(obj,)))
alive.append(Thread(target=call_getbuffer, args=(obj,)))
alive.append(Thread(target=call_getbuffer, args=(obj,)))
for t in alive:
t.start()
for t in alive:
t.join()
```
Segfault backtrace 1 (gcc, release):
```
Thread 613 "Thread-612 (cal" received signal SIGSEGV, Segmentation fault.
bytesiobuf_getbuffer (op=0x48ec60f00f0, view=0x48ec61101b0, flags=284) at ./Modules/_io/bytesio.c:1090
1090 if (b->exports == 0 && SHARED_BUF(b)) {
#0 bytesiobuf_getbuffer (op=0x48ec60f00f0, view=0x48ec61101b0, flags=284) at ./Modules/_io/bytesio.c:1090
#1 0x00005555556c6519 in _PyManagedBuffer_FromObject (flags=284, base=0x48ec60f00f0)
at Objects/memoryobject.c:97
#2 PyMemoryView_FromObjectAndFlags (flags=284, v=0x48ec60f00f0) at Objects/memoryobject.c:813
#3 PyMemoryView_FromObject (v=v@entry=0x48ec60f00f0) at Objects/memoryobject.c:856
#4 0x00005555558b98bf in _io_BytesIO_getbuffer_impl (cls=<optimized out>, self=0x48ebe9f8020)
at ./Modules/_io/bytesio.c:337
#5 _io_BytesIO_getbuffer (self=0x48ebe9f8020, cls=<optimized out>, args=<optimized out>,
nargs=<optimized out>, kwnames=<optimized out>) at ./Modules/_io/clinic/bytesio.c.h:103
#6 0x0000555555652cb7 in _PyObject_VectorcallTstate (kwnames=<optimized out>, nargsf=<optimized out>,
args=<optimized out>, callable=0x48ebe3e6ae0, tstate=0x555555c0bb70)
at ./Include/internal/pycore_call.h:169
#7 PyObject_Vectorcall (callable=0x48ebe3e6ae0, args=<optimized out>, nargsf=<optimized out>,
kwnames=<optimized out>) at Objects/call.c:327
#8 0x00005555555e9486 in _PyEval_EvalFrameDefault (tstate=<optimized out>, frame=<optimized out>,
throwflag=<optimized out>) at Python/generated_cases.c.h:3850
#9 0x00005555557d01de in _PyEval_EvalFrame (throwflag=0, frame=<optimized out>, tstate=0x555555c0bb70)
at ./Include/internal/pycore_ceval.h:119
#10 _PyEval_Vector (tstate=0x555555c0bb70, func=0x48ebe713f00, locals=0x0, args=0x7fff85791958,
argcount=<optimized out>, kwnames=<optimized out>) at Python/ceval.c:1913
#11 0x0000555555656b23 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=1, args=0x7fff85791958,
callable=0x48ebe713f00, tstate=0x555555c0bb70) at ./Include/internal/pycore_call.h:169
#12 method_vectorcall (method=<optimized out>, args=0x7fff85791c68, nargsf=<optimized out>, kwnames=0x0)
at Objects/classobject.c:72
#13 0x00005555557eeac6 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=0, args=0x7fff85791c68,
callable=0x48ec60d0100, tstate=0x555555c0bb70) at ./Include/internal/pycore_call.h:169
#14 context_run (self=0x48ebea12fc0, args=<optimized out>, nargs=<optimized out>, kwnames=<optimized out>)
at Python/context.c:728
#15 0x00005555555ee959 in _PyEval_EvalFrameDefault (tstate=<optimized out>, frame=<optimized out>,
throwflag=<optimized out>) at Python/generated_cases.c.h:3521
#16 0x00005555557d01de in _PyEval_EvalFrame (throwflag=0, frame=<optimized out>, tstate=0x555555c0bb70)
at ./Include/internal/pycore_ceval.h:119
#17 _PyEval_Vector (tstate=0x555555c0bb70, func=0x48ebe713fc0, locals=0x0, args=0x7fff85791da8,
argcount=<optimized out>, kwnames=<optimized out>) at Python/ceval.c:1913
#18 0x0000555555656b23 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=1, args=0x7fff85791da8,
callable=0x48ebe713fc0, tstate=0x555555c0bb70) at ./Include/internal/pycore_call.h:169
#19 method_vectorcall (method=<optimized out>, args=0x555555b41890 <_PyRuntime+114512>,
nargsf=<optimized out>, kwnames=0x0) at Objects/classobject.c:72
#20 0x00005555558f0221 in thread_run (boot_raw=0x555555c07040) at ./Modules/_threadmodule.c:353
#21 0x000055555586c03b in pythread_wrapper (arg=<optimized out>) at Python/thread_pthread.h:242
#22 0x00007ffff7d32ac3 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#23 0x00007ffff7dc4850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
```
Segfault backtrace 2 (clang, debug, ASAN):
```
Thread 457 "Thread-456 (cal" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fff5f1af640 (LWP 1109867)]
PyBytes_AS_STRING (op=0x0) at ./Include/cpython/bytesobject.h:25
25 return _PyBytes_CAST(op)->ob_sval;
#0 PyBytes_AS_STRING (op=0x0) at ./Include/cpython/bytesobject.h:25
#1 bytesiobuf_getbuffer (op=op@entry=<_io._BytesIOBuffer at remote 0x7fffba0e01f0>, view=<optimized out>,
flags=284) at ./Modules/_io/bytesio.c:1097
#2 0x00005555559dfed1 in PyObject_GetBuffer (obj=obj@entry=<_io._BytesIOBuffer at remote 0x7fffba0e01f0>,
view=view@entry=0x7fffba170400, flags=0, flags@entry=284) at Objects/abstract.c:445
#3 0x0000555555b69e59 in _PyManagedBuffer_FromObject (
base=base@entry=<_io._BytesIOBuffer at remote 0x7fffba0e01f0>, flags=flags@entry=284)
at Objects/memoryobject.c:97
#4 0x0000555555b65759 in PyMemoryView_FromObjectAndFlags (
v=<_io._BytesIOBuffer at remote 0x7fffba0e01f0>, flags=284) at Objects/memoryobject.c:813
#5 0x0000555556091f8d in _io_BytesIO_getbuffer_impl (self=0x7fffb4afccd0, cls=<optimized out>)
at ./Modules/_io/bytesio.c:337
#6 _io_BytesIO_getbuffer (self=<_io.BytesIO at remote 0x7fffb4afccd0>, cls=<optimized out>,
args=<optimized out>, nargs=<optimized out>, kwnames=<optimized out>)
at ./Modules/_io/clinic/bytesio.c.h:103
#7 0x0000555555a4316b in _PyObject_VectorcallTstate (tstate=0x5290011b2210,
callable=<method_descriptor at remote 0x7fffb45560c0>, args=0x7fffba170400, nargsf=0,
kwnames=<unknown at remote 0xffff684f6ed>) at ./Include/internal/pycore_call.h:169
#8 0x0000555555dfb5aa in _PyEval_EvalFrameDefault (tstate=<optimized out>, frame=<optimized out>,
throwflag=<optimized out>) at Python/generated_cases.c.h:3850
#9 0x0000555555ddcb03 in _PyEval_EvalFrame (tstate=0x5290011b2210, frame=0x529001176328, throwflag=0)
at ./Include/internal/pycore_ceval.h:119
#10 _PyEval_Vector (tstate=0x5290011b2210, func=0x7fffb4a9a110, locals=0x0, args=<optimized out>,
argcount=1, kwnames=0x0) at Python/ceval.c:1913
#11 0x0000555555a4f30b in _PyObject_VectorcallTstate (tstate=0x5290011b2210,
callable=<function at remote 0x7fffb4a9a110>, args=0x7fffba170400, nargsf=0, nargsf@entry=1,
kwnames=<unknown at remote 0xffff684f6ed>, kwnames@entry=0x0) at ./Include/internal/pycore_call.h:169
#12 0x0000555555a4cb9f in method_vectorcall (method=<optimized out>, args=<optimized out>,
nargsf=<optimized out>, kwnames=<optimized out>) at Objects/classobject.c:72
#13 0x0000555555e7f6cb in _PyObject_VectorcallTstate (tstate=0x5290011b2210,
callable=<method at remote 0x7fffba0c00d0>, args=0x7fff5f1ae6d8, nargsf=0, kwnames=0x0)
at ./Include/internal/pycore_call.h:169
#14 context_run (self=<_contextvars.Context at remote 0x7fffb4c89070>, args=<optimized out>,
nargs=<optimized out>, kwnames=0x0) at Python/context.c:728
#15 0x0000555555e09a25 in _PyEval_EvalFrameDefault (tstate=<optimized out>, frame=<optimized out>,
throwflag=<optimized out>) at Python/generated_cases.c.h:3521
#16 0x0000555555ddcb03 in _PyEval_EvalFrame (tstate=0x5290011b2210, frame=0x529001176220, throwflag=0)
at ./Include/internal/pycore_ceval.h:119
#17 _PyEval_Vector (tstate=0x5290011b2210, func=0x7fffb4a9a1f0, locals=0x0, args=<optimized out>,
argcount=1, kwnames=0x0) at Python/ceval.c:1913
#18 0x0000555555a4f30b in _PyObject_VectorcallTstate (tstate=0x5290011b2210,
callable=<function at remote 0x7fffb4a9a1f0>, args=0x7fffba170400, nargsf=0, nargsf@entry=1,
kwnames=<unknown at remote 0xffff684f6ed>, kwnames@entry=0x0) at ./Include/internal/pycore_call.h:169
#19 0x0000555555a4cb9f in method_vectorcall (method=<optimized out>, args=<optimized out>,
nargsf=<optimized out>, kwnames=<optimized out>) at Objects/classobject.c:72
#20 0x000055555612d7ae in thread_run (boot_raw=boot_raw@entry=0x507000010b80)
at ./Modules/_threadmodule.c:353
#21 0x0000555555fe705d in pythread_wrapper (arg=<optimized out>) at Python/thread_pthread.h:242
#22 0x000055555585cd47 in asan_thread_start(void*) ()
#23 0x00007ffff7cfeac3 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#24 0x00007ffff7d90850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
```
Segfault backtrace 3 (this one only happened with code similar to the MRE, not from this exact code):
```
Thread 562 "Thread-561 (get" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0xfffebe06f100 (LWP 2600253)]
_Py_REFCNT (ob=0x0) at ./Include/refcount.h:109
109 uint32_t local = _Py_atomic_load_uint32_relaxed(&ob->ob_ref_local);
Received signal 11, stopping
(gdb) bt
#0 _Py_REFCNT (ob=0x0) at ./Include/refcount.h:109
#1 bytesiobuf_getbuffer (op=<_io._BytesIOBuffer at remote 0x200100a0060>, view=0x200101200e0, flags=284) at ./Modules/_io/bytesio.c:1090
#2 0x0000000000489e80 in PyObject_GetBuffer (obj=obj@entry=<_io._BytesIOBuffer at remote 0x200100a0060>, view=view@entry=0x200101200e0, flags=flags@entry=284)
at Objects/abstract.c:445
#3 0x00000000005499bc in _PyManagedBuffer_FromObject (base=base@entry=<_io._BytesIOBuffer at remote 0x200100a0060>, flags=flags@entry=284)
at Objects/memoryobject.c:97
#4 0x0000000000549abc in PyMemoryView_FromObjectAndFlags (v=v@entry=<_io._BytesIOBuffer at remote 0x200100a0060>, flags=flags@entry=284)
at Objects/memoryobject.c:813
#5 0x000000000054bf1c in PyMemoryView_FromObject (v=v@entry=<_io._BytesIOBuffer at remote 0x200100a0060>) at Objects/memoryobject.c:856
#6 0x00000000007f3794 in _io_BytesIO_getbuffer_impl (self=0x2000691c790, cls=<optimized out>) at ./Modules/_io/bytesio.c:337
#7 0x00000000007f393c in _io_BytesIO_getbuffer (self=<optimized out>, cls=<optimized out>, args=<optimized out>, nargs=<optimized out>, kwnames=<optimized out>)
at ./Modules/_io/clinic/bytesio.c.h:103
#8 0x000000000054deac in cfunction_vectorcall_FASTCALL_KEYWORDS_METHOD (func=<builtin_method at remote 0x2000691e5c0>, args=0xb2d058 <_PyRuntime+130520>,
nargsf=<optimized out>, kwnames=0x0) at Objects/methodobject.c:486
#9 0x00000000004b994c in _PyVectorcall_Call (tstate=tstate@entry=0xc3d0d0, func=0x54dce0 <cfunction_vectorcall_FASTCALL_KEYWORDS_METHOD>,
callable=callable@entry=<builtin_method at remote 0x2000691e5c0>, tuple=tuple@entry=(), kwargs=kwargs@entry={}) at Objects/call.c:273
#10 0x00000000004b9d30 in _PyObject_Call (tstate=0xc3d0d0, callable=callable@entry=<builtin_method at remote 0x2000691e5c0>, args=(), kwargs={}) at Objects/call.c:348
#11 0x00000000004b9d8c in PyObject_Call (callable=callable@entry=<builtin_method at remote 0x2000691e5c0>, args=<optimized out>, kwargs=<optimized out>)
at Objects/call.c:373
#12 0x00000000006a2f20 in _PyEval_EvalFrameDefault (tstate=tstate@entry=0xc3d0d0, frame=frame@entry=0xffffea6fb128, throwflag=throwflag@entry=0)
at Python/generated_cases.c.h:2449
#13 0x00000000006d13ec in _PyEval_EvalFrame (throwflag=0, frame=0xffffea6fb128, tstate=0xc3d0d0) at ./Include/internal/pycore_ceval.h:119
#14 _PyEval_Vector (tstate=0xc3d0d0, func=0x20002ada490, locals=locals@entry=0x0, args=0xfffebe06e268, argcount=1, kwnames=0x0) at Python/ceval.c:1913
#15 0x00000000004b6e24 in _PyFunction_Vectorcall (func=<optimized out>, stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>) at Objects/call.c:413
#16 0x00000000004bc7d8 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=1, args=0xfffebe06e268, callable=<function at remote 0x20002ada490>, tstate=0xc3d0d0)
at ./Include/internal/pycore_call.h:169
#17 method_vectorcall (method=<optimized out>, args=<optimized out>, nargsf=<optimized out>, kwnames=0x0) at Objects/classobject.c:72
#18 0x00000000006f5c70 in _PyObject_VectorcallTstate (tstate=tstate@entry=0xc3d0d0, callable=<method at remote 0x200100300d0>, args=args@entry=0xfffebe06e538,
nargsf=nargsf@entry=0, kwnames=kwnames@entry=0x0) at ./Include/internal/pycore_call.h:169
#19 0x00000000006f739c in context_run (self=<_contextvars.Context at remote 0x200054c61f0>, args=0xfffebe06e530, nargs=1, kwnames=0x0) at Python/context.c:728
#20 0x00000000006a7b30 in _PyEval_EvalFrameDefault (tstate=tstate@entry=0xc3d0d0, frame=0xffffea6fb098, frame@entry=0xffffea6fb020, throwflag=throwflag@entry=0)
at Python/generated_cases.c.h:3509
#21 0x00000000006d13ec in _PyEval_EvalFrame (throwflag=0, frame=0xffffea6fb020, tstate=0xc3d0d0) at ./Include/internal/pycore_ceval.h:119
#22 _PyEval_Vector (tstate=0xc3d0d0, func=0x20002ada570, locals=locals@entry=0x0, args=0xfffebe06e718, argcount=1, kwnames=0x0) at Python/ceval.c:1913
#23 0x00000000004b6e24 in _PyFunction_Vectorcall (func=<optimized out>, stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>) at Objects/call.c:413
#24 0x00000000004bc7d8 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=1, args=0xfffebe06e718, callable=<function at remote 0x20002ada570>, tstate=0xc3d0d0)
at ./Include/internal/pycore_call.h:169
```
Segfault backtrace 4:
```
Thread 612 "Thread-611 (cal" received signal SIGSEGV, Segmentation fault.
0x0000555556093f78 in _Py_atomic_load_uint32_relaxed (obj=0xc) at ./Include/cpython/pyatomic_gcc.h:367
367 { return __atomic_load_n(obj, __ATOMIC_RELAXED); }
#0 0x0000555556093f78 in _Py_atomic_load_uint32_relaxed (obj=0xc) at ./Include/cpython/pyatomic_gcc.h:367
#1 _Py_REFCNT (ob=0x0) at ./Include/refcount.h:109
#2 unshare_buffer (self=self@entry=0x7fffb4afc640, size=0) at ./Modules/_io/bytesio.c:115
#3 0x0000555556094b0a in bytesiobuf_getbuffer (op=<_io._BytesIOBuffer at remote 0x7fffce3601f0>,
view=0x7fffce2c0400, flags=284) at ./Modules/_io/bytesio.c:1091
#4 0x00005555559dfed1 in PyObject_GetBuffer (obj=obj@entry=<_io._BytesIOBuffer at remote 0x7fffce3601f0>,
view=0x0, view@entry=0x7fffce2c0400, flags=flags@entry=284) at Objects/abstract.c:445
#5 0x0000555555b69e59 in _PyManagedBuffer_FromObject (
base=base@entry=<_io._BytesIOBuffer at remote 0x7fffce3601f0>, flags=flags@entry=284)
at Objects/memoryobject.c:97
#6 0x0000555555b65759 in PyMemoryView_FromObjectAndFlags (
v=<_io._BytesIOBuffer at remote 0x7fffce3601f0>, flags=284) at Objects/memoryobject.c:813
#7 0x0000555556091f8d in _io_BytesIO_getbuffer_impl (self=0x7fffb4afc640, cls=<optimized out>)
at ./Modules/_io/bytesio.c:337
#8 _io_BytesIO_getbuffer (self=<_io.BytesIO at remote 0x7fffb4afc640>, cls=<optimized out>,
args=<optimized out>, nargs=<optimized out>, kwnames=<optimized out>)
at ./Modules/_io/clinic/bytesio.c.h:103
#9 0x0000555555a4316b in _PyObject_VectorcallTstate (tstate=0x5290017ca210,
callable=<method_descriptor at remote 0x7fffb45560c0>, args=0x0, nargsf=284,
kwnames=<unknown at remote 0xffff684f6ed>) at ./Include/internal/pycore_call.h:169
#10 0x0000555555dfb5aa in _PyEval_EvalFrameDefault (tstate=<optimized out>, frame=<optimized out>,
throwflag=<optimized out>) at Python/generated_cases.c.h:3850
#11 0x0000555555ddcb03 in _PyEval_EvalFrame (tstate=0x5290017ca210, frame=0x529001810328, throwflag=0)
at ./Include/internal/pycore_ceval.h:119
#12 _PyEval_Vector (tstate=0x5290017ca210, func=0x7fffb4a9a110, locals=0x0, args=<optimized out>,
argcount=1, kwnames=0x0) at Python/ceval.c:1913
#13 0x0000555555a4f30b in _PyObject_VectorcallTstate (tstate=0x5290017ca210,
callable=<function at remote 0x7fffb4a9a110>, args=0x0, nargsf=284, nargsf@entry=1,
kwnames=<unknown at remote 0xffff684f6ed>, kwnames@entry=0x0) at ./Include/internal/pycore_call.h:169
#14 0x0000555555a4cb9f in method_vectorcall (method=<optimized out>, args=<optimized out>,
nargsf=<optimized out>, kwnames=<optimized out>) at Objects/classobject.c:72
#15 0x0000555555e7f6cb in _PyObject_VectorcallTstate (tstate=0x5290017ca210,
callable=<method at remote 0x7fffce042410>, args=0x7fff5ecb56d8, nargsf=0, kwnames=0x0)
at ./Include/internal/pycore_call.h:169
#16 context_run (self=<_contextvars.Context at remote 0x7fffb4c78dd0>, args=<optimized out>,
nargs=<optimized out>, kwnames=0x0) at Python/context.c:728
#17 0x0000555555e09a25 in _PyEval_EvalFrameDefault (tstate=<optimized out>, frame=<optimized out>,
throwflag=<optimized out>) at Python/generated_cases.c.h:3521
#18 0x0000555555ddcb03 in _PyEval_EvalFrame (tstate=0x5290017ca210, frame=0x529001810220, throwflag=0)
at ./Include/internal/pycore_ceval.h:119
#19 _PyEval_Vector (tstate=0x5290017ca210, func=0x7fffb4a9a1f0, locals=0x0, args=<optimized out>,
argcount=1, kwnames=0x0) at Python/ceval.c:1913
#20 0x0000555555a4f30b in _PyObject_VectorcallTstate (tstate=0x5290017ca210,
callable=<function at remote 0x7fffb4a9a1f0>, args=0x0, nargsf=284, nargsf@entry=1,
kwnames=<unknown at remote 0xffff684f6ed>, kwnames@entry=0x0) at ./Include/internal/pycore_call.h:169
#21 0x0000555555a4cb9f in method_vectorcall (method=<optimized out>, args=<optimized out>,
nargsf=<optimized out>, kwnames=<optimized out>) at Objects/classobject.c:72
#22 0x000055555612d7ae in thread_run (boot_raw=boot_raw@entry=0x507000014f50)
at ./Modules/_threadmodule.c:353
#23 0x0000555555fe705d in pythread_wrapper (arg=<optimized out>) at Python/thread_pthread.h:242
#24 0x000055555585cd47 in asan_thread_start(void*) ()
#25 0x00007ffff7cfeac3 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#26 0x00007ffff7d90850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
```
Abort backtrace:
```
python: ./Modules/_io/bytesio.c:116: int unshare_buffer(bytesio *, size_t): Assertion `self->exports == 0' failed.
[New Thread 0x7fff6e740640 (LWP 999886)]
[New Thread 0x7fff70553640 (LWP 999887)]
[Thread 0x7fff7347f640 (LWP 999882) exited]
Thread 1349 "Thread-1348 (ca" received signal SIGABRT, Aborted.
[Switching to Thread 0x7fff72c7e640 (LWP 999883)]
__pthread_kill_implementation (no_tid=0, signo=6, threadid=140735119091264) at ./nptl/pthread_kill.c:44
44 ./nptl/pthread_kill.c: No such file or directory.
(gdb) bt
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140735119091264) at ./nptl/pthread_kill.c:44
#1 __pthread_kill_internal (signo=6, threadid=140735119091264) at ./nptl/pthread_kill.c:78
#2 __GI___pthread_kill (threadid=140735119091264, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3 0x00007ffff7cac476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4 0x00007ffff7c927f3 in __GI_abort () at ./stdlib/abort.c:79
#5 0x00007ffff7c9271b in __assert_fail_base (
fmt=0x7ffff7e47130 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
assertion=0x555556388a20 <str> "self->exports == 0",
file=0x555556387300 <str> "./Modules/_io/bytesio.c", line=116, function=<optimized out>)
at ./assert/assert.c:94
#6 0x00007ffff7ca3e96 in __GI___assert_fail (assertion=0x555556388a20 <str> "self->exports == 0",
file=0x555556387300 <str> "./Modules/_io/bytesio.c", line=line@entry=116,
function=0x555556388ae0 <__PRETTY_FUNCTION__.unshare_buffer> "int unshare_buffer(bytesio *, size_t)")
at ./assert/assert.c:103
#7 0x00005555560941df in unshare_buffer (self=self@entry=0x7fffb46ca340, size=<optimized out>)
at ./Modules/_io/bytesio.c:116
#8 0x0000555556094b0a in bytesiobuf_getbuffer (op=<_io._BytesIOBuffer at remote 0x7fffbc0f0240>,
view=0x7fffbc1804a0, flags=284) at ./Modules/_io/bytesio.c:1091
#9 0x00005555559dfed1 in PyObject_GetBuffer (obj=obj@entry=<_io._BytesIOBuffer at remote 0x7fffbc0f0240>,
view=0xf41cb, view@entry=0x7fffbc1804a0, flags=6, flags@entry=284) at Objects/abstract.c:445
#10 0x0000555555b69e59 in _PyManagedBuffer_FromObject (
base=base@entry=<_io._BytesIOBuffer at remote 0x7fffbc0f0240>, flags=flags@entry=284)
at Objects/memoryobject.c:97
#11 0x0000555555b65759 in PyMemoryView_FromObjectAndFlags (
v=<_io._BytesIOBuffer at remote 0x7fffbc0f0240>, flags=284) at Objects/memoryobject.c:813
#12 0x0000555556091f8d in _io_BytesIO_getbuffer_impl (self=0x7fffb46ca340, cls=<optimized out>)
at ./Modules/_io/bytesio.c:337
#13 _io_BytesIO_getbuffer (self=<_io.BytesIO at remote 0x7fffb46ca340>, cls=<optimized out>,
args=<optimized out>, nargs=<optimized out>, kwnames=<optimized out>)
at ./Modules/_io/clinic/bytesio.c.h:103
#14 0x0000555555a4316b in _PyObject_VectorcallTstate (tstate=0x5290034c6210,
callable=<method_descriptor at remote 0x7fffb45560c0>, args=0xf41cb, nargsf=6,
kwnames=<unknown at remote 0x7fff72c7bd30>) at ./Include/internal/pycore_call.h:169
#15 0x0000555555dfb5aa in _PyEval_EvalFrameDefault (tstate=<optimized out>, frame=<optimized out>,
throwflag=<optimized out>) at Python/generated_cases.c.h:3850
#16 0x0000555555ddcb03 in _PyEval_EvalFrame (tstate=0x5290034c6210, frame=0x52900348f328, throwflag=0)
at ./Include/internal/pycore_ceval.h:119
#17 _PyEval_Vector (tstate=0x5290034c6210, func=0x7fffb4a9a110, locals=0x0, args=<optimized out>,
argcount=1, kwnames=0x0) at Python/ceval.c:1913
#18 0x0000555555a4f30b in _PyObject_VectorcallTstate (tstate=0x5290034c6210,
callable=<function at remote 0x7fffb4a9a110>, args=0xf41cb, nargsf=6, nargsf@entry=1,
kwnames=<unknown at remote 0x7fff72c7bd30>, kwnames@entry=0x0) at ./Include/internal/pycore_call.h:169
#19 0x0000555555a4cb9f in method_vectorcall (method=<optimized out>, args=<optimized out>,
nargsf=<optimized out>, kwnames=<optimized out>) at Objects/classobject.c:72
#20 0x0000555555e7f6cb in _PyObject_VectorcallTstate (tstate=0x5290034c6210,
callable=<method at remote 0x7fffbc0600d0>, args=0x7fff72c7d6d8, nargsf=0, kwnames=0x0)
at ./Include/internal/pycore_call.h:169
#21 context_run (self=<_contextvars.Context at remote 0x7fffb46773f0>, args=<optimized out>,
nargs=<optimized out>, kwnames=0x0) at Python/context.c:728
#22 0x0000555555e09a25 in _PyEval_EvalFrameDefault (tstate=<optimized out>, frame=<optimized out>,
throwflag=<optimized out>) at Python/generated_cases.c.h:3521
#23 0x0000555555ddcb03 in _PyEval_EvalFrame (tstate=0x5290034c6210, frame=0x52900348f220, throwflag=0)
at ./Include/internal/pycore_ceval.h:119
#24 _PyEval_Vector (tstate=0x5290034c6210, func=0x7fffb4a9a1f0, locals=0x0, args=<optimized out>,
argcount=1, kwnames=0x0) at Python/ceval.c:1913
#25 0x0000555555a4f30b in _PyObject_VectorcallTstate (tstate=0x5290034c6210,
callable=<function at remote 0x7fffb4a9a1f0>, args=0xf41cb, nargsf=6, nargsf@entry=1,
kwnames=<unknown at remote 0x7fff72c7bd30>, kwnames@entry=0x0) at ./Include/internal/pycore_call.h:169
#26 0x0000555555a4cb9f in method_vectorcall (method=<optimized out>, args=<optimized out>,
nargsf=<optimized out>, kwnames=<optimized out>) at Objects/classobject.c:72
#27 0x000055555612d7ae in thread_run (boot_raw=boot_raw@entry=0x5070000291c0)
at ./Modules/_threadmodule.c:353
#28 0x0000555555fe705d in pythread_wrapper (arg=<optimized out>) at Python/thread_pthread.h:242
#29 0x000055555585cd47 in asan_thread_start(void*) ()
#30 0x00007ffff7cfeac3 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#31 0x00007ffff7d90850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
```
Rare error message:
```
Exception ignored while finalizing file <_io.BytesIO object at 0x7fffb46b7f60>:
BufferError: Existing exports of data: object cannot be re-sized
Exception ignored in the internal traceback machinery:
ImportError: sys.meta_path is None, Python is likely shutting down
SystemError: deallocated BytesIO object has exported buffers
```
Found using [fusil](https://github.com/devdanzin/fusil) by @vstinner.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a7+ experimental free-threading build (heads/main:c7f6535e4a3, Apr 15 2025, 09:09:58) [GCC 11.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-132616
* gh-137073
<!-- /gh-linked-prs -->
| 5dd3a3a58cca4798ebfc2edd673211067453e81e | bd7c5859c6c4f23877afdf6ab7b8209de50127a6 |
python/cpython | python__cpython-132701 | # threading.Thread.native_id for forking thread wrong after fork
# Bug report
### Bug description:
On Linux, native thread IDs are unique across processes. This means that the native thread ID of a forking thread necessarily changes at forking. The threading module initializes the thread ID only when the thread is started. Hence, after forking, the old thread ID is retained, i.e. it becomes wrong for the forked process.
```python
import threading, os
print(os.getpid(), threading.current_thread(), threading.current_thread().native_id,
os.listdir(f"/proc/{os.getpid()}/task"), flush=True)
res = os.fork()
kind = "c" if res == 0 else "p"
print(os.getpid(), kind, threading.current_thread(), threading.current_thread().native_id,
os.listdir(f"/proc/{os.getpid()}/task"), flush=True)
```
This prints for example:
```
$ python3.13 ~/p/fork.py
227148 <_MainThread(MainThread, started 139735605239936)> 227148 ['227148']
227148 p <_MainThread(MainThread, started 139735605239936)> 227148 ['227148']
227150 c <_MainThread(MainThread, started 139735605239936)> 227148 ['227150']
```
Notice that the child (c) still prints the same native thread ID 227148 as the parent, while in fact the thread ID has changed to 227150 and the old native thread ID is not present at all in the child process.
This is not specific to the main thread though, as the following variant of the sample demonstrates:
```python
import threading, os
def run():
print(os.getpid(), "t", threading.current_thread(), threading.current_thread().native_id,
os.listdir(f"/proc/{os.getpid()}/task"), flush=True)
res = os.fork()
kind = "c" if res == 0 else "p"
print(os.getpid(), kind, threading.current_thread(), threading.current_thread().native_id,
os.listdir(f"/proc/{os.getpid()}/task"), flush=True)
print(os.getpid(), "m", threading.current_thread(), threading.current_thread().native_id,
os.listdir(f"/proc/{os.getpid()}/task"), flush=True)
th = threading.Thread(target=run)
th.start()
th.join()
```
This multi-threaded sample prints for example:
```
230968 m <_MainThread(MainThread, started 140596997587072)> 230968 ['230968']
230968 t <Thread(Thread-1 (run), started 140596994700992)> 230969 ['230968', '230969']
230970 c <Thread(Thread-1 (run), started 140596994700992)> 230969 ['230970']
/home/labuser/p/fork.py:6: DeprecationWarning: This process (pid=230968) is multi-threaded, use of fork() may lead to deadlocks in the child.
res = os.fork()
230968 p <Thread(Thread-1 (run), started 140596994700992)> 230969 ['230968', '230969']
```
This bug was previously seen in #82888 with the multiprocessing module, but it was only worked around in #17088 for the multiprocessing module itself, and it is still present when using any other way of forking.
### CPython versions tested on:
3.13, 3.8
### Operating systems tested on:
Linux
### Proposed Solution
This could be solved with an atfork_child handler, if you don't care about the ID being wrong in earlier atfork_child handlers. Safer but more complex would be marking the thread as forking in an atfork-prepare handler and then re-querying the native thread ID on every access until the atfork-child/parent handler unmarks the thread.
The native thread ID in the C API Thread state might also require a fix, I did not check it.
<!-- gh-linked-prs -->
### Linked PRs
* gh-132701
* gh-134356
* gh-134361
* gh-134408
* gh-134413
* gh-134414
<!-- /gh-linked-prs -->
| 6b735023132a4ac9dc5b849d982104eeb1e8bdad | 86397cf65d024a39ae85b81d7f611ad4864a78b4 |
python/cpython | python__cpython-132537 | # PY_THROW event can't be turned off for pdb's monitoring backend
# Bug report
### Bug description:
```python
def gen():
yield 1
def f():
breakpoint()
g = gen()
try:
g.throw(TypeError)
except TypeError:
pass # line 10
f()
```
```
b 10
c
```
```
Traceback (most recent call last):
File "/home/gaogaotiantian/programs/mycpython/scrabble.py", line 12, in <module>
f()
~^^
File "/home/gaogaotiantian/programs/mycpython/scrabble.py", line 8, in f
g.throw(TypeError)
~~~~~~~^^^^^^^^^^^
File "/home/gaogaotiantian/programs/mycpython/scrabble.py", line 1, in gen
def gen():
ValueError: Cannot disable PY_THROW events. Callback removed.
```
`PY_THROW` can't be disabled.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-132537
<!-- /gh-linked-prs -->
| d19af00b90d94cd987293c479b8c3ef698bf7d3e | 4f10b93d1b2f887b42ad59168a9fcbe75bdaaf87 |
python/cpython | python__cpython-132572 | # test_timeout fails with "ENV CHANGED"
# Bug report
### Bug description:
When upgrading Python 3.13 for openSUSE, it suddenly fails with `== Tests result: ENV CHANGED then ENV CHANGED ==`. Despite the error, I have not been able to collect any more reasonable error message in [our build log](https://github.com/user-attachments/files/19744515/_log.txt) (not even **which environment variable** was changed):
```python
[ 807s] 0:01:44 load avg: 10.27 Re-running 1 failed tests in verbose mode in subprocesses
[ 807s] 0:01:44 load avg: 10.27 Run 1 test in parallel using 1 worker process (timeout: 1 hour 30 min, worker timeout: 1 hour 35 min)
[ 808s] 0:01:46 load avg: 9.53 [1/1/1] test_timeout failed (env changed)
[ 808s] Re-running test_timeout in verbose mode (matching: )
[ 808s] testBlockingThenTimeout (test.test_timeout.CreationTestCase.testBlockingThenTimeout) ... ok
[ 808s] testFloatReturnValue (test.test_timeout.CreationTestCase.testFloatReturnValue) ... ok
[ 808s] testObjectCreation (test.test_timeout.CreationTestCase.testObjectCreation) ... ok
[ 808s] testRangeCheck (test.test_timeout.CreationTestCase.testRangeCheck) ... ok
[ 808s] testReturnType (test.test_timeout.CreationTestCase.testReturnType) ... ok
[ 808s] testTimeoutThenBlocking (test.test_timeout.CreationTestCase.testTimeoutThenBlocking) ... ok
[ 808s] testTypeCheck (test.test_timeout.CreationTestCase.testTypeCheck) ... ok
[ 808s] testAcceptTimeout (test.test_timeout.TCPTimeoutTestCase.testAcceptTimeout) ... skipped "Resource 'www.python.org.' is not available"
[ 808s] testConnectTimeout (test.test_timeout.TCPTimeoutTestCase.testConnectTimeout) ... skipped "Resource 'www.python.org.' is not available"
[ 808s] testRecvTimeout (test.test_timeout.TCPTimeoutTestCase.testRecvTimeout) ... skipped "Resource 'www.python.org.' is not available"
[ 808s] testSend (test.test_timeout.TCPTimeoutTestCase.testSend) ... skipped "Resource 'www.python.org.' is not available"
[ 808s] testSendall (test.test_timeout.TCPTimeoutTestCase.testSendall) ... skipped "Resource 'www.python.org.' is not available"
[ 808s] testSendto (test.test_timeout.TCPTimeoutTestCase.testSendto) ... skipped "Resource 'www.python.org.' is not available"
[ 808s] testRecvfromTimeout (test.test_timeout.UDPTimeoutTestCase.testRecvfromTimeout) ... ok
```
### CPython versions tested on:
3.13
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-132572
* gh-132580
<!-- /gh-linked-prs -->
| 82f74eb2344cdb3197c726d1216e413ee61a30b3 | 4b15d105a2b954cce2d97646ebf9e672d3a23b45 |
python/cpython | python__cpython-132529 | # Outdated error message when passing an invalid typecode to `array.array` constructor
# Bug report
### Bug description:
This issue is suitable for a first-time contributor.
The `'w'` typecode is [supported](https://docs.python.org/3.14/library/array.html#module-array) but the error message does not mention it:
```python
>>> array('x')
Traceback (most recent call last):
File "<python-input-5>", line 1, in <module>
array('x')
~~~~~^^^^^
ValueError: bad typecode (must be b, B, u, h, H, i, I, l, L, q, Q, f or d)
```
We should add `'w'` to the list. This is the relevant part of the code that should be updated:
https://github.com/python/cpython/blob/61638418a7306723fedf88389c9c5aa540dfb809/Modules/arraymodule.c#L2875-L2877
Happy to help if you have any questions!
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-132529
* gh-132587
* gh-132938
<!-- /gh-linked-prs -->
| 52454c5d59c50147233f4229497a655e6e8c8408 | eb2e430b88afa93e7bfc05f4346e8336c2c31b48 |
python/cpython | python__cpython-132516 | # `test_dataclass_derived_generic_from_slotted_base` is duplicated
# Bug report
### Bug description:
`test_dataclass_derived_generic_from_slotted_base` is duplicated (regression introduced in fa9b9cb11379806843ae03b1e4ad4ccd95a63c02):
```py
def test_dataclass_derived_generic_from_slotted_base(self):
T = typing.TypeVar('T')
class WithSlots:
__slots__ = ('a', 'b')
@dataclass(slots=True, weakref_slot=True)
class E1(WithSlots, Generic[T]):
pass
...
def test_dataclass_derived_generic_from_slotted_base(self):
T = typing.TypeVar('T')
class WithWeakrefSlot:
__slots__ = ('__weakref__',)
@dataclass(slots=True, weakref_slot=True)
class G1(WithWeakrefSlot, Generic[T]):
pass
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Other, Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-132516
* gh-132518
<!-- /gh-linked-prs -->
| 45c447bf91ffabe4c0ba6d18f37d4e58925d5c91 | 4865c09cf3055de2f69abda27c9e971b0fcf98bd |
python/cpython | python__cpython-132545 | # Exception unwinding might be broken when handling a memory error.
# Bug report
### Bug description:
Exception unwinding can sometimes push an integer to the stack to support re-raising an exception from an earlier position. Creating this integer might need memory allocation and could fail when handling a memory error.
This is incredibly unlikely to ever happen, but we can fix it using tagged integers, which do not require any allocation, so we might as well.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-132545
<!-- /gh-linked-prs -->
| ccf1b0b1c18e6d00fb919bce107f2793bab0a471 | caee16f05229de5bc5ed2743c531f1696641888a |
python/cpython | python__cpython-132492 | # annotationlib: Rename value_to_string to type_repr
See https://github.com/python/typing_extensions/issues/544
<!-- gh-linked-prs -->
### Linked PRs
* gh-132492
<!-- /gh-linked-prs -->
| 11f66038453dff51ba4ee80460e30acf276d472a | 5e80fee41a61bbb39c6054c5a5d82dc80b1adf8a |
python/cpython | python__cpython-132778 | # SystemError: compiler_lookup_arg(name='name_1') with reftype=7 failed in <genexpr>
# Crash report
### What happened?
I found the following issue by fuzzing with pysource-codegen.
This script failed to compile the given code to bytecode:
``` python
code="""
(name_3): name_5
name_4: (
name_4
async for (
name_5
for something in name_1
async for () in (name_0 async for name_5 in name_0 for name_0 in name_1)
).name_3 in {name_5: name_5 for name_1 in name_4}
)
"""
compile(code,"<file>","exec")
```
running it with cpython results in the following error:
```
Traceback (most recent call last):
File "/home/frank/projects/cpython/../pysource-playground/bug.py", line 12, in <module>
compile(code,"<file>","exec")
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
SystemError: compiler_lookup_arg(name='name_1') with reftype=7 failed in <genexpr>; freevars of code <genexpr>: ('name_1',)
```
I bisected the problem down to 9b8611eeea172cd4aa626ccd1ca333dc4093cd8c @JelleZijlstra do you want to take a look at it?
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a0 (bisect/bad:9b8611eeea1, Apr 13 2025, 17:20:02) [GCC 12.2.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-132778
<!-- /gh-linked-prs -->
| 01317bb449612ea1dbbf36e439437909abd79a45 | 08e331d05e694b5ed3265556ea345b9b19a187da |
python/cpython | python__cpython-132475 | # Building a `ctypes.CField` with wrong `byte_size` aborts
# Crash report
### What happened?
The following code will cause an abort due to `byte_size` not matching the size of `ctypes.c_byte`, failing an assertion that `byte_size == info->size`.
```python
import ctypes
ctypes.CField(name="a", type=ctypes.c_byte, byte_size=2, byte_offset=2, index=1, _internal_use=True)
```
Abort message:
```
python: ./Modules/_ctypes/cfield.c:102: PyCField_new_impl: Assertion `byte_size == info->size' failed.
Aborted (core dumped)
```
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a6+ (heads/main:be2d2181e62, Mar 31 2025, 07:30:17) [GCC 11.4.0]
<!-- gh-linked-prs -->
### Linked PRs
* gh-132475
<!-- /gh-linked-prs -->
| 3b4b56f46dbfc0c336a1f70704f127593ec1f4ce | f663b2c56a2eecc258d2abd54ed33836d070e6f5 |
python/cpython | python__cpython-132460 | # Consider making `staticmethod` and `classmethod` generic
# Feature or enhancement
Typeshed defines `staticmethod` and `classmethod` as generics:
1. https://github.com/python/typeshed/blob/f6216ec6230aa51fe7e23afca30a8f5b18ace476/stdlib/builtins.pyi#L137
2. https://github.com/python/typeshed/blob/f6216ec6230aa51fe7e23afca30a8f5b18ace476/stdlib/builtins.pyi#L154
It makes sense, because they are very callable-like. However:
```python
>>> staticmethod[int]
Traceback (most recent call last):
File "<python-input-0>", line 1, in <module>
staticmethod[int]
~~~~~~~~~~~~^^^^^
TypeError: type 'staticmethod' is not subscriptable
>>> classmethod[int]
Traceback (most recent call last):
File "<python-input-1>", line 1, in <module>
classmethod[int]
~~~~~~~~~~~^^^^^
TypeError: type 'classmethod' is not subscriptable
```
We should consider making them generics in runtime as well.
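For illustration, the conventional runtime recipe for making a class subscriptable is a `__class_getitem__` that returns `types.GenericAlias`. This is a hedged sketch on a hypothetical subclass; whether the builtin `staticmethod`/`classmethod` types should implement this (in C) is exactly the open question here.

```python
import types

# Hypothetical subclass demonstrating the usual recipe; the actual change
# would be made on the builtin staticmethod/classmethod types themselves.
class MyStaticmethod(staticmethod):
    __class_getitem__ = classmethod(types.GenericAlias)

alias = MyStaticmethod[int]
print(alias)
```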
@AlexWaygood @JelleZijlstra thoughts?
If you agree, I have a PR ready.
<!-- gh-linked-prs -->
### Linked PRs
* gh-132460
<!-- /gh-linked-prs -->
| a36367520eb3a954c94c71a6b2b64d2542283e38 | c8f233c53b4634b820f2aa0efd45f43d9999aa1e |
python/cpython | python__cpython-132450 | # Improve syntax error messages for keywords with typos
Currently, when users make typos in Python keywords, they receive generic "invalid syntax" error messages without any helpful suggestions about what might be wrong. This creates a frustrating experience, especially for beginners who might not immediately recognize that they've misspelled a keyword.
For example, typing `whille True:` instead of `while True:` would currently result in a generic syntax error without any hint that the problem is a misspelled keyword.
I propose to start raising errors like these:
```
==================================================
Traceback (most recent call last):
File "/home/pablogsal/github/python/main/check.py", line 18, in test_mistyped_program
exec(code_example)
~~~~^^^^^^^^^^^^^^
File "<string>", line 2
asynch def fetch_data():
^^^^^^
SyntaxError: invalid syntax. Did you mean 'async'?
==================================================
Traceback (most recent call last):
File "/home/pablogsal/github/python/main/check.py", line 18, in test_mistyped_program
exec(code_example)
~~~~^^^^^^^^^^^^^^
File "<string>", line 6
result = awaid fetch_data()
^^^^^
SyntaxError: invalid syntax. Did you mean 'await'?
==================================================
finally:
Traceback (most recent call last):
File "/home/pablogsal/github/python/main/check.py", line 18, in test_mistyped_program
exec(code_example)
~~~~^^^^^^^^^^^^^^
File "<string>", line 6
finaly:
^^^^^^
SyntaxError: invalid syntax. Did you mean 'finally'?
==================================================
Traceback (most recent call last):
File "/home/pablogsal/github/python/main/check.py", line 18, in test_mistyped_program
exec(code_example)
~~~~^^^^^^^^^^^^^^
File "<string>", line 4
raisee ValueError("Value cannot be negative")
^^^^^^
SyntaxError: invalid syntax. Did you mean 'raise'?
==================================================
```
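The suggestion logic would presumably live in the C parser, but the idea can be sketched with `difflib` fuzzy-matching a misspelled token against the keyword list (the function name is illustrative, not part of the proposal):

```python
import difflib
import keyword

# Sketch: match a misspelled token against Python's keywords and soft
# keywords, as the proposed error messages would do.
def suggest_keyword(token):
    candidates = keyword.kwlist + keyword.softkwlist
    matches = difflib.get_close_matches(token, candidates, n=1)
    return matches[0] if matches else None

print(suggest_keyword("whille"))  # while
print(suggest_keyword("asynch"))  # async
```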
<!-- gh-linked-prs -->
### Linked PRs
* gh-132450
* gh-132837
* gh-132838
<!-- /gh-linked-prs -->
| bf3a0a1c0f2f63867c537d80cc7163ea8426bf2a | 3cfab449ab1e3c1472d2a33dc3fae3dc06c39f7b |
python/cpython | python__cpython-132440 | # New REPL on Windows swallows characters entered via AltGr
# Bug report
### Bug description:
E.g. on my keyboard with German layout, `{` is usually entered via pressing the [AltGr key](https://en.wikipedia.org/wiki/AltGr_key) and `7`, i.e. `AltGr+7`.
Likewise, `}`, `[`, `]`, `\` and some more can only be entered via `AltGr`.
But since https://github.com/python/cpython/issues/128388 / https://github.com/python/cpython/pull/128389 these are swallowed by the REPL on Windows and can no longer be entered.
On 3.13.3, this happens independent from the terminal mode.
On main, this only happens in legacy Windows terminals, where the virtual terminal mode is turned off (e.g. `cmd.exe`).
In virtual terminal mode there are other issues, see https://github.com/python/cpython/issues/131878.
Many other keyboard layouts (French, Czech, etc.) use the AltGr key too and suffer the same bug (tested by changing my keyboard layout).
### CPython versions tested on:
3.14, 3.13
### Operating systems tested on:
Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-132440
* gh-133460
<!-- /gh-linked-prs -->
| 07f416a3f063db6b91b8b99ff61a51b64b0503f1 | b6c2ef0c7a1e344681f9a8649bb662208ffd01cf |
python/cpython | python__cpython-132436 | # Test syntax warnings emitted in a `finally` block
This is a followup to https://github.com/python/cpython/pull/131993
For context: https://github.com/python/cpython/pull/131993#issuecomment-2798777909
> Please add also a compile() test for a warning emitted in the finally block. Currently it is emitted twice because the code in the finally block is complied twice (for normal execution and for exception handling).
<!-- gh-linked-prs -->
### Linked PRs
* gh-132436
* gh-132503
<!-- /gh-linked-prs -->
| 887eabc5a74316708460120d60d0fa4f8bdf5960 | fc7e4e7bbd5d459b76b96991413569a5b7889fbe |
python/cpython | python__cpython-132431 | # Bluetooth socket support is disabled on NetBSD and DragonFly BSD
Despite the fact that there is code to support Bluetooth sockets on NetBSD and DragonFly BSD, it has not worked since 2010 (see 2501aca6281500ecc31cccdd4d78977c84c9e4f6, 3e85dfd15e36b7d890fb70a6c9edf0b1e6bbff6c) due to an error in conditional compilation. The condition `(defined(HAVE_BLUETOOTH_H) || defined(HAVE_BLUETOOTH_BLUETOOTH_H)) && !defined(__NetBSD__) && !defined(__DragonFly__)` is always false on NetBSD and DragonFly BSD, so `USE_BLUETOOTH` was not defined and the code was omitted during compilation.
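A boolean transcription of the C preprocessor guard makes the mistake visible: the `!defined(__NetBSD__)`/`!defined(__DragonFly__)` terms can never be true on those platforms themselves, so the whole expression is always false there regardless of which headers were found.

```python
# Boolean model of the C guard; True means USE_BLUETOOTH would be defined.
def use_bluetooth(have_bluetooth_h, have_bluetooth_bluetooth_h,
                  is_netbsd, is_dragonfly):
    return ((have_bluetooth_h or have_bluetooth_bluetooth_h)
            and not is_netbsd and not is_dragonfly)

# NetBSD with the bluetooth header available: still disabled.
print(use_bluetooth(False, True, is_netbsd=True, is_dragonfly=False))   # False
# Linux with the header available: enabled.
print(use_bluetooth(True, False, is_netbsd=False, is_dragonfly=False))  # True
```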
cc @gpshead
<!-- gh-linked-prs -->
### Linked PRs
* gh-132431
* gh-132458
* gh-132459
<!-- /gh-linked-prs -->
| f2f86d3f459a89273ea22389bb57eed402908302 | 9634085af3670b1eb654e3c7820aca66f358f39f |
python/cpython | python__cpython-132418 | # ctypes: NULL-dereference when using py_result restype
# Crash report
### What happened?
When using `ctypes` with a function which returns a `PyObject *`, using `restype` `ctypes.py_object`, and the function returns `NULL`, there's a `Py_DECREF` of the result, leading to a segfault:
```python
import ctypes
PyErr_Occurred = ctypes.pythonapi.PyErr_Occurred
PyErr_Occurred.argtypes = []
PyErr_Occurred.restype = ctypes.py_object
PyErr_Occurred()
```
The issue lies in `GetResult`: when `O_get` is the result handler, it calls `Py_DECREF` on the result value, even though that can be `NULL`. `O_get` handles the `NULL` case correctly, setting an exception if none was set, and passing on the `NULL`. Code after the `Py_DECREF` in `GetResult` also handles the `NULL` case correctly, so changing the `Py_DECREF` into a `Py_XDECREF` makes things work.
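The NULL-safety difference between the two macros can be modeled in Python, with `None` standing in for a `NULL` `PyObject *` (the function names are illustrative; the real macros are part of the CPython C API):

```python
# Model of the two refcount macros' behavior on a NULL pointer.
def py_decref(obj):
    obj["refcnt"] -= 1        # unconditional: blows up on None/NULL

def py_xdecref(obj):
    if obj is not None:       # Py_XDECREF checks for NULL first
        obj["refcnt"] -= 1

result = None                 # the foreign function returned NULL
py_xdecref(result)            # the fix: no crash, error handling proceeds
print("survived NULL result")
```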
Pull request incoming.
### CPython versions tested on:
CPython 3.10, CPython 3.13, CPython main branch
### Operating systems tested on:
Linux
### Output from running 'python -VV' on the command line:
Python 3.14.0a7+ (heads/main-dirty:deda47d6e18, Apr 11 2025, 20:18:59) [GCC 14.2.1 20240912 (Red Hat 14.2.1-3)]
<!-- gh-linked-prs -->
### Linked PRs
* gh-132418
* gh-132425
<!-- /gh-linked-prs -->
| 2aab2db1461ef49b42549255af16a74b1bf8a5ef | e0dffc54b888a18ad22e83411c2b615cc94a1fdf |
python/cpython | python__cpython-132400 | # UBSan: undefined behaviours when using `-fsanitize=undefined -fno-sanitize-recover` on free-threaded build
# Bug report
### Bug description:
This issue mirrors #111178 but is dedicated to tracking UB failures for the free-threaded build. There should be only two free-threaded-specific issues and the rest are tracked in #132097.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Linux
<!-- gh-linked-prs -->
### Linked PRs
* gh-132400
* gh-132428
<!-- /gh-linked-prs -->
| a81232c769a4f67ee312c5c67a2148c54c6570d0 | 292a7248cda89f497e06eff4aa0147d6ff22f6bb |
python/cpython | python__cpython-132397 | # Lint `Lib/test` with existing `ruff` configuration
# Feature or enhancement
### Proposal:
There is an existing `.ruff.toml` file in `Lib/test` with some TODO items to fix.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-132397
* gh-132699
<!-- /gh-linked-prs -->
| 1d5dc5f1c37ce28a635386189020cf49b3f7f1c3 | 4c3d187d9f143eee930a88a38b90f4842911b8be |
python/cpython | python__cpython-132391 | # Lint `Tools/build` using existing ruff configuration
# Feature or enhancement
### Proposal:
There is an existing `Tools/build/.ruff.toml` that is for now only used by the `check-warnings.py` script. I've tried to minimize the number of changes: I ran `ruff check Tools/build` and disabled the existing errors so that we can fix them selectively in the future.
Note that @hugovk recently ruffed the tools in #124382 but not all of them.
### Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
### Links to previous discussion of this feature:
_No response_
<!-- gh-linked-prs -->
### Linked PRs
* gh-132391
<!-- /gh-linked-prs -->
| 5d8e432d9ffa8285ca67833a54a07c52c58d79f3 | 246ed23456658f82854d518ad66d52652f992591 |